Hi Jiawei,

Thanks for bringing this up.

AI tools can certainly lower the barrier for users to get started, which is positive. At the same time, we do hope users gradually build a real understanding of BifroMQ and contribute issues grounded in concrete usage scenarios. Otherwise, it may feel like the community is effectively validating or tuning someone’s AI agent rather than improving the project itself.

Introducing a lightweight workflow to filter issues sounds reasonable, and starting by enforcing a clearer issue format (e.g. usage context and reproduction steps) seems like a good and pragmatic first step.

Happy to discuss further.

Best,

On Fri, Jan 9, 2026 at 10:53 PM gujiaweijoe <[email protected]> wrote:

> Hi all,
>
> As BifroMQ continues to gain popularity, we have seen a growing number of
> issues being raised on our GitHub repo by the community, which is great.
>
> At the same time, I’ve noticed that some recently reported issues appear
> to be generated primarily by AI tools rather than based on hands-on usage
> or concrete reproduction efforts, e.g.,
> https://github.com/apache/bifromq/issues/218 and
> https://github.com/apache/bifromq/issues/202.
>
> To help keep the issue tracker focused and effective, I’d like to start a
> discussion about introducing a review workflow for newly submitted issues
> after the first release. The goal would be to encourage reports that are
> grounded in real usage and include clear reproduction steps, and to filter
> out reports that appear to be largely AI-generated without practical
> validation.
>
> Any insights or comments about it?
>
> Best regards,
> Gu Jiawei(gujiaweijoe)

--
Yonny(Yu) Hao
