Hi all,

As BifroMQ continues to gain popularity, we have seen a growing number of 
issues raised by the community on our GitHub repo, which is great to see.

At the same time, I’ve noticed that some recently reported issues appear to be 
generated primarily by AI tools rather than based on hands-on usage or concrete 
reproduction efforts, e.g., https://github.com/apache/bifromq/issues/218 and 
https://github.com/apache/bifromq/issues/202.

To help keep the issue tracker focused and effective, I’d like to start a 
discussion about introducing a review workflow for newly submitted issues after 
the first release. The goal would be to encourage reports that are grounded in 
real usage and include clear reproduction steps, and to filter out reports that 
appear to be largely AI-generated without practical validation.

Any insights or comments on this would be much appreciated.

Best regards,
Gu Jiawei (gujiaweijoe)