linguini1 commented on PR #18340:
URL: https://github.com/apache/nuttx/pull/18340#issuecomment-3843280026

   > Let's summarize how Xiaomi tests internally before we upstream patches 
to the community:
   > 
   >     1. Each patch must pass CT (1000+ test cases) before it can be merged
   > 
   >     2. DT (10000+ test cases) runs every day to find errors not 
caught by CT
   > 
   >     3. Monkey tests run on 100 sim/qemu/device units for more than 24 hours 
without crash/hang
   > 
   >     4. 10000+ manual tests are run by testers before we make an 
official release every two/three months
   > 
   > No test log doesn't mean the patch wasn't tested or verified. Most patches 
Xiaomi upstreams were made in the last year and have already passed the above tests.
   
   This is great. I understand that it's probably frustrating to have to show 
more testing when you upstream to NuttX after such a thorough internal review. 
However, NuttX has its own review requirements. We don't know what test cases 
you ran, in what environments, etc. I would assume a lot of test cases are run 
on a Vela version of NuttX. If your review process is so strict, maybe you 
could incorporate saving some log output from these 10000+ manual tests so 
that you can include it in the NuttX PRs?
   
   > Many bugs only happen in special timing or scenarios like #18266? When 
the error happens there is no log at all, so how can @Otpvondoiatsdo provide you 
a log? Do you print a log after each statement? It's stupid to ask for a log for each 
patch without looking at the change!
   
   I don't understand why it's stupid. If you discover a deadlock issue, there 
must be some scenario in which you encountered it. If it's not reproducible, 
that can be stated in the testing information. In this particular PR, the 
author told me that logs did exist, just that they were too old and had been 
lost. From my perspective, that means you have a way to obtain them, so that's 
what I requested. How am I supposed to read your mind about not being able to 
reproduce something?
   
   Yes, usually when I make a change, I have some kind of output that lets me 
verify that it worked and solved the problem. Sorry if that's not part of your 
process.
   
   > All patches from the Xiaomi team are already reviewed by me over many rounds 
before they can be merged into our internal repo; that's why I just approve them directly.
   
   Cool, then you are not following the review process from the contributing 
guidelines. It doesn't matter how many times you've reviewed them internally; 
they must still follow the contributing guidelines.
   
   > Please do constructive review, like @acassis @jerpelea (e.g. improve 
the PR organization, fix the commit message, or add the documentation).
   
   I am! I politely ask for test logs, as per the contributing guidelines, to 
make sure that all changes are appropriately tested!
   
   > BTW, since the recent contribution rule change, the Xiaomi team has already 
decided that ALL new features and components which aren't tightly coupled with NuttX 
will never be upstreamed in the future.
   
   Okay? The NuttX community came together and decided on contributing 
guidelines as a community. We made these changes in a public forum of which you 
are a part, and voted in a capacity where you have a binding vote as a PMC 
member. That was the avenue by which you were able to voice these concerns. Now 
these guidelines are here, and it is up to contributors to follow them. 
Otherwise, reviewers are able to request changes.
   
   > If you still do reviews like this, my team will leave the NuttX community and 
never upstream anymore.
   
   Okay, that is your decision. I will not pretend that Xiaomi doesn't 
contribute lots of valuable work to NuttX, but we have to hold Xiaomi PRs to 
the same standard as everyone else. NuttX is not a Xiaomi product. It's 
purposely set up so that companies with vested interests cannot just test 
"internally" and merge their own PRs without demonstrating to the entire NuttX 
community that their change does what it says it does.
   
   Really, I can understand the frustration of having to show more testing if 
you've already automated it internally. @acassis is right that I am requesting 
test logs for the NuttX community, not out of some personal vendetta 
against you or Xiaomi. But these logs are necessary.
   
   Please see it from our perspective:
   
   - We have contributing guidelines that were voted on
   - PRs that do not adhere to these guidelines get a change request
   - There are maybe ~8 active reviewers on NuttX who *aren't* affiliated with 
Xiaomi, responsible for reviewing all of the PRs submitted by a company with 
many more salaried employees, one that has submitted ~20 patches a day over the 
last couple of weeks
   - Of the recently submitted patches, at least 20 contained completely 
false, AI-generated testing information
   
   We don't have great CI infrastructure, we have limited ability and time to 
review and test changes ourselves, and we don't have perfect, easy-to-run test 
cases that we can simply automate to verify every PR. Most of all, none of us 
are salaried full-time workers for NuttX! How are we meant to keep up with a 
constant stream of changes from a company with a dedicated development team? 
Please facilitate the reviews by following the contributing guidelines and 
giving us the information we need to make an informed review.
   
   We cannot just "trust" Xiaomi contributors; it's not fair to other 
contributors or the community members who voted for the contributing 
guidelines. It is also hard to trust these contributors when they are now 
regularly submitting false information in their PRs. I hope you understand.
