vlsi commented on PR #133: URL: https://github.com/apache/xalan-java/pull/133#issuecomment-1836520941
> Is this nitpicking some kind of addiction for you? Can you please do something productive instead? Thanks, much obliged. Just save your energy for reviews of actually bad code, maybe.

@kriegaex, first, I truly do not like it when people accuse me or state in public that my words are "not true". Sure, I can live with that; however, I would appreciate it if you withdrew your "not true". I can certainly make mistakes, and it would be great to learn what I missed. In this case, however, I am extremely confident that my claim was true. Thank you. At the moment it looks like you are accusing me of spreading false information and spamming the thread with messages, while the core of the issue is that it was you who claimed "not true".

Second, I started this with "This looks great to me". If you are watching for the literal phrase "this is good to merge", fine, I can add it every time; however, I was quite sure my first comment already read as "fine, there are only small comments".

---

> The extensive logs you complain about do not come from my changes but from stuff that was put in place by someone else and exists without this PR, too

@kriegaex, please consider that stacktraces with `java.lang.RuntimeException: Cannot read properties file to extract Xalan version number information: ` were added **by the current PR**. Those stacktraces can easily be misleading, because it takes time to tell whether a stacktrace comes from a true failure or from a test that merely mocks a failure. Do not get me wrong, but this adds roughly 200 lines to the log output for the stacktraces alone (one for the xalan module and another for the serializer module).

I did mention that stacktraces from exceptions in successful tests are unhelpful and distracting, and I do not see why you suggest "open the raw log without fancy HTML" instead of toning down the unwanted messages. There is no point in printing exceptions when the test succeeds.

> only the logged exceptions for the reproduced error cases, which are helpful context info.

Please, they are not helpful; they are distracting. Imagine we add more tests in the future and end up with 50 printed exceptions (e.g. because we test both positive and negative cases), and one of those 50 is a new, unexpected failure. How would you tell the true failure apart from the exceptions produced by successful executions?

Please, can we avoid polluting the log?
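
For illustration, a minimal sketch of the idea (assuming JUnit 5 is on the classpath; `loadVersion` is a hypothetical stand-in for the code that reads the version properties file, not the actual class touched by this PR): the negative test asserts the expected exception instead of letting its stacktrace reach the log.

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

import org.junit.jupiter.api.Test;

class VersionLoadFailureTest {

    // Hypothetical stand-in for the production code that reads the version
    // properties file; the real Xalan class is intentionally not named here.
    static String loadVersion(String resource) {
        try (InputStream in = VersionLoadFailureTest.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("resource not found: " + resource);
            }
            Properties props = new Properties();
            props.load(in);
            return props.getProperty("version");
        } catch (IOException e) {
            throw new RuntimeException(
                    "Cannot read properties file to extract Xalan version number information: " + resource, e);
        }
    }

    @Test
    void missingPropertiesFileIsReportedAsRuntimeException() {
        // The expected failure is asserted, not logged, so a passing test
        // contributes zero lines (and zero stacktraces) to the build log.
        RuntimeException ex = assertThrows(RuntimeException.class,
                () -> loadVersion("/no/such/version.properties"));

        assertTrue(ex.getMessage().startsWith(
                "Cannot read properties file to extract Xalan version number information:"));
    }
}
```

With a test written this way, any stacktrace that does show up in the CI log signals a real, unexpected failure rather than a deliberately mocked one.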