Hi Paul,

Thank you for taking the time to share these detailed recommendations. They’re 
very helpful, and we’ll incorporate them into the upcoming proposal.

Best regards,
Adam


> On 2025. Aug 26., at 15:42, Paul <pchristi...@gmail.com> wrote:
> 
> Thanks James, 
> 
> Maybe a starting place / plan rule set: the rule sets below should, IMO, 
> provide a starting point with intentional simplification as their basis. A 
> Cognitive Complexity setting of 12 is on the lower end of "moderate 
> complexity". 
> Benefit: it helps with code simplification today and speeds up future issues 
> that may depend on it. Smaller, tighter code is also a great way to reduce 
> technical debt. Think of this as a speed limit that keeps contributions a 
> little smaller, cleaner, and tighter, so that more community members can 
> understand the code. 
> Tip: a good process should help manage "the thing", but should also help 
> simplify the future. Step by small positive step. 
> 
> Other suggested rules should apply ONLY to new or changed code in a pull 
> request.
> 
> Bugs: Must be 0.   Why: Bugs found by static analysis are often critical and 
> should never be introduced into the main branch. Non-negotiable rule.
> 
> Vulnerabilities: Must be 0.   Why: Similar to bugs, security vulnerabilities 
> should be blocked immediately to protect the project.
> 
> Maintainability Rating: Should be 'A'.  Why: This metric helps control code 
> smells and ensures new code is easy to read and maintain. (also a tech debt 
> prevention technique)
> 
> Code Coverage: Minimum of 70% on new code.  Why: This sets a realistic 
> expectation for testing new functionality. This value can be adjusted upward 
> as the community's testing practices mature.
> 
> Duplicated Lines: Maximum of 3% on new code.  Why: Prevents introduction of 
> redundant code and encourages refactoring.
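> 
> For what it's worth, a sketch of how these thresholds could be provisioned 
> as quality gate conditions through the SonarQube Web API. The endpoint and 
> metric names are taken from the SonarQube Web API docs; the gate name, host, 
> and token are placeholders, and SonarCloud's ASF-managed setup may differ: 
> 
> ```shell
> # Assumes a token with "Administer Quality Gates" permission in $SONAR_TOKEN.
> HOST=https://sonarqube.example.org
> GATE="Fineract Way"
> 
> curl -s -u "$SONAR_TOKEN:" -X POST "$HOST/api/qualitygates/create" \
>      --data-urlencode "name=$GATE"
> 
> # Each condition fails the gate when the metric crosses the error threshold.
> # Ratings are numeric: 1 = A.
> for cond in "new_bugs GT 0" \
>             "new_vulnerabilities GT 0" \
>             "new_maintainability_rating GT 1" \
>             "new_coverage LT 70" \
>             "new_duplicated_lines_density GT 3"; do
>   set -- $cond
>   curl -s -u "$SONAR_TOKEN:" -X POST "$HOST/api/qualitygates/create_condition" \
>        --data-urlencode "gateName=$GATE" \
>        --data-urlencode "metric=$1" \
>        --data-urlencode "op=$2" \
>        --data-urlencode "error=$3"
> done
> ```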
> 
> 
> Other Rules to Discuss. 
> 
> 
> Deprecated Code Usage: Create a custom rule to flag the use of deprecated 
> methods, such as those from older Java versions. This will highlight areas 
> that need refactoring over time without blocking new features. Maybe 
> something like JAVA.until.date, or a similar but simple rule, to help 
> identify dead or stagnant code that should not be supported going forward?
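> 
> As a possible interim step before any custom rule exists: SonarJava already 
> ships a rule for this (java:S1874, "@Deprecated code should not be used"), 
> which could simply be activated in the quality profile, and the compiler can 
> surface the same information today. A sketch for Gradle (Groovy DSL; the 
> task and javac option names are standard, but this is untested against the 
> Fineract build): 
> 
> ```groovy
> // build.gradle: ask javac to warn on every use of deprecated APIs
> tasks.withType(JavaCompile).configureEach {
>     options.compilerArgs += ['-Xlint:deprecation']
>     // optionally fail the build instead of just warning:
>     // options.compilerArgs += ['-Werror']
> }
> ```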
> 
> BONUS: Whoever steps up and successfully completes the issue will receive a 
> totally unofficial but humorous Open Source Seal of Approval logo: SWAG / 
> bling you may share on your CV, bio, or social media account as proof of 
> contribution!
> 
> Paul
> 
> 
> On Mon, Aug 25, 2025 at 9:43 PM James Dailey <jdai...@apache.org 
> <mailto:jdai...@apache.org>> wrote:
>> Paul et al. - That is the question I am asking.  If no one is willing to 
>> step up and work with Apache infra and our specific config, then this should 
>> be pruned.  It currently has no value and suggests something unhelpful. 
>> 
>> BUT, It is freely offered by SonarCloud to the Apache Software Foundation 
>> (ASF), so it is of potential value if we have someone managing it. 
>> The account is managed by the ASF Infra team.  
>> 
>> The current command used is 
>>  Run as: gradle clean bootRun
>> 
>> If you look at that gradle task in our GitHub repo, I would guess that it 
>> is insufficient for SonarQube's needs: bootRun only starts the application; 
>> it does not run tests or produce coverage reports.
>> There is a need to look at the gradle tasks, but also at the current 
>> SonarQube config. E.g. the source files for test coverage may need to be 
>> pointed to via a setting specific to the Fineract config, 
>> or e.g. the evaluation look-back could be set to 90 days, or something else? 
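>> 
>> If it helps whoever picks this up: the usual gap is that SonarQube needs a 
>> JaCoCo XML report, which bootRun never produces. A hedged sketch of what 
>> the Gradle side typically looks like (the property and task names are the 
>> standard org.sonarqube/jacoco plugin ones; the paths are guesses, not 
>> checked against the Fineract build): 
>> 
>> ```groovy
>> // build.gradle
>> plugins {
>>     id 'jacoco'
>>     id 'org.sonarqube' version 'x.y.z'  // use the current plugin release
>> }
>> 
>> jacocoTestReport {
>>     reports { xml.required = true }  // SonarQube reads the XML report
>> }
>> 
>> sonar {
>>     properties {
>>         property 'sonar.projectKey', 'apache_fineract'
>>         property 'sonar.coverage.jacoco.xmlReportPaths',
>>                  "${buildDir}/reports/jacoco/test/jacocoTestReport.xml"
>>     }
>> }
>> ```
>> 
>> The analysis step would then be something like `gradle test 
>> jacocoTestReport sonar` rather than `gradle clean bootRun`.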
>> 
>> If someone wants to look at this and figure out what's not working, 
>> specifically and for Fineract, please do so.    See the rest of this thread, 
>> prior postings, for more information. 
>> 
>> Thanks, 
>> 
>> 
>> On Mon, Aug 25, 2025 at 6:43 PM Paul <pchristi...@gmail.com 
>> <mailto:pchristi...@gmail.com>> wrote:
>>> An unconfigured SonarQube has no value. 
>>> The current situation, where it's not running on PRs (pull requests) and 
>>> shows zero coverage, means we're not getting any of the insights it's 
>>> designed to provide. (non-tech explanation)
>>> 
>>> Action Steps:
>>> Yes / No to use SonarQube.
>>> IF NO, remove SonarQube; it produces no value as-is. STOP HERE.
>>> IF YES, use the following list as an activation process, so that SonarQube 
>>> is active and produces accurate reports.
>>> 1. Generate a SonarQube token: create a security token within your 
>>> SonarQube server instance. This is a one-time process, and the token will 
>>> be used to authenticate your CI/CD pipeline with the SonarQube service.
>>> 2. Add the token to repository secrets: store the generated token as a 
>>> secret in your project's GitHub repository. This keeps the token from 
>>> being exposed in public code.
>>> 3. Create a GitHub Actions workflow: write a .yml file in your 
>>> repository's .github/workflows directory. This file defines the automated 
>>> job that runs SonarQube analysis on every pull request.
>>> 4. Configure the build tool: ensure the project's build tool (e.g., Maven, 
>>> Gradle) is configured to generate the necessary reports, such as code 
>>> coverage reports. SonarQube uses these reports to gather its metrics.
>>> 5. Define the pass/fail criteria to block the introduction of new issues 
>>> on new code. The essential metrics to include are:
>>> - Bugs: fail the gate if any new bugs are found.
>>> - Vulnerabilities: fail if any new security vulnerabilities are detected.
>>> - Code Coverage: set a mandatory minimum percentage of unit test coverage 
>>> on new code.
>>> - Maintainability: block new code smells above a defined threshold.
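>>> 
>>> A minimal sketch of the workflow file from the GitHub Actions step, 
>>> assuming Gradle plus the org.sonarqube Gradle plugin (the file name, Java 
>>> version, and `sonar` task name are assumptions to be checked against the 
>>> actual Fineract build; SONAR_TOKEN is the secret from the earlier steps): 
>>> 
>>> ```yaml
>>> # .github/workflows/sonarqube.yml
>>> name: SonarQube analysis
>>> on:
>>>   pull_request:
>>>   push:
>>>     branches: [develop]
>>> jobs:
>>>   sonar:
>>>     runs-on: ubuntu-latest
>>>     steps:
>>>       - uses: actions/checkout@v4
>>>         with:
>>>           fetch-depth: 0   # full history improves new-code detection
>>>       - uses: actions/setup-java@v4
>>>         with:
>>>           distribution: temurin
>>>           java-version: 17
>>>       - name: Build, test and analyze
>>>         env:
>>>           SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
>>>         run: ./gradlew test jacocoTestReport sonar
>>> ```
>>> 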
>>> How-to videos: 
>>> https://youtu.be/JocHmIZ9c_U 
>>> (SonarCloud, but should be insightful.)
>>> https://youtu.be/zDkcffDsi24 
>>> 
>>> Hope this is useful, 
>>> Paul
>>> 
>>> On Mon, Aug 25, 2025 at 7:55 PM James Dailey <jdai...@apache.org 
>>> <mailto:jdai...@apache.org>> wrote:
>>>> Bringing this up again: this is always failing.  Can we find a way to 
>>>> configure it so that the quality gate config includes the test coverage 
>>>> we do have?  
>>>> 
>>>> 
>>>> 
>>>> On Tue, Jul 15, 2025 at 8:57 AM Ádám Sághy <adamsa...@gmail.com 
>>>> <mailto:adamsa...@gmail.com>> wrote:
>>>>> Hi Adam
>>>>> 
>>>>> Well, there are many moving parts here:
>>>>> 
>>>>> 1. The SonarQube report can easily be misleading:
>>>>> - It gets executed only on the `develop` branch.
>>>>> - The 0% coverage reflects the last 30 days, I believe, so in the last 
>>>>> 30 days no unit tests were written.
>>>>> - In my understanding (but I might be wrong), SonarQube marks the gate 
>>>>> as failed if the metrics (coverage, bugs, duplication, etc.) got worse 
>>>>> than before…
>>>>> 
>>>>> 2. Since it is not executed on PRs automatically, we only know after the 
>>>>> merge whether things got better or worse… 
>>>>> We could consider changing this and adding the SonarQube metrics as one 
>>>>> of the acceptance criteria for a PR.
>>>>> 
>>>>> I don't know whether someone is actively reviewing the reports; probably 
>>>>> we should… at least before a new release, maybe?
>>>>> 
>>>>> I hope it helps!
>>>>> 
>>>>> Regards,
>>>>> Adam
>>>>> 
>>>>> > On 2025. Jul 15., at 17:41, Adam Monsen <meonk...@apache.org 
>>>>> > <mailto:meonk...@apache.org>> wrote:
>>>>> > 
>>>>> > It looks like the sonarqube quality gate always fails.
>>>>> > 
>>>>> > Something seems broken with the coverage metric. If you click "Show 
>>>>> > Older Activity" at 
>>>>> > https://sonarcloud.io/project/overview?id=apache_fineract every commit 
>>>>> > says "0.0% Coverage".
>>>>> > 
>>>>> > I only see the main branch and no PRs at 
>>>>> > https://sonarcloud.io/project/overview?id=apache_fineract .
>>>>> > 
>>>>> > More broadly: I'm wondering if the sonarqube tool/integration is useful 
>>>>> > and/or actually being used with/for Fineract.
>>>>> > 
>>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Paul
> 
> 
> 
> --
> Paul
