Re: Support for insecure applications

2021-02-12 Thread Carles Pina i Estany


Hi,

On Feb/12/2021, Sylvain Beucler wrote:
> Hi,
> 
> On 12/02/2021 01:17, Carles Pina i Estany wrote:
> > When I was discussing this with a friend, I wondered whether Debian
> > could make some metrics available and visible to users, contextualised
> > among similar (per-functionality) packages:
> > 
> > - popularity
> > - number of recent updates upstream
> > - number of contributors
> > - use of a version control system
> > - test coverage
> > - continuous integration
> > - upstream activity (issues, PRs, etc., the more the better; stars,
> >   forks, etc. on GitHub or similar platforms?)
> > - translations? (the more there are, the more popular the software is?)
> > - warnings from the compilers?
> > - static code analysers?
> > - documentation?
> > - CVEs?
> 
> Almost none of these relate to software _security_.

You are right; I was thinking of software quality, hoping that security
would come along with it in the majority of cases.

> Let's keep in mind that active/popular software is often the software
> with the earliest time-to-market, at the expense of security (check
> the history of PHP or Docker, for instance).

Yep, for a number of items on my list I realise that it reads more like
a popularity contest (that wasn't what I intended, and the Popularity
Contest package might be enough for that).

I've read Paul Wise's email in this thread and I'll follow up on the
links and the project. I was thinking of something along those lines,
but aimed at giving the information to end users. I'm interested in the
checks that are already included there, to see whether they match the
checks that I do when choosing software.

We all decide to use A over B (e.g. the pwgen password generator instead
of one of the at least five similar ones in Debian, or the geeqie image
viewer instead of another one...), and having more information when
choosing software might help us make better-informed decisions. Perhaps
it's not even about software quality but something more general.

Since my thoughts about metrics to help choose packages are probably
off-topic here, I'll take them to a better venue.

Cheers, thanks for answering!

-- 
Carles Pina i Estany
https://carles.pina.cat



Re: Support for insecure applications

2021-02-11 Thread Carles Pina i Estany


Hi,

On Feb/12/2021, Brian May wrote:

[...]

If this is off-topic on the list, feel free to answer me directly or
redirect me to another mailing list, and apologies for the noise.

> But I am not sure that treating all software as equal, when it obviously
> isn't, is a good thing for our users.


> Yes, users can look up our security trackers; I'm not sure how much
> this helps, though. A lot of these open security issues aren't
> necessarily serious issues that warrant concern.
> 
> Any ideas, comments?

Some months ago I was thinking along the same lines. I was comparing the
source code of certain packages (10 packages, more or less), searching
for certain bugs, and I saw how packages that might look similar from
the outside varied a lot in quality, maintenance, etc. The problem is
that, from a user's point of view, it is hard to know which one is
better maintained.

I think that when a package is distributed in Debian, some users might
expect a certain quality precisely because the package is in Debian.

When I was discussing this with a friend, I wondered whether Debian
could make some metrics available and visible to users, contextualised
among similar (per-functionality) packages:

- popularity
- number of recent updates upstream
- number of contributors
- use of a version control system
- test coverage
- continuous integration
- upstream activity (issues, PRs, etc., the more the better; stars,
  forks, etc. on GitHub or similar platforms?)
- translations? (the more there are, the more popular the software is?)
- warnings from the compilers?
- static code analysers?
- documentation?
- CVEs?

So, when a user chooses a package, they would have more information on
which to base the decision.
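
As a rough illustration, here is a minimal sketch (assuming Python with
the requests library, and assuming the current formats of the public
popcon and security-tracker data exports; I haven't productised this) of
how two of these metrics could be fetched automatically per package:

    # Sketch only: fetch two of the proposed metrics from public Debian
    # data sources. Assumes the `requests` library and the current
    # formats of https://popcon.debian.org/by_inst and the security
    # tracker's JSON export.
    import requests

    def popcon_installs(package):
        """Installations reported by popularity-contest."""
        text = requests.get("https://popcon.debian.org/by_inst",
                            timeout=30).text
        for line in text.splitlines():
            fields = line.split()
            # by_inst rows look like: rank name inst vote old recent ...
            if len(fields) >= 3 and fields[1] == package:
                return int(fields[2])
        return 0

    def open_cves(source_package):
        """CVEs still open for the source package, according to
        https://security-tracker.debian.org/tracker/data/json."""
        data = requests.get(
            "https://security-tracker.debian.org/tracker/data/json",
            timeout=60).json()
        count = 0
        for name, info in data.get(source_package, {}).items():
            if not name.startswith("CVE-"):
                continue
            releases = info.get("releases", {})
            if any(r.get("status") == "open" for r in releases.values()):
                count += 1
        return count

    for pkg in ("pwgen", "geeqie"):
        print(pkg, popcon_installs(pkg), open_cves(pkg))

Both endpoints are public, so something like this would not need any
special infrastructure on the Debian side.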

I was trying to think of metrics that could be automated (to a certain
extent), to avoid flamewars between reviewers and upstream, while still
indicating software quality (hard to measure!). Another option would be
something like scientific publications: reviewers reading the code and
rating it on different aspects, but I agree that might be a never-ending
story unless there is a clear cut-off.
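
To make the "contextualised" part concrete: one possibility (purely a
sketch; the metric names and numbers below are invented) is to normalise
each automated metric within a group of functionally similar packages,
so a value only means something relative to the alternatives:

    # Sketch only: relative scoring within one functional category.
    # Metric names and the example numbers are invented.
    from statistics import mean, pstdev

    def contextual_scores(metrics):
        """metrics: {package: {metric: value}} for one category, with
        every metric oriented so that higher is better (e.g. feed in
        -open_cves rather than open_cves). Returns the sum of z-scores
        of each package against its peers."""
        names = sorted({m for per_pkg in metrics.values() for m in per_pkg})
        scores = dict.fromkeys(metrics, 0.0)
        for name in names:
            values = [metrics[pkg].get(name, 0.0) for pkg in metrics]
            mu = mean(values)
            sigma = pstdev(values) or 1.0  # avoid division by zero
            for pkg in metrics:
                scores[pkg] += (metrics[pkg].get(name, 0.0) - mu) / sigma
        return scores

    # Invented numbers for a "password generator" category:
    print(contextual_scores({
        "pwgen": {"popcon": 50000, "commits_last_year": 12, "neg_cves": 0},
        "apg":   {"popcon": 20000, "commits_last_year": 0,  "neg_cves": -2},
    }))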

The Journal of Open Source Software (https://joss.theoj.org/) has review
criteria (https://joss.readthedocs.io/en/latest/review_criteria.html)
and a review checklist
(https://joss.readthedocs.io/en/latest/review_checklist.html), but some
items are still subjective, IMHO.

May I ask: how do people choose (security-wise or in general) between
packages for a certain task? Could this be automated? Part of my own
decision process is reflected in the metrics above, plus looking at
dependencies and sometimes at the source code.
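
The "upstream activity" part, at least, looks automatable when upstream
is hosted on a forge. A sketch, again assuming Python with requests and
the GitHub REST API; the owner/repo path is a placeholder that would
have to come from the package's metadata:

    # Sketch only: a few upstream-activity signals from the GitHub REST
    # API. Only works when upstream is on GitHub; GitLab and others
    # have similar endpoints.
    import requests

    def upstream_activity(owner_repo):
        resp = requests.get("https://api.github.com/repos/" + owner_repo,
                            timeout=30)
        resp.raise_for_status()
        repo = resp.json()
        return {
            "stars": repo["stargazers_count"],
            "forks": repo["forks_count"],
            "open_issues": repo["open_issues_count"],
            "last_push": repo["pushed_at"],
        }

    # "example/project" is a placeholder, not a real upstream:
    print(upstream_activity("example/project"))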

Cheers,

-- 
Carles Pina i Estany
https://carles.pina.cat