[ 
https://issues.apache.org/jira/browse/YETUS-441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16481811#comment-16481811
 ] 

Sean Busbey commented on YETUS-441:
-----------------------------------


{quote}
The dependency download job runs on node A.

The precommit job runs on node B.
{quote}

This is exactly why I want to use the job. No matter what node the precommit 
job runs on, we can use something like [the copy artifact 
plugin|https://wiki.jenkins.io/display/JENKINS/Copy+Artifact+Plugin] to fetch 
the last successful artifact. I'm reasonably sure that will be faster than the 
inconsistent times I've seen when running a fresh download of all the different 
files.
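
As a sketch only, a declarative pipeline stage using that plugin's 
{{copyArtifacts}} step might look like the following. The {{owasp-dc-update}} 
job name and {{dc-data/**}} artifact path are made up for illustration; they 
aren't existing Yetus jobs:

{code}
// Hypothetical stage: pull the last successful database artifact from a
// separate "owasp-dc-update" job rather than downloading it fresh.
pipeline {
  agent any
  stages {
    stage('fetch-cve-db') {
      steps {
        copyArtifacts projectName: 'owasp-dc-update',
                      selector: lastSuccessful(),
                      filter: 'dc-data/**'
      }
    }
  }
}
{code}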

{quote}
The --mvn-custom-repos-dir series of options allows a different location than 
~/.m2 to store the maven repo. Theoretically, we could use it to also stuff 
away other data. This would allow us to copy the database to a well-known 
location. The first run would take the hit, but subsequent runs would be able 
to use the cached copy. All without using a secondary job to populate it.
{quote}

I'm fine with this as an option for folks who want it, but when I use this in 
HBase I don't want any jobs taking a ~20-30 minute hit while the plugin pulls 
all the different source files together, especially since they'll have to do it 
any time they get a custom repo on a new executor.

The second job is pretty simple, and Jenkins already gives us the tools for 
relying on it as a building block. If it's too distasteful for Yetus to keep 
around for general use by ASF projects, I'll happily just isolate it for HBase.

{quote}
One other thing I'm not sure about is how to know if the cached copy is old. It 
would be nice if both an inline version and the external version could detect 
whether or not the file even needs to get downloaded.
{quote}

The tool that does the updating already does incremental updates. It also 
refuses to do anything if it's been less than some delta since the last update 
(I think 4 hours). This is why I've capped the Jenkins job at ~6 executions a 
day.
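
If we wanted our own guard in front of the tool, a minimal sketch could look 
like the following. The data directory, the {{.last-update}} marker file, and 
the 4-hour delta here are all assumptions for illustration, not existing Yetus 
or dependency-check behavior; the CLI's actual {{--updateonly}} and {{--data}} 
flags should be checked against its documentation:

{code}
#!/bin/sh
# Hypothetical wrapper: skip the update entirely if the cached copy is
# newer than 4 hours (mirroring the tool's own internal update delta).
DATA_DIR="${DC_DATA_DIR:-${TMPDIR:-/tmp}/owasp-dc-data}"
mkdir -p "$DATA_DIR"

# find prints the marker only when it is older than 240 minutes (4 hours)
STALE=$(find "$DATA_DIR" -maxdepth 1 -name .last-update -mmin +240 2>/dev/null)

if [ ! -e "$DATA_DIR/.last-update" ] || [ -n "$STALE" ]; then
  # In a real wrapper this would invoke the tool; here we only echo it.
  echo "cache stale: would run dependency-check.sh --updateonly --data $DATA_DIR"
  touch "$DATA_DIR/.last-update"
else
  echo "cache fresh: skipping update"
fi
{code}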

I don't know if the Jenkins copy artifact plugin handles caching, but 
personally I'm willing to file an improvement request if it doesn't, since 
that seems like where that work should be happening.

{quote}
There's also the issue of signing. How do we know if what we got is actually 
legit?
{quote}

Signing which thing? The OWASP project doesn't sign their artifacts. All we 
have to go on is that it's an HTTPS URL with a well-known CA (which I know is 
precious little). AFAICT they similarly rely on HTTPS to a well-known CA when 
fetching the various data sets they rely on for data files. The lack of 
security around this process is, IMHO, a bit out of scope for "make use of 
this tool in a yetus plugin".


> Add a precommit check for known CVEs from dependencies
> ------------------------------------------------------
>
>                 Key: YETUS-441
>                 URL: https://issues.apache.org/jira/browse/YETUS-441
>             Project: Yetus
>          Issue Type: New Feature
>          Components: Test Patch
>            Reporter: Sean Busbey
>            Assignee: Sean Busbey
>            Priority: Major
>         Attachments: YETUS-441.0.patch, YETUS-441.1.patch, YETUS-441.2.patch, 
> YETUS-441.3.patch, dependency-check-suppression.xml
>
>
> Add in a precommit test that makes use of [The OWASP Dependency 
> Check|https://www.owasp.org/index.php/OWASP_Dependency_Check] to look for 
> known bad dependencies.
> There's a maven plugin, an ant task, and a command line tool, so we should 
> be able to build support similar to what we have for RAT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
