You could trigger the test/coverage build with the Parameterized Trigger plugin, 
passing the node name and workspace path that the compilation ran on (i.e. the 
rsync source host and folder) as parameters.
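
For example, the trigger step could pass predefined parameters such as 
SOURCE_HOST=$NODE_NAME and SOURCE_WORKSPACE=$WORKSPACE (the parameter names are 
just placeholders), and the downstream job's shell step could pull the sources. 
A rough sketch, assuming the node name is a reachable host name and SSH access 
between the machines is already set up:

    #!/bin/bash
    set -eu
    # SOURCE_HOST and SOURCE_WORKSPACE are hypothetical parameter names passed
    # in by the Parameterized Trigger step of the upstream compile job.
    rsync -az --delete \
        "${SOURCE_HOST}:${SOURCE_WORKSPACE}/" \
        "${WORKSPACE}/sources/"
    # gcovr can then be pointed at ${WORKSPACE}/sources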

If you're using host names as node names, you're done; otherwise, you need to 
do a little mapping in your script (or pass the host name as an additional 
parameter).
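
A minimal mapping sketch in shell, assuming hypothetical node names that differ 
from the real host names:

    #!/bin/bash
    set -eu
    # Map the Jenkins node name (passed as a SOURCE_NODE parameter) to the host
    # name rsync should connect to; node and host names here are made up.
    case "${SOURCE_NODE}" in
        build-node-1) SOURCE_HOST=build01.example.com ;;
        build-node-2) SOURCE_HOST=build02.example.com ;;
        *) echo "Unknown node: ${SOURCE_NODE}" >&2; exit 1 ;;
    esac
    rsync -az "${SOURCE_HOST}:${SOURCE_WORKSPACE}/" "${WORKSPACE}/sources/"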

Another option might be to create a single archive (tar, zip, ...) of the 
sources before archiving the artifacts, or to use the commercial CloudBees Fast 
Archiving plugin.
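
Packing everything into one compressed tarball usually archives much faster than 
copying thousands of small files individually. A rough sketch of a shell step 
before the archive-artifacts step (the exclude patterns are only examples):

    #!/bin/bash
    set -eu
    # Bundle the workspace into a single compressed archive so the artifact
    # archiver moves one big file instead of many small ones.
    tar czf sources.tar.gz \
        --exclude='sources.tar.gz' \
        --exclude='.git' \
        .
    # The regression job then unpacks it with: tar xzf sources.tar.gz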

On 28.08.2013, at 23:10, Avihay Eyal <avihay.e...@gmail.com> wrote:

> Hi, I have a job that builds a debug version, and a job that runs regression 
> tests and publishes code coverage. The code coverage tool (gcovr) needs access 
> to the code base itself, which is close to 3 GB. I've tried archiving the 
> workspace in the build job and using that archive in the regression job, but 
> the archiving takes forever...
> 
> So I want to use rsync from the build machine to the regression machine. Is 
> that the best practice for the situation I described? To me it seems that the 
> drawback is that I'm using the actual machine names when doing the rsync, 
> thereby eliminating the abstraction of jobs and nodes...
> 

