Re: Anything wrong with storing expanded plugin data in var cache?

2018-03-26 Thread Samuel Van Oort
I think it's a genuinely great proposal and it sounds like most of the 
things I'd be concerned about have already been thoroughly considered and 
addressed effectively.  There are two remaining points to consider:

I think it might be desirable to give plugins some mechanism to interact 
with the gitignore (or an equivalent) -- at least for cases where a 
non-standard storage mechanism needs to be included in (or excluded from) 
the snapshot.  The extra flexibility will also help with addressing any 
edge-case issues that crop up, and lets the project push some of the pain 
onto plugin maintainers if they're (ab)using the filesystem in 
non-idiomatic (for Jenkins) ways that are hard to support.  I say that in 
the nicest way, hoping that I'm not a guilty party.
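
To make that concrete, something as small as an extension point that plugins 
implement to contribute ignore/keep patterns would probably be enough.  A 
purely hypothetical Groovy sketch -- neither the interface nor the method 
below exist, the names are invented purely for illustration:

```
import hudson.Extension
import hudson.ExtensionPoint

// Hypothetical hook: plugins contribute .gitignore-style patterns that the
// snapshotting machinery would merge into its ignore rules.
abstract class SnapshotIgnoreContributor implements ExtensionPoint {
    /** Patterns to exclude from (or, with a leading "!", force into) the snapshot. */
    abstract Collection<String> ignorePatterns()
}

@Extension
class MyPluginIgnores extends SnapshotIgnoreContributor {
    @Override
    Collection<String> ignorePatterns() {
        ['my-plugin-cache/**', '!my-plugin-settings.xml']
    }
}
```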

Also, I do worry a bit that we might encounter compatibility issues when 
separating job configs from their builds -- probably from plugins doing 
something relative to the ITEM directory (which is *wrong*, but could still 
happen).  I know some of our senior Jenkins engineers don't see any issues 
with breaking plugins that are Doing It Wrong (tm), but generally I'd like 
to see some testing across different plugins and a healthy beta period with 
*real in-the-wild use* just to ensure positive user experiences.  Relocating 
build records isn't commonly done outside of expert installations, and less 
experienced users won't be able to distinguish between "Evergreen broke me" 
and "some random plugin broke because it wasn't respecting the build 
record directory."  This burn-in period will also help us identify and 
build out the rules for what to include/exclude in snapshots.

Bonuses: I like *a lot* that we're limiting the scope to just 
upgrade/downgrade and not trying to use this as a general-purpose backup.  
It might even be desirable to limit the Git history in some fashion to 
prevent it from becoming bloated.  

In general this is a well-considered and valuable addition to the Jenkins 
ecosystem.  It should also help with performance -- especially if we end 
up splitting storage into a couple of volumes down the road, which could be 
relocated to faster or slower media as needed.

On Monday, March 26, 2018 at 8:49:17 AM UTC-4, Stephen Connolly wrote:
>
> I see nothing wrong with it... but I would say that having added the 
> --pluginroot option myself iirc ;-)
>
> On 24 March 2018 at 00:29, Sam Gleske  
> wrote:
>
>> I normally store expanded plugin metadata within 
>> /var/cache/jenkins/plugins similar to how WAR file metadata is stored in 
>> /var/cache/jenkins/war.
>>
>> Is there any particular reason the Jenkins packages don't do this?  Are 
>> there any drawbacks?  I'm curious if others have any opinions on this.
>>
>> I've been running Jenkins quite a while like this.
>>
>> [1]: 
>> https://github.com/samrocketman/jenkins-bootstrap-shared/blob/1a409cad89007f56ed36342427b767dd4e88fbd9/packaging/docker/run.sh.in#L38
>>
>
>



Re: [Essentials] Collecting usage telemetry from Jenkins Pipeline

2018-03-19 Thread Samuel Van Oort
I'm glad my mega-reply turned up after all (just belatedly).  Anyway, 
replies inline!

The existing Metrics plugin is using the Dropwizard interface, so:
 

> say, there isn't a world in which I wouldn't use statsd for this :) My 
> current 
> thinking is to incorporate the Metrics plugin 
> (https://plugins.jenkins.io/metrics) in order to provide the appropriate 
> interfaces, and if that's fine, then I would have no qualms with that 
> becoming 
> a dependency of Pipeline itself. I need to do some research on what 
> Dropwizard 
> baggage might be unnecessarily added into Jenkins. 


Basically that means "just make it a normal Metric", or at least that's my 
intent.  I think it might make sense to make extra metrics like this not a 
*dependency* of Pipeline (since they would actually need to *depend* on parts 
of it), but an additional plugin in the Aggregator or part of some sort of 
Essentials plugin.   I probably explained that incoherently due to having my 
head still buried in the gnarly guts of workflow-cps. 

As far as statsd goes: the Metrics interfaces are reporter-agnostic, so 
they don't care whether you're shipping metrics to Graphite, some sort of 
proprietary analytics solution, StatsD, etc. -- as long as you have a 
Reporter implementation.  The Metrics Graphite plugin gives some idea of how 
simple it can be to wrap an existing Reporter implementation with 
configuration for Jenkins: https://github.com/jenkinsci/metrics-graphite-plugin
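
Since the Metrics plugin just exposes the Dropwizard registry, "make it a 
normal Metric" boils down to something like this minimal Groovy sketch (the 
metric names are made up for illustration, and it assumes the Metrics plugin's 
jenkins.metrics.api.Metrics is available):

```
import jenkins.metrics.api.Metrics
import com.codahale.metrics.Counter
import com.codahale.metrics.Timer

// Register (or fetch) a counter and a timer in the shared Dropwizard registry;
// whatever Reporter is configured (Graphite, StatsD, ...) picks them up.
Counter pipelineStarts = Metrics.metricRegistry().counter('pipeline.runs.started')
Timer stepTimer = Metrics.metricRegistry().timer('pipeline.step.duration')

pipelineStarts.inc()

Timer.Context ctx = stepTimer.time()
try {
    // ... the work being measured ...
} finally {
    ctx.stop()
}
```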

*To Jesse:*

Actually you _can_ get this from the flow graph already. You just 
> count `StepNode`s with no `BodyInvocationAction`. (cf. 
> `StepStartNode.isBody`)


Could work, I'd have to take a closer look to confirm.  

Otherwise, some good points and technical corrections about the practical 
implementation were noted (bear in mind I was trying to get a lot down 
quickly)... though I think Jesse might be the only one who would call the 
following "easy":


Easy—patch `CpsFlowExecution.start` to create a proxy `Invoker` that 
> counts the different kinds of calls. This could be included in the 
> current `CpsFlowExecution.PipelineTimings`, actually. 

Worth noting that we would need to make sure such an implementation is 
*extremely* lightweight in practice, because any overhead it adds to CPS 
operations would be felt.  As long as it's direct field access and increments 
(or AtomicXX access), that's probably fine.  
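
As a rough illustration of the kind of counting I mean (not actual 
workflow-cps internals -- the class and field names here are invented), a 
Groovy sketch:

```
import java.util.concurrent.atomic.AtomicLong

// Counters cheap enough to sit on a hot CPS path: one CAS per event, no
// locking, no allocation.  A Gauge or a periodic dump can expose the values
// to the Metrics registry later.
class CpsCallCounters {
    final AtomicLong methodCalls = new AtomicLong()
    final AtomicLong propertyAccesses = new AtomicLong()

    void onMethodCall()     { methodCalls.incrementAndGet() }
    void onPropertyAccess() { propertyAccesses.incrementAndGet() }
}
```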

On Monday, March 19, 2018 at 6:32:01 PM UTC-4, R Tyler Croy wrote:
>
> (replies inline) 
>
> On Mon, 19 Mar 2018, Samuel Van Oort wrote: 
>
> > Late to the party because I was heads-down on Pipeline bugs a lot of 
> > Friday, but this is a subject near-and-dear to my heart and in the past 
> > I've discussed what metrics might be interesting since this was an 
> explicit 
> > intent to surface from my Bismuth (Pipeline Graph Analysis APIs).  Some 
> of 
> > these are things I'd wanted to make a weekend project of (including 
> > surfacing the existing workflow-cps performance metrics). 
>
>
> Long reply is long! Thanks for taking the time to respond Sam. Suffice it 
> to 
> say, there isn't a world in which I wouldn't use statsd for this :) My 
> current 
> thinking is to incorporate the Metrics plugin 
> (https://plugins.jenkins.io/metrics) in order to provide the appropriate 
> interfaces, and if that's fine, then I would have no qualms with that 
> becoming 
> a dependency of Pipeline itself. I need to do some research on what 
> Dropwizard 
> baggage might be unnecessarily added into Jenkins. 
>
>
> To many of your inline comments, I do not think there's any problem 
> collecting 
> as much telemetry as you and the other Pipeline developers see fit. My 
> list was 
> mostly what *I* think I need to demonstrate success with Pipeline for 
> Jenkins 
> Essentials, and to understand how Jenkins Essentials is being used in 
> order to 
> guide our future roadmap. 
>
>
>
>
> Cheers 
>
>
> > 
> > We should aim to implement Metrics using the existing Metrics interface 
> > because then that can be fairly easy exported in a variety of ways -- I 
> use 
> > a Graphite Metrics reporter that couples to another metric 
> aggregator/store 
> > for the Pipeline Scalability Lab (some may know it as "Hydra").  Other 
> > *cough* proprietary systems may already consume this format of data.  I 
> > would not be surprised if a StatsD reporter is pretty easy to hack 
> together 
> > using https://github.com/ReadyTalk/metrics-statsd and you get a lot of 
> > goodies "for free." 
> > 
> > The one catch for implementing metrics is that we want to be cautious 
> about 
> > adding too much overhead to the execution process. 
> > 
> > As far as specific met

Re: [Essentials] Collecting usage telemetry from Jenkins Pipeline

2018-03-19 Thread Samuel Van Oort
I think Google Groups somehow managed to eat a *very* long and detailed 
reply here, possibly as a result of a message deletion somewhere (or it 
somehow got kicked into the spam filter) *sigh* so I'm going to give just 
the TLDR of it.   Something on this has been on my want-to-have list for a 
long time though, and was planned as a future follow-on to the Pipeline 
Graph Analysis API Project (AKA Bismuth) that I did. 

This makes a lot of sense to me in general.   We'd want to make sure these 
new metrics take advantage of the existing Dropwizard Metrics APIs - it 
should be pretty trivial to create a plugin 
wrapping https://github.com/ReadyTalk/metrics-statsd so that it spits those 
metrics and all of our existing goodies out.   This also lets all the 
existing metrics integrations benefit. 

I have existing implementations for some of these in an un-hosted plugin 
that's used as a utility for my Pipeline scalability lab.  One thing to 
know is that it's much trickier to track a *step* execution in Pipeline than 
to look at the FlowNode(s) it creates - generally I think it's "good 
enough" to do per-FlowNode metrics and use the 
StepDescriptor.getFunctionName for the node to determine which Pipeline step 
created it. 
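
For illustration, resolving the step type from a FlowNode looks roughly like 
this (Groovy; untested, and the exact import path of StepNode has moved 
between the workflow plugins over time, so treat it as a sketch):

```
import org.jenkinsci.plugins.workflow.graph.FlowNode
import org.jenkinsci.plugins.workflow.graph.StepNode

// Map a FlowNode back to the Pipeline step that created it, by function name.
String stepTypeOf(FlowNode node) {
    if (node instanceof StepNode) {
        def descriptor = ((StepNode) node).descriptor
        return descriptor != null ? descriptor.functionName : 'unknown'
    }
    return 'not-a-step'
}
```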

I'd like to break this down granularly by step type for each Pipeline: how 
many FlowNodes were of that step type, how long each took to run on average, 
and how long that step type took (wall time) overall.  This lets us see 
regressions per step type, where averages over all steps in a pipeline are 
not very useful.  It also tells us which steps are used a lot and which 
aren't -- useful for prioritizing. 

Similarly to Liam's comment on cyclomatic complexity, it would be helpful 
to know how many GroovyCPS elements are created within the Pipeline 
(similar to the improvement I did to put an effective Stack Depth limit in) 
-- this helps show real runtime complexity to users. 

On Friday, March 16, 2018 at 4:28:29 PM UTC-4, R Tyler Croy wrote:
>
> The successful adoption and iterative improvement of Jenkins Essentials 
> [0] is 
> heavily contingent on a spectrum of automated feedback which I've been 
> referring to as "telemetry" in many of the design documents. I wanted to 
> start 
> discussing the prospect of collecting anonymized Pipeline usage telemetry 
> to 
> help Jenkins Essentials, Blue Ocean, and Pipeline teams understand how 
> users 
> are actually using Jenkins Pipeline. 
>
> James Dumay has already prototyped some similar work[1] for collecting 
> behavioral telemetry in Blue Ocean, but what I'm proposing would be much 
> more 
> broad in scope[2]. The metrics I am interested in, to help us understand 
> how 
> Pipeline is being used, are: 
>
>  * Counts: 
>* configured Declarative Pipelines 
>* configured Script Pipelines 
>* Pipeline executions 
>* distinct built-in step invocations (i.e. not counting Global Variable 
> invocations) 
>* Global Shared Pipelines configured 
>* Folder-level Shared Pipelines configured 
>* Agents used per-Pipeline 
>* Post-directive breakdown (stable,unstable,changed,etc) 
>
>  * Timers 
>* Runtime duration per step invocation 
>* Runtime duration per Pipeline 
>
>
> I believe this is a sufficiently useful set of metrics to send along, but 
> the 
> two questions I could use help answering are: 
>
>  * What are other metrics which would positively impact the development of 
>Jenkins? 
>  * Are there concerns about implementation feasibility for any of these? 
>
>
> I am planning on using statsd (sent to the project's Datadog account), so 
> sampling and controlling the volume of individual metrics is not something 
> I'm 
> terribly worried about. 
>
>
>
> Happy to hear any feedback y'all are willing to share! 
>
>
> [0] https://github.com/jenkins-infra/evergreen#evergreen 
> [1] https://github.com/jenkinsci/blueocean-plugin/pull/1653 
> [2] https://issues.jenkins-ci.org/browse/JENKINS-49852 
>
>
>
> Cheers 
> - R. Tyler Croy 
>
> -- 
>  Code:  
>   Chatter:  
>  xmpp: rty...@jabber.org  
>
>   % gpg --keyserver keys.gnupg.net --recv-key 1426C7DC3F51E16F 
> -- 
>



Re: [Essentials] Collecting usage telemetry from Jenkins Pipeline

2018-03-19 Thread Samuel Van Oort
Late to the party because I was heads-down on Pipeline bugs a lot of 
Friday, but this is a subject near-and-dear to my heart, and in the past 
I've discussed what metrics might be interesting, since surfacing these was 
an explicit goal of my Bismuth work (the Pipeline Graph Analysis APIs).  Some 
of these are things I'd wanted to make a weekend project of (including 
surfacing the existing workflow-cps performance metrics). 

We should aim to implement these using the existing Metrics interface, 
because then they can fairly easily be exported in a variety of ways -- I use 
a Graphite Metrics reporter that couples to another metric aggregator/store 
for the Pipeline Scalability Lab (some may know it as "Hydra").  Other 
*cough* proprietary systems may already consume this format of data.  I 
would not be surprised if a StatsD reporter is pretty easy to hack together 
using https://github.com/ReadyTalk/metrics-statsd and you get a lot of 
goodies "for free."

The one catch for implementing metrics is that we want to be cautious about 
adding too much overhead to the execution process. 

As far as specific metrics:

> distinct built-in step invocations (i.e. not counting Global Variable 
invocations)

This can't be measured easily from the flow graph, due to the potential for 
one step to create multiple block structures.  It COULD be added easily 
via a new registered StepListener API in workflow-api (and implemented in 
workflow-cps), though.   I think it's valuable.
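
What I have in mind is something along these lines -- to be clear, this 
extension point does *not* exist today; the interface name and signature 
below are invented just to show the shape of it (Groovy sketch):

```
import hudson.Extension
import hudson.ExtensionPoint
import jenkins.metrics.api.Metrics
import org.jenkinsci.plugins.workflow.steps.Step
import org.jenkinsci.plugins.workflow.steps.StepContext

// Hypothetical listener that workflow-cps would call once per step invocation.
interface StepListener extends ExtensionPoint {
    void notifyOfNewStep(Step step, StepContext context)
}

@Extension
class StepInvocationCounter implements StepListener {
    @Override
    void notifyOfNewStep(Step step, StepContext context) {
        // one counter per built-in step type, e.g. pipeline.step.sh.invocations
        String name = step.descriptor.functionName
        Metrics.metricRegistry().counter("pipeline.step.${name}.invocations").inc()
    }
}
```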

> configured Declarative Pipelines, configured Script Pipelines 

We can get all Pipelines (flavor-agnostic) by iterating over WorkflowJob 
items.  Not sure how we'd tell Scripted vs. Declarative apart -- maybe by 
registering a Listener extension point of some sort?   I see value here. 

I'd *also* like a breakdown of which Pipelines have been run in the 
last, say, week and month, by type (easy to do by looking at the most recent 
build).   That way we know not just which were created but which are in 
active use. 
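
A rough script-console flavor of that counting (Groovy; an untested 
illustration -- telling Scripted from Declarative apart would still need 
something extra):

```
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.job.WorkflowJob

// Count all Pipeline jobs and how many actually ran within the last week.
def jobs = Jenkins.getInstance().getAllItems(WorkflowJob.class)
long weekAgo = System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000

def activeLastWeek = jobs.count { job ->
    def last = job.lastBuild
    last != null && last.getTimeInMillis() > weekAgo
}
println "Pipelines configured: ${jobs.size()}, run in the last week: ${activeLastWeek}"
```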

> Pipeline executions

Rates and counts can be achieved with the existing Metrics Timer type.  I'd 
like to see that broken down by Scripted vs. Declarative as well. 

> * Global Shared Pipelines configured 
>   * Folder-level Shared Pipelines configured 

Do you mean Shared Library use?  One metric I'd be interested in is how 
many shared libraries are used *per-pipeline* -- easy to measure from the 
count of LoadedScripts I believe (correct me if there's something I'm 
missing here, Jesse). 

> Agents used per-Pipeline

I think it should be possible to do this easily via flow graph analysis, 
looking for WorkspaceActionImpl -- node names and labels are available 
there.  We might want to count both total node *uses* (open/close of node 
blocks) and distinct nodes used.  

Best to trigger this as a post-build analysis using the RunTrigger -- that 
way it's just a quick iteration over the Pipeline. 
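
Roughly, that post-build scan could look like this (Groovy sketch, untested; 
it assumes workflow-api's graph-analysis classes are available):

```
import org.jenkinsci.plugins.workflow.actions.WorkspaceAction
import org.jenkinsci.plugins.workflow.graphanalysis.DepthFirstScanner
import org.jenkinsci.plugins.workflow.job.WorkflowRun

// Count opened node blocks and distinct agents for one completed run.
def agentStats(WorkflowRun run) {
    def scanner = new DepthFirstScanner()
    def withWorkspace = scanner.allNodes(run.execution).findAll {
        it.getAction(WorkspaceAction.class) != null
    }
    def agents = withWorkspace.collect {
        it.getAction(WorkspaceAction.class).node ?: 'master'   // empty string == master
    } as Set
    [nodeBlockUses: withWorkspace.size(), distinctAgents: agents.size()]
}
```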

> Runtime duration per step invocation

This is one of the MOST useful metrics I think.

I already have an implementation used in the Scalability Lab that does this 
on a per-FlowNode basis using the GraphListener (rather than per-step).  
It is part of a small utility plugin for metrics used in the scalability 
lab (not hosted currently, since it's not general-use). 

Doing it per-step is somewhat more complex - for many steps it's trivial, 
but for a Retry step, for example, there's no logical way to do it because 
you get multiple blocks.  Blocks in general are undefined - do you count the 
block *contents*, just the start, just the end, or start+end nodes?   Also 
remember that the Groovy logic between FlowNodes counts against the step 
time.  Usually that shouldn't be a huge issue unless the Groovy is 
complex.  

If that's too noisy there might be ways to insert Listeners for the Step 
itself (more complex though) -- I think using the FlowNodes is good enough 
for now and gives us a solid first-order approximation that is useful 99% 
of the time. 

I would also like to extend this by breaking it down into separate metrics 
per step type, i.e. runtime for sh, runtime for echo, for 'node', etc.  
 This is easier than you'd think since you can fetch the StepDescriptor and 
call getFunctionName to get a unique metric key for the step.   This is far 
more useful to us than just average step timings, because it helps spot 
performance regressions in the field. 
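
A sketch of what that GraphListener-based timing could look like (Groovy; 
untested, and it glosses over blocks, parallel branches, and 
resume-after-restart entirely):

```
import hudson.Extension
import jenkins.metrics.api.Metrics
import org.jenkinsci.plugins.workflow.actions.TimingAction
import org.jenkinsci.plugins.workflow.flow.GraphListener
import org.jenkinsci.plugins.workflow.graph.FlowNode
import org.jenkinsci.plugins.workflow.graph.StepNode
import java.util.concurrent.TimeUnit

@Extension
class StepTimingListener implements GraphListener {
    @Override
    void onNewHead(FlowNode node) {
        // Attribute the time elapsed since each parent node to that parent's step type.
        node.parents.each { FlowNode prev ->
            long elapsed = TimingAction.getStartTime(node) - TimingAction.getStartTime(prev)
            String stepName = (prev instanceof StepNode && prev.descriptor != null) ?
                    prev.descriptor.functionName : 'other'
            def timer = Metrics.metricRegistry().timer("pipeline.step.${stepName}.duration")
            timer.update(elapsed, TimeUnit.MILLISECONDS)
        }
    }
}
```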

Other aggregates of interest: total time spent in each step type for the 
pipeline, and counts of FlowNodes by step type per pipeline.  This will show 
whether we're spending (for example) a LOT of time running 
readFile/writeFile/dir steps due to some sudden bottleneck in the remoting 
interaction, and also reveal which step types are used most often.   Knowing 
which steps are used heavily helps me know which deserve extra priority for 
bugfixes, features, and optimizations.

It actually *sounds* far more complicated than it really is -- this would 
be a pretty trivial afternoon project I think. 

> Runtime duration 

Re: Subject: Pipeline Storage Performance Work Available For Beta - Want to kick the tires?

2018-01-03 Thread Samuel Van Oort
Thank you for reporting this -- it turns out to be a very simple issue.  I've 
pushed out a tested fix -- if you upgrade to the workflow-api plugin beta-2 
version, this is now resolved. 

I'm curious to hear how it's performing for people that have tried it out.

On Wednesday, December 27, 2017 at 4:07:44 AM UTC-5, Ted Xiao wrote:
>
> I changed it to PERFORMANCE_OPTIMIZED, but it changed back to none after 
> restart. xml is 
>
> more org.jenkinsci.plugins.workflow.flow.GlobalDefaultFlowDurabilityLevel.xml
>
> <DescriptorImpl plugin="workflow-api@2.25-durability-beta">
>   PERFORMANCE_OPTIMIZED
> </DescriptorImpl>
>
>
>
>
> On Friday, December 22, 2017 at 9:51:14 AM UTC+8, Samuel Van Oort wrote:
>>
>> Subject: Pipeline Storage Performance Work Available For Beta - Want to 
>> kick the tires?
>>
>> Hey all, I've just released a set of plugin betas to the Experimental 
>> Update Center.  They have enhancements to Pipeline which CAN dramatically 
>> reduce I/O use and improve performance.  Please given them a try and report 
>> back how they work out for you.
>>
>> Please note: to maintain existing behavior, these changes are OPT-IN. You 
>> MUST enable them to see a difference (see below).
>>
>> The settings themselves have their own explanations (tooltips and help 
>> info), but the below gives more info.
>>
>> *Will it help me?*
>> * Yes, if you are running complex Pipelines or Pipelines with many steps.
>> * Yes, if your Jenkins instance uses NFS, magnetic storage, runs many 
>> Pipelines at once, or shows high iowait.
>> * No, if your Pipelines spend almost all their time waiting for 
>> shell/batch scripts to run.  This isn't a magic "go fast" button for 
>> everything (I wish!).
>> * No, if you are not using Pipelines, or your system is loaded down by 
>> other factors.
>>
>> *How do I get it?*
>> * You need to be on Jenkins LTS 2.73+ or higher (or a weekly 2.62+)
>> * Enable the experimental update center - instructions here: 
>> https://jenkins.io/blog/2013/09/23/experimental-plugins-update-center/
>> * Check for plugins updates
>> * You should see and install updates for the following plugins, with 
>> versions including the word "durability"
>> - Pipeline: API (workflow-api)
>> - Pipeline: Groovy (workflow-cps)
>> - Pipeline: Job (workflow-job)
>> - Pipeline: Supporting APIs (workflow-support)
>> - Pipeline: Multibranch (workflow-multibranch)
>> * Restart the master to use the updated plugins - note: you need all of 
>> them to take advantage.
>>
>> *What does it do?*
>>
>> This adds a performance/durability setting for Pipelines.  If you use the 
>> performance-optimized mode, disk writes are reduced significantly. This 
>> lets you improve Pipeline performance greatly (reduce I/O) at some cost to 
>> the running Pipelines' ability to survive if Jenkins falls over completely 
>> (durability).  Stability of Jenkins ITSELF is not changed, nor are there 
>> changes to completed Pipelines.
>>
>> We also add the ability to mark Pipelines to NOT resume upon restart (a 
>> requested feature) - available under the properties at the top.
>>
>> *How do I USE it?*
>>
>> Durability settings need to be enabled (and will display in the logs when 
>> a job begins), either globally or per Pipeline/branch (MultiBranch). 
>> Settings take effect the next time the Pipeline runs. 
>>
>> There are 3 ways to configure the durability setting:
>>
>> **Globally**, you can choose a durability setting under "Manage Jenkins > 
>> Configure System", labelled "Pipeline Speed/Durability Settings".  These 
>> settings will take effect for Pipelines upon the next run, unless you 
>> override them with one of the below settings
>>
>> **Per pipeline job:** at the top of the job configuration, labelled 
>> "Custom Pipeline Speed/Durability Level" - this overrides the global 
>> setting.  Or, use a "properties" step - the setting will apply to the NEXT 
>> run after the step is executed (same result).
>>
>> **Per branch for a multibranch project:** configure a custom Branch 
>> Property Strategy (under the SCM) and add a property for Custom Pipeline 
>> Speed/Durability Level.  This overrides the global setting. 
>>
>>
>> *What are the settings?*
>>
>> * Performance optimized mode ("PERFORMANCE_OPTIMIZED") - Greatly reduces 
>> disk I/O but running Pipelines with lower durability settings may lose 
>> runtim

Subject: Pipeline Storage Performance Work Available For Beta - Want to kick the tires?

2017-12-21 Thread Samuel Van Oort
Subject: Pipeline Storage Performance Work Available For Beta - Want to 
kick the tires?

Hey all, I've just released a set of plugin betas to the Experimental 
Update Center.  They include enhancements to Pipeline which CAN dramatically 
reduce I/O and improve performance.  Please give them a try and report 
back how they work out for you.

Please note: to maintain existing behavior, these changes are OPT-IN. You 
MUST enable them to see a difference (see below).

The settings themselves have their own explanations (tooltips and help 
info), but the sections below give more detail.

*Will it help me?*
* Yes, if you are running complex Pipelines or Pipelines with many steps.
* Yes, if your Jenkins instance uses NFS, magnetic storage, runs many 
Pipelines at once, or shows high iowait.
* No, if your Pipelines spend almost all their time waiting for shell/batch 
scripts to run.  This isn't a magic "go fast" button for everything (I 
wish!).
* No, if you are not using Pipelines, or your system is loaded down by 
other factors.

*How do I get it?*
* You need to be on Jenkins LTS 2.73 or higher (or a weekly 2.62+)
* Enable the experimental update center - instructions here: 
https://jenkins.io/blog/2013/09/23/experimental-plugins-update-center/
* Check for plugin updates
* You should see and install updates for the following plugins, with 
versions including the word "durability"
- Pipeline: API (workflow-api)
- Pipeline: Groovy (workflow-cps)
- Pipeline: Job (workflow-job)
- Pipeline: Supporting APIs (workflow-support)
- Pipeline: Multibranch (workflow-multibranch)
* Restart the master to use the updated plugins - note: you need all of 
them to see the benefit.

*What does it do?*

This adds a performance/durability setting for Pipelines.  If you use the 
performance-optimized mode, disk writes are reduced significantly. This 
lets you improve Pipeline performance greatly (by reducing I/O) at some cost 
to a running Pipeline's ability to survive if Jenkins falls over completely 
(durability).  Stability of Jenkins ITSELF is not changed, nor are 
completed Pipelines affected.

We also add the ability to mark Pipelines to NOT resume upon restart (a 
requested feature) - available under the properties at the top.

*How do I USE it?*

Durability settings need to be enabled (and will display in the logs when a 
job begins), either globally or per Pipeline/branch (MultiBranch). Settings 
take effect the next time the Pipeline runs. 

There are 3 ways to configure the durability setting:

**Globally**, you can choose a durability setting under "Manage Jenkins > 
Configure System", labelled "Pipeline Speed/Durability Settings".  These 
settings will take effect for Pipelines upon the next run, unless you 
override them with one of the settings below.

**Per pipeline job:** at the top of the job configuration, labelled "Custom 
Pipeline Speed/Durability Level" - this overrides the global setting.  Or, 
use a "properties" step - the setting will apply to the NEXT run after the 
step is executed (same result).

**Per branch for a multibranch project:** configure a custom Branch 
Property Strategy (under the SCM) and add a property for Custom Pipeline 
Speed/Durability Level.  This overrides the global setting. 
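
For the per-job "properties" step route, a minimal Jenkinsfile sketch (the 
exact property symbol may differ between versions -- check the snippet 
generator rather than trusting this verbatim):

```
// Applies to the NEXT run of this Pipeline after the step executes.
properties([
    durabilityHint('PERFORMANCE_OPTIMIZED')
])

node {
    stage('Build') {
        sh 'make'
    }
}
```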


*What are the settings?*

* Performance-optimized mode ("PERFORMANCE_OPTIMIZED") - Greatly reduces 
disk I/O, but running Pipelines with lower durability settings may lose 
runtime data IF they do not finish AND Jenkins is not shut down 
gracefully.  If this happens, they behave like FreeStyle builds (logs, but 
no steps to visualize). Details at the bottom.

* Maximum durability ("MAX_SURVIVABILITY") - behaves just like Pipeline did 
before; the slowest option.  Use this for running your most critical Pipelines.

* Less durable, a bit faster ("SURVIVABLE_NONATOMIC") - Writes data with 
every step but avoids atomic writes. On some filesystems, especially 
networked ones (i.e. NFS), this is faster than maximum-durability mode, but 
it carries a small extra risk (details at the bottom).


*Nitty-gritty details*

Remember: worst-case behavior reverts to something like FreeStyle builds -- 
Pipelines that cannot persist data may not be able to resume or be displayed 
in Blue Ocean/Stage View/etc., but will have logs.

Running pipelines with the performance-optimized setting may lose data IF 
they do not finish AND Jenkins is not shut down gracefully. A "graceful" 
shutdown is where Jenkins goes through a full shutdown process, such as 
visiting http://[jenkins-server]/exit or using one of the gentler signals 
to kill the process.  A "dirty" shutdown is where the Jenkins process dies 
without doing shutdown tasks -- killing a Docker container or using "kill 
-9" to terminate the Java process will do this. 

The less-durable/a bit faster setting avoids atomic writes -- what this 
means is that if the Operating System fails, data that is buffered for 
writing to disk will not be flushed and will be lost.  This is quite 

Re: [PROPOSAL] Change core logging from Java Utils Logging

2017-11-29 Thread Samuel Van Oort
> Am I the only one to think it strange that we are discussing using 
very limited core developer resources to replace a library used 
pervasively throughout the Jenkins code base with a functionally 
identical library with no demonstrable benefit to users and rather 
marginal impact on developers

No, you're not -- I'd also like to see a case where profiling shows simple 
log formatting as a significant bottleneck.  If it were a low-risk 
change I'd say "sure, micro-optimize," but after seeing applications broken 
by SLF4J API changes it seems like a risky change which is only 
beneficial in edge cases (high logging throughput). 

Now, I *have* seen bottlenecks around AnnotatedLargeText and console 
annotation -- particularly in pipeline, although the logging there is 
getting a rework anyway that will reduce the impact of that.  

If we want to optimize logging performance, *I think it makes more sense to 
target the build logging APIs, specifically the I/O bits* -- 
there's some room to optimize the way we use and interact with the streams, 
and to use more modern NIO APIs.  I would also like to see reuse of buffers, 
because this is a *key cause* of high GC pressure in Jenkins if you're 
touching small log files, and one of the main reasons we see hundreds of 
MB/s of object garbage generated (continuously reallocating byte arrays 
that could be reused).  Remoting is another cause of GC pressure (depending 
on the protocol used).
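
To illustrate the buffer-reuse point (a toy Groovy sketch, not actual Jenkins 
log-handling code):

```
// Copy a stream using one caller-supplied buffer instead of allocating a
// fresh byte[] per chunk/file -- the allocation churn is what drives the
// GC pressure described above.
void copyWithReusableBuffer(InputStream src, OutputStream dst, byte[] buffer) {
    int read
    while ((read = src.read(buffer, 0, buffer.length)) != -1) {
        dst.write(buffer, 0, read)
    }
}

byte[] shared = new byte[64 * 1024]   // kept (or pooled) by the caller and reused
```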

On Tuesday, November 28, 2017 at 10:35:54 AM UTC-5, Jesse Glick wrote:
>
> On Mon, Nov 27, 2017 at 7:56 PM, Stephen Connolly 
>  wrote: 
> > any other user provided handler could [also queue unformatted log 
> records] 
>
> Theoretically. I cannot think of any such case. Handlers are known to 
> be registered by Jenkins core; the `support-core` plugin; and the 
> servlet container. 
>
> Am I the only one to think it strange that we are discussing using 
> very limited core developer resources to replace a library used 
> pervasively throughout the Jenkins code base with a functionally 
> identical library with no demonstrable benefit to users and rather 
> marginal impact on developers? 
>



Re: Altering a plugin - Jenkins Stage view - where to look for in the code

2017-03-17 Thread Samuel Van Oort
Dear Martin,
I am the Stage View maintainer -- there isn't an extension point for this 
today, but if you want to add one, you'd have to make a new API endpoint 
like this one:

https://github.com/jenkinsci/pipeline-stage-view-plugin/blob/master/rest-api/src/main/java/com/cloudbees/workflow/rest/endpoints/JobAPI.java#L72

and also create a version of this that adds filter conditions:

https://github.com/jenkinsci/pipeline-stage-view-plugin/blob/master/rest-api/src/main/java/com/cloudbees/workflow/rest/external/JobExt.java#L121

and then you'd need to surface this somewhere in the UI when making the 
initial HTTP requests (in the UI plugin) and change the API call it 
invokes (in a variety of places).
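
Not the extension point itself, but for reference, the core filtering you 
describe looks roughly like this from the script console (Groovy; the job 
name, parameter name, and value below are placeholders):

```
import hudson.model.ParametersAction
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.job.WorkflowJob

// Keep only the historical builds of one job whose parameter matches a value.
def job = Jenkins.getInstance().getItemByFullName('my-folder/my-pipeline', WorkflowJob.class)
def matching = job.builds.findAll { build ->
    def params = build.getAction(ParametersAction.class)
    params?.getParameter('TARGET_ENV')?.value == 'staging'
}
matching.each { println "${it.displayName} -> ${it.result}" }
```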

Regards,
Sam

On Friday, March 17, 2017 at 2:40:58 AM UTC-4, Martin Holeček wrote:
>
> Dear all,
>
> please, which part of the pipeline stage view plugin's sources would be 
> most suitable for an insertion of a filter?
>
> What I need to accomplish - I do want to see historical builds that have a 
> specific parameter equal a specific value for one specific job. 
> I do not want to filter jobs, like every answer on stackoverflow assumes, 
> I want to see the nice stageview but filtered... and since there seems to 
> not be such a functionality, I need to try to create one. Yea, like 
> filtering the historical builds in the left column, but doing this for the 
> stageview. 
>
> I was not able to locate such a point in the sources where a filter could 
> be inserted.
>
> If You would  think, that this solution is not wise, if maybe exporting 
> the build history and doing select over csv file would serve better, tell 
> me ;)
>
> Thank You very much!
> Martin
>
>



Re: Does someone maintain parameterized trigger plugin?

2017-01-30 Thread Samuel Van Oort
Yes, I've been spread too thin to do a proper job of maintaining it along 
with other commitments, unfortunately -- I'd be happy for someone else to 
take it up if they've got the time to focus on it.

On Monday, January 30, 2017 at 2:49:59 PM UTC-5, Oleg Nenashev wrote:
>
> IIRC Sam Van Oort and Kristin Whetstone were the last formal maintainers 
> of the plugin. But it does not seem they still have time/interest for its 
> maintenance. They are in Cc.
>
> BTW, I am pretty fine if anyone takes REAL ownership of the plugin
>
> On Friday, 27 January 2017 at 18:35:33 UTC+1, ogondza wrote:
>>
>> On 2017-01-27 16:04, Jesse Glick wrote: 
>> > On Thu, Jan 26, 2017 at 6:02 AM,   wrote: 
>> >> Does someone maintain this plugin? 
>> > 
>> > If Oliver does not, then I guess it is yours for the taking… 
>> > 
>>
>> I do not intend to. I have done a couple of releases since it was 
>> unmaintained. Please, go for it. 
>>
>> -- 
>> oliver 
>>
>



Re: Proposal: Out-of-order weekly release? Just another attempt

2016-11-07 Thread Samuel Van Oort
+1 - this needs an out-of-order release.  The current regressions cause two 
major breakages.

A small aspect of JENKINS-39404 is responsible for JENKINS-39555, which 
badly breaks the whole Pipeline plugin suite (and goodness knows what 
else). 

I have a fix for that which works, but I am making sure it doesn't have side 
effects.

On Monday, November 7, 2016 at 1:11:19 PM UTC-5, Oleg Nenashev wrote:
>
> Hi,
>
> According to my previous experience 
> , I 
> assume that nobody in this list cares about the Weekly releases. And you're 
> doing right, LTS is the only way to get enough stability. But I want to try 
> again and to propose out-of-order release (2.30). 
>
> *Why?*
>
>- Many issues opened to JENKINS-39414 
>, blocker for any 
>instances having Ruby Runtime
>- LTS baseline selection is coming
>   - Starting from 2.26 weekly releases are screwed up by regressions
>   - If we do not resolve the fallout, we may have to select 2.25 as 
>   the new LTS baseline
>   - In such case major improvements like JNLP4 won't get into the 
>   release. 
>   - According to Vivek, it's also going to impact the anticipated 
>   BlueOcean public release
>
> *What to fix?*
>
>- JENKINS-39414  - 
>has not been completely fixed in 2.29
>   - Stapler needs to be fixed again, PR #2622 
>   
>- JENKINS-39465  - 
>Critical bug, which blocks the JNLP4 adoption
>   - PR #2621  has two 
>   +1s
>- Not important: Several other minor bugfixes, which could be 
>integrated by tomorrow
>
> *When?*
>
>- I propose to do it tomorrow (Nov 8)
>
>
> What do you think about it?
>
>
> Best regards,
> Oleg
>



Re: Proposal: Out-of-order weekly release? Just another attempt

2016-11-07 Thread Samuel Van Oort
I'm looking into JENKINS-39555 now -- for some reason the usual JIRA 
notification wasn't delivered.  

On Monday, November 7, 2016 at 1:11:19 PM UTC-5, Oleg Nenashev wrote:
>
> Hi,
>
> According to my previous experience 
> , I 
> assume that nobody in this list cares about the Weekly releases. And you're 
> doing right, LTS is the only way to get enough stability. But I want to try 
> again and to propose out-of-order release (2.30). 
>
> *Why?*
>
>- Many issues opened to JENKINS-39414 
>, blocker for any 
>instances having Ruby Runtime
>- LTS baseline selection is coming
>   - Starting from 2.26 weekly releases are screwed up by regressions
>   - If we do not resolve the fallout, we may have to select 2.25 as 
>   the new LTS baseline
>   - In such case major improvements like JNLP4 won't get into the 
>   release. 
>   - According to Vivek, it's also going to impact the anticipated 
>   BlueOcean public release
>
> *What to fix?*
>
>- JENKINS-39414  - 
>has not been completely fixed in 2.29
>   - Stapler needs to be fixed again, PR #2622 
>   
>- JENKINS-39465  - 
>Critical bug, which blocks the JNLP4 adoption
>   - PR #2621  has two 
>   +1s
>- Not important: Several other minor bugfixes, which could be 
>integrated by tomorrow
>
> *When?*
>
>- I propose to do it tomorrow (Nov 8)
>
>
> What do you think about it?
>
>
> Best regards,
> Oleg
>



Re: Setting CMS or G1 as the default GC algorithm for Jenkins?

2016-10-14 Thread Samuel Van Oort
For interested parties, this is tracked 
in https://issues.jenkins-ci.org/browse/JENKINS-38802

On Tuesday, October 11, 2016 at 12:57:49 PM UTC-4, Samuel Van Oort wrote:
>
> Baptiste, I completely agree -- GC logs are a gimme for any production 
> setting IMO (and since it's in the JIRA, it won't be missed).
>
> As a point of reference, I'd reached out to my friends (and former 
> colleagues) at a multi-billion dollar company known for open source, and 
> CMS is still the preference there for production app servers.  It kept 
> request times stable on the business-critical services layer I owned, and 
> prevented errors/hangups due to timeouts from GC pauses.
>
> On Saturday, October 8, 2016 at 5:06:16 AM UTC-4, Baptiste Mathus wrote:
>>
>> +1. On the mid-size cluster I worked on in the past we did that quite a 
>> long time ago and it did improve the user experience. Clearly less pauses.
>>
>> (As said on Jira, while doing so I would definitely take the opportunity 
>> to also set up gc logs enabled by default (with rotation). There's no 
>> noticeable impact on perf even under high load, and this is invaluable 
>> information when things go wrong.)
>>
>> On 7 Oct 2016 at 9:45 PM, "Samuel Van Oort" <svan...@cloudbees.com> wrote:
>>
>>> Kanstanstin, 
>>>
>>> > suggest collect GC monitoring 
>>>
>>> I'm glad you asked! I actually spent yesterday digging into problems 
>>> with two high-load, large Jenkins instances which were experiencing issues, 
>>> and for which we had GC logs available.  In both cases, we observed 
>>> multi-second pause-the-world full GC cycles with the default parallel GC -- 
>>> please find attached a screenshot from one of the GC analyses.
>>>
>>> > G1 as default for java9... depends on how much patches and differences 
>>> are between java8 and java9 and especially oracle vs openjdk.
>>>
>>> Indeed.  It's pretty contentious for Java 9 as well.
>>>
>>> The benchmarks I've seen for G1 on JDK 7 have been consistently poor, so 
>>> best to avoid it.
>>>
>>> On Java 8, where G1 has gotten a lot of optimizations over time, it 
>>> seems to trade back and forth with CMS if you care about pause time, 
>>> depending on situation:
>>> http://eivindw.github.io/2016/01/08/comparing-gc-collectors.html
>>> http://blog.dripstat.com/java-g1-gc-review/
>>>
>>> CMS is harder to tune, and falls back to longer stop-the-world pauses 
>>> eventually in some cases, but tends to chew a bit less CPU.  
>>> G1 seems to have an advantage at >4 GB heaps based on benchmarks and 
>>> tuning advice I've seen (though I haven't run it in a production setting 
>>> yet). 
>>>
>>> I think for *most* users running CI/CD systems with generally-spiky 
>>> load, generally very low CPU load (or at least some unloaded cores), and 
>>> multicore systems with 2-8 GB heap *probably* G1 is the best overall 
>>> solution.  CMS has some advantages if your system tends to run highly 
>>> loaded (better GC throughput). 
>>>
>>> Maybe you've got experiences to contribute though? 
>>>
>>> @jglick:
>>>  > if the user is selecting a newer JRE than you expected, it would be 
>>> better to let the JRE’s own defaults apply. 
>>>
>>> Point.  My expectation would be: java 7 --> CMS, Java 8 --> probably G1 
>>> (maybe CMS), Java 9+ --> default for platform. 
>>>
>>> On Friday, October 7, 2016 at 1:27:55 PM UTC-4, Kanstantsin Shautsou 
>>> wrote:
>>>>
>>>> Applying theory to practise could be not proved without facts. I would 
>>>> suggest collect GC monitoring from jenkins on jenkins (or whatever it 
>>>> called now) and compare. Then it will be absolutely clear what is better.
>>>>
>>>> G1 as default for java9... depends on how much patches and differences 
>>>> are between java8 and java9 and especially oracle vs openjdk.
>>>>
>>>> On Friday, October 7, 2016 at 12:29:56 AM UTC+3, Samuel Van Oort wrote:
>>>>>
>>>>> Hi guys, 
>>>>> I'd like to propose that we explicitly set the Java args for the 
>>>>> Jenkins packages to use either Concurrent Mark Sweep or G1 as the default 
>>>>> GC algorithm. 
>>>>>
>>>>> The reason for this is that Jenkins is generally used as a 
>>>>> long-running server process, and large stop-the-world GC pauses can pose 
>>>

Re: [DISCUSS] Time for Jenkins to require Java 8 to run

2016-10-14 Thread Samuel Van Oort
I will jump for joy when everything is on Java 8.  Functional features or 
death!  We'd finally be able to consume the powerful libraries that are now 
Java 8-only (replacing Guava caches with Caffeine, for example, for a ~3x 
speed boost on all caching). 

Also @daniel-beck, this is another argument for getting the current JVM 
stats (which I wanted for assessing default GC settings)



Re: Setting CMS or G1 as the default GC algorithm for Jenkins?

2016-10-11 Thread Samuel Van Oort
Baptiste, I completely agree -- GC logs are a gimme for any production 
setting IMO (and since it's in the JIRA, it won't be missed).

As a point of reference, I'd reached out to my friends (and former 
colleagues) at a multi-billion dollar company known for open source, and 
CMS is still the preference there for production app servers.  It kept 
request times stable on the business-critical services layer I owned, and 
prevented errors/hangups due to timeouts from GC pauses.

On Saturday, October 8, 2016 at 5:06:16 AM UTC-4, Baptiste Mathus wrote:
>
> +1. On the mid-size cluster I worked on in the past we did that quite a 
> long time ago and it did improve the user experience. Clearly less pauses.
>
> (As said on Jira, while doing so I would definitely take the opportunity 
> to also set up gc logs enabled by default (with rotation). There's no 
> noticeable impact on perf even under high load, and this is invaluable 
> information when things go wrong.)
>
> On 7 Oct 2016 at 9:45 PM, "Samuel Van Oort" <svan...@cloudbees.com> wrote:
>
>> Kanstanstin, 
>>
>> > suggest collect GC monitoring 
>>
>> I'm glad you asked! I actually spent yesterday digging into problems with 
>> two high-load, large Jenkins instances which were experiencing issues, and 
>> for which we had GC logs available.  In both cases, we observed 
>> multi-second pause-the-world full GC cycles with the default parallel GC -- 
>> please find attached a screenshot from one of the GC analyses.
>>
>> > G1 as default for java9... depends on how much patches and differences 
>> are between java8 and java9 and especially oracle vs openjdk.
>>
>> Indeed.  It's pretty contentious for Java 9 as well.
>>
>> The benchmarks I've seen for G1 on JDK 7 have been consistently poor, so 
>> best to avoid it.
>>
>> On Java 8, where G1 has gotten a lot of optimizations over time, it seems 
>> to trade back and forth with CMS if you care about pause time, depending on 
>> situation:
>> http://eivindw.github.io/2016/01/08/comparing-gc-collectors.html
>> http://blog.dripstat.com/java-g1-gc-review/
>>
>> CMS is harder to tune, and falls back to longer stop-the-world pauses 
>> eventually in some cases, but tends to chew a bit less CPU.  
>> G1 seems to have an advantage at >4 GB heaps based on benchmarks and 
>> tuning advice I've seen (though I haven't run it in a production setting 
>> yet). 
>>
>> I think for *most* users running CI/CD systems with generally-spiky load, 
>> generally very low CPU load (or at least some unloaded cores), and 
>> multicore systems with 2-8 GB heap *probably* G1 is the best overall 
>> solution.  CMS has some advantages if your system tends to run highly 
>> loaded (better GC throughput). 
>>
>> Maybe you've got experiences to contribute though? 
>>
>> @jglick:
>>  > if the user is selecting a newer JRE than you expected, it would be 
>> better to let the JRE’s own defaults apply. 
>>
>> Point.  My expectation would be: java 7 --> CMS, Java 8 --> probably G1 
>> (maybe CMS), Java 9+ --> default for platform. 
>>
>> On Friday, October 7, 2016 at 1:27:55 PM UTC-4, Kanstantsin Shautsou 
>> wrote:
>>>
>>> Applying theory to practise could be not proved without facts. I would 
>>> suggest collect GC monitoring from jenkins on jenkins (or whatever it 
>>> called now) and compare. Then it will be absolutely clear what is better.
>>>
>>> G1 as default for java9... depends on how much patches and differences 
>>> are between java8 and java9 and especially oracle vs openjdk.
>>>
>>> On Friday, October 7, 2016 at 12:29:56 AM UTC+3, Samuel Van Oort wrote:
>>>>
>>>> Hi guys, 
>>>> I'd like to propose that we explicitly set the Java args for the 
>>>> Jenkins packages to use either Concurrent Mark Sweep or G1 as the default 
>>>> GC algorithm. 
>>>>
>>>> The reason for this is that Jenkins is generally used as a long-running 
>>>> server process, and large stop-the-world GC pauses can pose a problem for 
>>>> stability and user experience.   The default Java GC algorithm 
>>>> (ParallelGC) 
>>>> is tuned to optimize throughput at the expense of potentially multi-second 
>>>> major GC pauses with large heaps, which is obviously not a good fit for 
>>>> this case. This is why Oracle is moving to G1 as the default as of Java 9.
>>>>
>>>> I would suggest either CMS or G1 make good defaults for normal Jenkins 
>>>> users, beca

Re: Can I get on the cert team?

2016-10-11 Thread Samuel Van Oort
Adding myself to the train, if I'd be able to contribute.

Github, IRC, and jenkins.io username: svanoort
Email: samvanoort at gmail

And yes, of course I have an ICLA and 2FA on for Github.

On Tuesday, October 11, 2016 at 11:15:11 AM UTC-4, Baptiste Mathus wrote:
>
>
> 2016-10-10 15:17 GMT+02:00 Daniel Beck :
>
>>
>> > On 10.10.2016, at 14:30, Andrew Bayer > > wrote:
>> >
>> > GitHub username - abayer
>> > jenkins.io username - abayer
>> > email: andrew...@gmail.com 
>> > IRC: abater
>>
>> As security officer, I approve this addition to the team. We can always 
>> use more help! (hint to all others reading this :-) )
>>
>
> OK, so I'll take that sentence as an invitation to steal that thread :-). 
> Could I get on the team too? 
> Hoping to be helpful to you and the team :)
>
> GitHub username - batmat
> jenkins.io username - batmat
> email: batmat at batmat punkt net
> IRC: batmat
>
> 2FA enabled & iCLA submitted already.
>
> Thanks
>
> -- Baptiste
>
>



Re: Setting CMS or G1 as the default GC algorithm for Jenkins?

2016-10-07 Thread Samuel Van Oort
Kanstanstin, 

> suggest collect GC monitoring 

I'm glad you asked! I actually spent yesterday digging into problems with 
two high-load, large Jenkins instances which were experiencing issues, and 
for which we had GC logs available.  In both cases, we observed 
multi-second pause-the-world full GC cycles with the default parallel GC -- 
please find attached a screenshot from one of the GC analyses.

> G1 as default for java9... depends on how much patches and differences 
are between java8 and java9 and especially oracle vs openjdk.

Indeed.  It's pretty contentious for Java 9 as well.

The benchmarks I've seen for G1 on JDK 7 have been consistently poor, so 
best to avoid it.

On Java 8, where G1 has gotten a lot of optimizations over time, it seems 
to trade back and forth with CMS if you care about pause time, depending on 
situation:
http://eivindw.github.io/2016/01/08/comparing-gc-collectors.html
http://blog.dripstat.com/java-g1-gc-review/

CMS is harder to tune, and falls back to longer stop-the-world pauses 
eventually in some cases, but tends to chew a bit less CPU.  
G1 seems to have an advantage at >4 GB heaps based on benchmarks and tuning 
advice I've seen (though I haven't run it in a production setting yet). 

I think for *most* users running CI/CD systems with generally-spiky load, 
generally very low CPU load (or at least some unloaded cores), and 
multicore systems with 2-8 GB heap *probably* G1 is the best overall 
solution.  CMS has some advantages if your system tends to run highly 
loaded (better GC throughput). 

Maybe you've got experiences to contribute though? 

@jglick:
 > if the user is selecting a newer JRE than you expected, it would be 
better to let the JRE’s own defaults apply. 

Point.  My expectation would be: java 7 --> CMS, Java 8 --> probably G1 
(maybe CMS), Java 9+ --> default for platform. 

On Friday, October 7, 2016 at 1:27:55 PM UTC-4, Kanstantsin Shautsou wrote:
>
> Applying theory to practise could be not proved without facts. I would 
> suggest collect GC monitoring from jenkins on jenkins (or whatever it 
> called now) and compare. Then it will be absolutely clear what is better.
>
> G1 as default for java9... depends on how much patches and differences are 
> between java8 and java9 and especially oracle vs openjdk.
>
> On Friday, October 7, 2016 at 12:29:56 AM UTC+3, Samuel Van Oort wrote:
>>
>> Hi guys, 
>> I'd like to propose that we explicitly set the Java args for the Jenkins 
>> packages to use either Concurrent Mark Sweep or G1 as the default GC 
>> algorithm. 
>>
>> The reason for this is that Jenkins is generally used as a long-running 
>> server process, and large stop-the-world GC pauses can pose a problem for 
>> stability and user experience.   The default Java GC algorithm (ParallelGC) 
>> is tuned to optimize throughput at the expense of potentially multi-second 
>> major GC pauses with large heaps, which is obviously not a good fit for 
>> this case. This is why Oracle is moving to G1 as the default as of Java 9.
>>
>> I would suggest either CMS or G1 make good defaults for normal Jenkins 
>> users, because they greatly reduce average and maximum GC pause times.
>>
>> Thoughts?
>>
>> Regards,
>> Sam Van Oort
>>
>



Setting CMS or G1 as the default GC algorithm for Jenkins?

2016-10-06 Thread Samuel Van Oort
Hi guys, 
I'd like to propose that we explicitly set the Java args for the Jenkins 
packages to use either Concurrent Mark Sweep or G1 as the default GC 
algorithm. 

The reason for this is that Jenkins is generally used as a long-running 
server process, and large stop-the-world GC pauses can pose a problem for 
stability and user experience.   The default Java GC algorithm (ParallelGC) 
is tuned to optimize throughput at the expense of potentially multi-second 
major GC pauses with large heaps, which is obviously not a good fit for 
this case. This is why Oracle is moving to G1 as the default as of Java 9.

I would suggest that either CMS or G1 would make a good default for normal 
Jenkins users, because they greatly reduce average and maximum GC pause times.

Thoughts?

Regards,
Sam Van Oort



Re: Docker workflow runs with user not fully configured

2016-09-07 Thread Samuel Van Oort

 

> It is not “known” to me, the plugin maintainer, so as I wrote in PR 
> 57: if there is a concrete, small, easily reproducible test case that 
> demonstrates something that does not work which you feel should, 
> please file it. 
>

To clarify: I'm not saying that this is the result of a *problem* with how 
docker.inside is implemented, just that there is a bad interaction between a 
fairly reasonable implementation and what packaging needs in order to work 
fully (which is more than normal).  Nobody is trying to hide bugs from you 
or shoot holes in the docker-workflow plugin; there are just some cases where 
docker.inside isn't a good mapping to what is needed, so another approach is 
best, and this is one of them.

FWIW there's no guaranteed, comprehensive way to create and fully configure 
a user at runtime across all containers. When needed that is best handled 
within the container builds (or with helper scripts if you need to control 
the created user at runtime).  Containers may contain stripped-down 
distributions without the normal user management facilities (or with very 
feature-limited versions if core utilities are not included).  At this 
point I don't assume that *anything* exists and works in a base image until 
I see it run successfully.



Re: Docker workflow runs with user not fully configured

2016-09-07 Thread Samuel Van Oort
Oliver, 
Apologies for the delayed response, but this is a known issue around how 
docker.inside is implemented.  It was very painful when setting up the 
docker image. 

This is why the docs and workflow script for the docker image don't use 
docker.inside 
-- https://github.com/jenkinsci/packaging/blob/master/docker/README.md
When using the packaging image in Jenkins builds, we have to invoke Docker 
manually (or pass extra run arguments) rather than use the inside command.
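
For illustration, the sh step ends up running something along these lines 
(image tag, mount path, and make target are placeholders, not the exact 
packaging setup):

```
# Sketch only: bypass docker.inside and run as the image's own, fully
# configured user instead of an injected host uid/gid.
docker run --rm \
  -v "$WORKSPACE:/build" -w /build \
  jenkins-packaging-builder:0.1 \
  make rpm
```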

The sudo-able packaging test images use a different approach: applying 
templating to the Dockerfiles and rebuilding, so you can always have a 
correctly permissioned user.

After some tinkering, I hit upon a much better way to do this, by 
dynamically creating a user within the container on the fly (which is also 
granted sudo permissions within the container).   There's a PR out for 
this: https://github.com/jenkinsci/packaging/pull/54
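
Purely to illustrate the idea (this is not the PR's code), the in-container 
step looks roughly like this, assuming the image ships useradd and sudo and 
the desired uid/gid are passed in from outside:

```
# Rough sketch of the idea, NOT the PR's actual code.  Assumes the image has
# useradd/groupadd/sudo, and that the desired uid/gid are passed in from the
# host side (e.g. docker run -e BUILD_UID=$(id -u) -e BUILD_GID=$(id -g) ...).
groupadd -g "$BUILD_GID" builder 2>/dev/null || true
useradd  -u "$BUILD_UID" -g "$BUILD_GID" -m builder
echo 'builder ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/builder
chmod 0440 /etc/sudoers.d/builder
```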

It's still pending review, but it's been tested out and works (it was 
actually used for something small).  If it gets the community nod, we could 
merge it and have a more robust system (the approach you're using requires a 
local user to already exist in the image).

Hope that helps clarify somewhat!

Sam




On Friday, September 2, 2016 at 6:06:08 AM UTC-4, ogondza wrote:
>
> I am tying to get jenkinsci/packaging to work from Pipeline using 
> docker-workflow-plugin: 
>
> ``` 
>  builder = docker.build 'jenkins-packaging-builder:0.1' 
>  builder.inside { 
>  sh "make rpm ..." 
>  } 
> ``` 
>
> All seems well except that rpmbuild complains in a couple of places: no 
> $HOME is defined, and the uid/gid for generated files does not exist in the 
> system. 
>
> The problem is the plugin instructs docker to "use" a user with a uid/gid 
> matching the outer user (so the filesystem can be shared), but the user 
> account does not exist inside. It seems that docker "creates" the user 
> to some extent, but it is not fully configured, causing tools to fail in 
> weird ways. 
>
> I guess I can switch to using the container as root somehow, or update the 
> image to create some user for this purpose, but this seems like an 
> inherent struggle with the approach. What is the best practice here? 
>
> In case it is relevant I am using docker 1.10.3 and 
> docker-workflow-plugin 1.6. 
>
> Thanks 
> -- 
> oliver 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/232b7b90-bbb5-4f57-85ee-b8892833f10b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Easy path for the latest Jenkins in Ubuntu

2016-06-16 Thread Samuel Van Oort
I took an initial look at this as a maintainer for Jenkins Linux packaging 
-- I find the snap format itself rather interesting.  I am curious to see 
where it goes now that Canonical has released support for non-Ubuntu 
distros.  I also like that it makes Jenkins more visible and discoverable 
for users.

The bad news: right now, I don't think this packaging is at the level of 
maturity (I've called out a couple of points in the PR) that it would need 
to reach for inclusion as a main distribution for Jenkins.  There are also 
some infrastructure issues with building on Ubuntu 16.04 in the main Jenkins 
packaging project (potentially solvable by using docker, but a bit annoying). 

The good news: it would be a great candidate for a distribution path similar 
to the Docker image's, where it is initially built up independently and 
then, once it hits an appropriate level of maturity, gets official inclusion 
and a download link on Jenkins.io.  I'd really like to see this packaging 
option get fleshed out further and hit that point, because I think there's 
some exciting potential. 

On Friday, June 10, 2016 at 7:16:15 PM UTC-4, Evan Dandrea wrote:
>
> On Fri, 10 Jun 2016 at 14:25 Jesse Glick  > wrote:
>
>> On Fri, Jun 10, 2016 at 12:31 AM, Evan Dandrea
>>  wrote:
>> > Let me know what you think about getting the snapcraft.yaml file in your
>> > repo so you can control exactly what gets built.
>>
>> A PR to https://github.com/jenkinsci/packaging sounds like a good idea.
>
>
> Cheers. I've put one together:
> https://github.com/jenkinsci/packaging/pull/57
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/5801578a-8e32-45ea-9c54-9f8df1f9f9e8%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Pipeline View Plugin

2016-03-25 Thread Samuel Van Oort
Sergei, I love the visualization this provides of the pipeline DAG!

One thing worth mentioning: I have been working on a scalable, 
high-performance, parallel-aware FlowAnalyzer for pipeline stage view that 
is generalized enough to assist with the data you are visualizing.  Since 
the rest-api portion of stage view ships as a plugin separate from the UI, 
I might suggest consuming that API, or its extensible Analyzer algorithm, 
to power this view.  

Advantages: you don't have to roll your own API, and you get a variety of 
metrics on each FlowNode (pause/execution timing, status, executor used) and 
stage.  The algorithm also scales to complex flows with thousands of nodes 
and caches results for completed runs.  Finally, it supports limiting the 
number of FlowNodes returned per stage, so one can render a subset when a 
stage contains a loop (or deeply nested blocks) that would return thousands 
of nodes. 
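
If it helps, consuming it is basically just an HTTP GET against a run; 
something like the following (the endpoint path is from memory -- the 
rest-api module's docs have the exact URLs, and the job name and build 
number here are placeholders):

```
# Sketch only: fetch the stage/node data the rest-api module exposes for a run.
curl -s "$JENKINS_URL/job/my-pipeline/42/wfapi/describe" | python -m json.tool
```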

Thoughts?

On Monday, March 21, 2016 at 2:52:04 AM UTC-4, Sergei Egorov wrote:
>
> Hey everyone, 
>
> Yesterday I decided to publish sources for my pipeline visualization 
> plugin: https://github.com/bsideup/jenkins-pipeline-view
>
> 
>
>
> What's cool about it? 
> First of all, it *can handle any graph* provided by pipeline because it 
> uses a very powerful *JS graph library* to position steps. 
> Also, it's a *React* application, written in *ES7* (not even ES6!), with 
> ImmutableJS and RxJS inside - so hipsters will be happy :D But in fact, it 
> means that it's damn easy to develop this plugin and provide more 
> functionality. 
> I use *Webpack* to bundle everything (JS, CSS, fonts, images, SVG icons) 
> in one single pipe.js file. *No Jenkins JS Modules*, no conflicts, no 
> impact on others. Even CSS will not conflict because of *CSS-modules* ( 
> http://glenmaddern.com/articles/css-modules )
> All icons are SVG ones and look good on any screen, retina or not, and any 
> zoom level.
>
> I use Jackson on the backend side to serialize FlowNodes and their 
> actions. *Why Jackson?* Because it was much easier to implement 
> serialization of selected (non-exposed) fields, with class info included, 
> compared to Stapler. I saw something was done about classinfo in Stapler, 
> but at the time this plugin was created, it hadn't been delivered to 
> Jenkins core yet. Also, *almost none of the Pipeline actions are 
> @Expose-d*.
>
> I use *gradle-jpi-plugin* instead of Maven because it's much easier to 
> describe some complex build process with Gradle, especially when frontend 
> build is involved. In fact, it's just one line:
>
> https://github.com/bsideup/jenkins-pipeline-view/blob/954b895b6574cdf34815ff94a4a8db3ad3811aeb/build.gradle#L61
>
>
> Future development
> Will I continue to develop it? Definitely! Here, *at ZeroTurnaround*, we 
> use Jenkins a lot, and eventually we will migrate to the pipelines, and 
> proper visualization of the process is a must for us. And feel free to 
> contribute as well, it's really a good chance to learn *modern JS stack* as 
> well :)
>
>
>
> Best regards,
> Sergei Egorov
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/9b0c2cbd-e6ec-4ae0-9b51-956d92f80b1f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Jenkins 2.0 initial setup plugin selection

2016-03-10 Thread Samuel Van Oort


On Thursday, March 10, 2016 at 10:58:38 AM UTC-5, Daniel Beck wrote:
>
>
> On 10.03.2016, at 16:17, Samuel Van Oort <svan...@cloudbees.com 
> > wrote: 
> > 
> > Embeddable build status plugin would be a strong candidate for 
> inclusion.  I'm not sure what category would be best, but "Organization and 
> Administration" and "Build Features" both make some sense. 
>
> Noted. FWIW I see this more in the former category. 'Build Features' is a 
> hack anyway and I would love to see some other suggestions…  


> > Distributed builds is feeling very thin so far (as you've mentioned), 
> perhaps some of the docker stuff: 
> > 
> > - docker-plugin (fills the distributed *and* docker categories, woo) 
> > - docker-workflow plugin?  (the docker pipeline).  Or possibly under 
> pipelines? 
>
> Are both of them strong enough if we don't consider the category issue? 
> Worst case we'll need to rethink those so it makes sense for the plugins we 
> suggest to new users. 
>

Having worked with it extensively at this point, I'd say the docker pipeline 
is *definitely* strong enough; the warts are mostly around the initial 
docker configuration.

I mention the docker-plugin because it is quite popular (installations 
doubled in the last year), and there are guides for using it out there: 

https://developer.jboss.org/people/pgier/blog/2014/06/30/on-demand-jenkins-slaves-using-docker?_sscc=t
https://wiki.bitnami.com/Applications/BitNami_Jenkins/Configure_Jenkins_container_slaves

Presently I'm using the docker pipeline for similar use cases (and then 
some), but it seems like a logical choice for many users.  

Or, at the very least, one of the docker options should be suggested for 
distributed builds in docker containers (whether it's this, docker-slaves, 
docker custom build environment, etc.).  Since this is a common use case, it 
would be quite helpful to have a specific option for users.
 

>
> - 
>
> > - Would it make sense to put parameterized trigger in the initial 
> recommended plugins? 
> > - Is gradle plugin use strong enough to make it part of the core 
> recommended plugins? 
> > - Junit & checkstyle would probably make sense in the top recommended 
> plugins list 
>
> I guess we're starting the conversation on the initial plugin selection 
> now? :-) 
>

Did I stick a foot in it? :-)  The general recommendations there seem quite 
sensible overall, but perhaps I'll un-stick my foot since this may be a tad 
contentious.
 

>
> A reminder: 
>
> > For the recommended plugins, I had to improvise, as these have not been 
> discussed yet. Despite this I decided to provide a "real" selection as 
> we've received quite confused/negative feedback on the previous list 
> contents, and didn't want that repeated with the recommended plugins list. 
>
> Your suggestions look sensible. FWIW I mainly added Gradle (and Ant) to 
> not have that category empty (and failed to do the same for build 
> analysis…). Maven plugin is out of the list, and Maven build step is a core 
> feature, so the list lacks wrt those. 
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/4e86e07b-35ae-4623-9f48-06d6ca243e82%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Jenkins 2.0 initial setup plugin selection

2016-03-10 Thread Samuel Van Oort
The list looks generally really solid (as do the recommendations). 

Looking at it, a couple comments:

- Would it make sense to put parameterized trigger in the initial 
recommended plugins?
- Is gradle plugin use strong enough to make it part of the core 
recommended plugins?
- Junit & checkstyle would probably make sense in the top recommended 
plugins list

Embeddable build status plugin would be a *strong* candidate for inclusion. 
 I'm not sure what category would be best, but "Organization and 
Administration" and "Build Features" both make some sense. 

Distributed builds is feeling very thin so far (as you've mentioned), 
perhaps some of the docker stuff:

- docker-plugin (fills the distributed *and* docker categories, woo)
- docker-workflow plugin?  (the docker pipeline).  Or possibly under 
pipelines?

On Wednesday, March 9, 2016 at 7:01:05 PM UTC-5, Daniel Beck wrote:
>
>
> On 23.02.2016, at 23:20, Daniel Beck  
> wrote: 
>
> > Hi everyone, 
> > 
> > I added two additional criteria to the wiki: The plugin shouldn't 
> require a restart to work (otherwise it's not exactly a great getting 
> started experience), and it shouldn't be just a UI/theming plugins that 
> replaces part of the Jenkins UI (as including them in a list of 
> suggested/recommended plugins would just be weird). 
> > 
> > Based on these criteria I tried to build a list of plugins to feature in 
> the initial setup wizard. 
> > 
> > A reminder: All plugins are still available in the plugin manager. But 
> with 1100+ plugins now, it'd be great if we could provide a curated subset 
> to get new users started. This also means that particularly complex 
> plugins, that require knowledge of Jenkins internals, even though they may 
> be useful, are basically excluded by default. 
> > 
> > To approximate popularity, I used the previous install count multiplied 
> by the growth in the previous year (as neither alone is a sufficient 
> indicator of popularity). Then I sorted by this value, and went through the 
> list, being mildly judgmental. This is the result: 
> > 
> > 
> https://docs.google.com/spreadsheets/d/1TKziW5oEfX9NYAPrZkz-wXPt5s-zG-znIB5-Jwt3tmw/edit#gid=0
>  
> > 
> > This is the list of plugins I came up with for the initial setup dialog. 
> If a plugin wouldn't be used, the reason is stated. Regarding 'Unsafe': 
> Wearing my security officer hat, I consider every plugin with Groovy 
> scripting that is not using the Groovy Sandbox to be unsafe, even if it 
> implements security measures. 
> > 
> > Please review the list, and let me know what you think. 
>
> Hi everyone, 
>
> based on all the feedback in this thread, I updated the spreadsheet linked 
> above, and opened a PR based on that: 
>
> https://github.com/jenkinsci/jenkins/pull/2111 
>
> The PR description explains in detail what I did, and why. I plan to 
> include this selection in the next 2.0 preview. 
>
> I'd be happy to incorporate further feedback. Please provide it in this 
> thread, and not the PR, to not fragment this conversation. 
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/4fe3a3db-b44e-45ae-aad0-3df96677463e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Jenkins 2.0 initial setup plugin selection

2016-03-03 Thread Samuel Van Oort
Would it make sense to add the pipeline-stage-view plugin as a candidate for 
inclusion alongside the pipeline DSL?  
It provides a greatly improved out-of-the-box experience for new users of 
pipeline. 

https://wiki.jenkins-ci.org/display/JENKINS/Pipeline+Stage+View+Plugin

As far as the wiki criteria go:

* While it was only recently released to the general public, it had 60+ 
watchers just on the release issue (JENKINS-31154) and social media activity 
around it, plus people watching the source repo even before content landed, 
so we can safely say it is of general interest
* It integrates well with Jenkins, of course, since it was part of a tested 
codebase
* Not just discoverable: its features are fairly understandable out of the 
box and ease the path for a new user into pipeline (as well as helping 
people scale up in Jenkins and move into full CD use)
* Hosted on JenkinsCI, yes
* Maintained: yes, quite.  *coughs.*

On Thursday, January 21, 2016 at 1:56:15 PM UTC-5, Daniel Beck wrote:
>
> Hi everyone, 
>
> For Jenkins 2.0, we're aiming for a much better 'getting started' 
> experience, aimed for users new to Jenkins. 
>
> We already talked about this last year[1], but the discussion didn't 
> really have a great result, so I'd like to start over by defining a list of 
> guidelines on what kind of plugins qualify for inclusion first. My initial 
> proposal is in the wiki[2]. Please take a look and provide feedback in this 
> thread. 
>
> Specific selection of plugins would be the next step. 
>
> Daniel 
>
> 1: 
> https://groups.google.com/forum/#!msg/jenkinsci-dev/w-_18aYn4QQ/t_WT442bBwAJ 
> 2: 
> https://wiki.jenkins-ci.org/display/JENKINS/Plugin+Selection+for+the+Setup+Dialog
>  
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/f5777d5a-cf6e-42af-971b-651883391c0c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Automatic build and deploy of plugin (CI style)

2015-12-30 Thread Samuel Van Oort
Hi,

As a side note, if you're building out demo machines with a precanned 
config, there are some really helpful scripts in the Jenkins docker image 
repo.
Even if you're not running docker, the scripts and techniques should work 
well on demo VMs.

To preinstall a list of plugins from the update center, listed in a text 
file:
https://github.com/jenkinsci/docker/blob/master/plugins.sh

To land a precanned jenkins home (which can be preloaded with your plugin 
HPI) and use it to start up:
https://github.com/jenkinsci/docker/blob/master/jenkins.sh
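
Typical usage is just pointing the script at a text file with one plugin per 
line; roughly like this (paths and format quoted from memory, so double-check 
against the image's README):

```
# Sketch only -- plugins.txt holds one "plugin-id:version" entry per line,
# e.g. "git:2.4.0", and the script fetches each from the update center.
plugins.sh plugins.txt
```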

A couple examples of how these scripts were used are here:
https://github.com/jenkinsci/workflow-plugin/tree/master/demo
https://github.com/svanoort/gerrit-workflow-demo

-- Sam

On Wednesday, December 23, 2015 at 2:39:41 PM UTC-5, Gavin Mogan wrote:
>
> Hey everyone
>
> I can't decide if this belongs on the user list or the dev list but trying 
> here first.
>
> Has anyone come up with a way (maven or jenkins) to automatically deploy a 
> plugin to a jenkins instance? I want to setup a nice demo server that 
> always has the latest version of our plugin, but don't want to go through 
> the steps of having to manually goto the plugins page, then upload, then 
> wait, then restart, etc.
>
> I can probably script something with the cli tool, but don't want to 
> duplicate work.
>
> Gavin
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/2999e030-21b9-4c7e-9f39-cf4843e29cd3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Request for merge/release of fix for JENKINS-26050 (workflow compatibility for parameterized trigger)

2015-08-10 Thread Samuel Van Oort
Yes, I'd talked to KK on Friday in standup, and to Oliver Gondza via IRC 
this morning. 

Both had indicated it was fine to go ahead. 
 

On Monday, August 10, 2015 at 10:15:59 AM UTC-4, Oleg Nenashev wrote:

 The PR has been merged without waiting a week for feedback. Sam, did 
 you contact the plugin's owners?

 On Friday, August 7, 2015 at 4:23:53 UTC+2, Samuel Van Oort 
 wrote:

 Hi, 
 We have a PR to fix JENKINS-26050, adding support for firing downstream 
 workflow projects to the parameterized trigger plugin. 
 We know that at least some users are specifically waiting for this to be 
 available. 

 The PR is 
 https://github.com/jenkinsci/parameterized-trigger-plugin/pull/87

 I'm wondering if someone might be able to assist with merging/releasing this? 
  (it looks like the active maintainers may be different from those listed on 
 the wiki, based on commits, and I haven't had word from any of them). 

 Thank you!
 Sam Van Oort



-- 
You received this message because you are subscribed to the Google Groups 
Jenkins Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/1a5ec378-382e-45ff-890f-ce0eb5b50ca5%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Request for merge/release of fix for JENKINS-26050 (workflow compatibility for parameterized trigger)

2015-08-06 Thread Samuel Van Oort
Hi, 
We have a PR to fix JENKINS-26050, adding support for firing downstream 
workflow projects to the parameterized trigger plugin. 
We know that at least some users are specifically waiting for this to be 
available. 

The PR is https://github.com/jenkinsci/parameterized-trigger-plugin/pull/87

I'm wondering if someone might be able to assist with merging/releasing this?  
(it looks like the active maintainers may be different from those listed on 
the wiki, based on commits, and I haven't had word from any of them). 

Thank you!
Sam Van Oort

-- 
You received this message because you are subscribed to the Google Groups 
Jenkins Developers group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-dev/30792447-3e6a-42d8-971c-fc6bb1a97c07%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.