Set/change Branch description in multibranch pipelines

2017-07-10 Thread Michael Lasevich
I would like to set the "description" field on the automatically generated 
branches in a Multibranch Pipeline, to add information like links to the 
latest artifacts, links to the latest generated documentation, etc. While I 
would settle for one standard default description for all the branches, 
ideally it would be nice to be able to set it from the Jenkinsfile, similar 
to how other branch job properties can be set via the "properties" step. As 
far as I can tell, you cannot currently do this via "properties", but I 
could be wrong. Is there a way to do this using just the pipeline code in 
the Jenkinsfile?

I have tried getting the item from Jenkins directly to manipulate it, but 
sandboxing (rightfully) blocked me from doing that. I suppose I could write 
a shared library function to do it, which would bypass the sandboxing, but 
I was wondering if there is a more proper way of doing this? I can't 
imagine I am the only one who wants to do this.
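
For what it's worth, a minimal sketch of that shared-library workaround; 
the step name setBranchDescription is made up, and the code must live in a 
trusted (global) library, since currentBuild.rawBuild is rejected by the 
script sandbox:

```groovy
// vars/setBranchDescription.groovy in a trusted shared library (sketch)
def call(String text) {
    // rawBuild is the underlying run; its parent is the branch job,
    // whose description shows up in the Multibranch branch listing
    currentBuild.rawBuild.parent.description = text
}
```

A Jenkinsfile could then call something like 
setBranchDescription("Latest artifacts: ${env.BUILD_URL}artifact/").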

Thanks, 

-M

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/1e2137fe-8142-4db7-8a7e-1c2dfe0ba090%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Versioning the entire system configuration -- not artifacts

2016-11-07 Thread Michael Lasevich
I did not mean to start a religious war; like I said, this is only an 
opinion. I only offered that Pipelines are a bit newer, and while CPS is a 
bear, it does appear, at least to me, to simplify things drastically, 
allowing much simpler setups and avoiding situations where it "gets 
confused and ends up making my system unusable". Perhaps this is the answer 
to the original poster's current frustrations. Or maybe not. 

-M


On Monday, November 7, 2016 at 2:23:12 PM UTC-8, Victor Martinez wrote:
>
> Both of them are good but they have different approaches... Although, IMO, 
> Pipeline is still an incubating feature atm.
> I wouldn't say Pipelines are better or worse, or even obsolete or modern. 
> If you go for configuration as code, by definition, code should be 
> testable, and I could go further by saying testable locally and 
> automatically, and unfortunately I haven't seen that feature yet with 
> the Pipelines.
>
> Cheers
>
> On Monday, 7 November 2016 20:53:35 UTC, Michael Lasevich wrote:
>>
>> Ahh, "Job DSL",  I remember that. It was a good thing when it was the 
>> only game in town, but (in my opinion) Pipelines pretty much made it 
>> obsolete. Of course it is a matter of opinion, but if you are finding Job 
>> DSL too complicated, Pipelines may be just right for you: they remove a lot 
>> of the complexity and make your entire build process far simpler. You no 
>> longer need a rabbit-warren of jobs, and with MultiBranch Pipelines + 
>> Global Libs + something like Slack notifications, your devs may not even 
>> need to log in to the Jenkins server: just commit code and see a 
>> notification that the job was created (if needed) and the build completed 
>> :-) Join the modern age :-)
>>
>> -M
>>
>> On Monday, November 7, 2016 at 11:55:20 AM UTC-8, Victor Martinez wrote:
>>>
>>> Give the job-dsl-plugin a try 
>>> - https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
>>>
>>> It supports 1000+ Jenkins plugins, local testing, Gradle integration, the 
>>> same Jenkins job paradigm, the DRY concept, and a bunch of other benefits, 
>>> besides converting jobs into code and therefore making them SCM-oriented.
>>>
>>> Cheers
>>>
>>>



Any way to run code in Pipeline Global Library from Scriptler?

2016-11-07 Thread Michael Lasevich
The functionality in Scriptler and Global Libraries can and does often 
overlap. I would like to avoid repeating the same code in multiple places, 
and I was wondering if there is any way to call one from the other?

The specific use case I am looking at is using active parameters in a job 
that execute some code to generate a selection/value, while allowing 
pipelines to run the same code directly.

Ideally I would want to call Pipeline Global Library code from Scriptler, 
but vice versa would do in a pinch; I would really hate to write the same 
code in both locations.

Thanks, 

-M



Re: Versioning the entire system configuration -- not artifacts

2016-11-07 Thread Michael Lasevich
Ahh, "Job DSL",  I remember that. It was a good thing when it was the only 
game in town, but (in my opinion) Pipelines pretty much made it obsolete. 
Of course it is a matter of opinion, but if you are finding Job DSL too 
complicated, Pipelines may be just right for you: they remove a lot of the 
complexity and make your entire build process far simpler. You no longer 
need a rabbit-warren of jobs, and with MultiBranch Pipelines + Global Libs 
+ something like Slack notifications, your devs may not even need to log in 
to the Jenkins server: just commit code and see a notification that the job 
was created (if needed) and the build completed :-) Join the modern age :-)

-M

On Monday, November 7, 2016 at 11:55:20 AM UTC-8, Victor Martinez wrote:
>
> Give the job-dsl-plugin a try 
> - https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
>
> It supports 1000+ Jenkins plugins, local testing, Gradle integration, the 
> same Jenkins job paradigm, the DRY concept, and a bunch of other benefits, 
> besides converting jobs into code and therefore making them SCM-oriented.
>
> Cheers
>
>



Re: Versioning the entire system configuration -- not artifacts

2016-11-07 Thread Michael Lasevich
Not familiar with "Jobs SCM", and unclear as to what you are trying to do, 
or how Redis or Docker fits in here. If you have some idea, you are welcome 
to write a plugin and make it work however you want; that is why the plugin 
system exists, and it answers your question as to why this is done with a 
plugin. But honestly, it sounds like you are making it far more complicated 
than it needs to be. Generally the setup would be a single, static Jenkins 
master server that manages job scheduling and coordination, and a number of 
"slave" nodes that execute the jobs. Ideally slave nodes would be pristine 
and disposable, with no code/tools installed on them; the master manages 
any code/tools required, meaning they never need to be versioned on the 
slave. 

The modern way of managing your jobs is via Pipelines. You just store your 
jobs along with your code (I assume your code is in some sort of an SCM to 
begin with) and use global library repo(s) for shared libraries. Jenkins 
does not really have a lot of configuration on the master side after this; 
the only thing you really need is a decent backup strategy. If you are 
really interested in tracking individual job changes, there is a "Job 
Configuration History" plugin that will track each change and who made it, 
but with most of the job definition living with the source code, that is 
not all that critical/necessary.

HTH,

-M

On Monday, November 7, 2016 at 4:45:57 AM UTC-8, Rinaldo DiGiorgio wrote:
>
> Hi,
>
>   I tried to version my jobs with the Jobs SCM plugin for example and it 
> often gets confused and ends up making my system unusable. Perhaps an 
> entire rewrite is needed and the backend store needs to move to something 
> like redis. I  don't think version control should be done with optional 
> plugins. It should be part of the core system and all configuration data 
> should be in a network store.  
>
>   I can see a solution using docker where one makes a base image.  What 
> happens when you change the configuration however. Do you make a new docker 
> base image?  Some organizations want all the source code to generate an 
> image in some type of SCM and this is the issue.  If you change job 
> configurations or config params you just incurred the hit of a new image 
> generation cycle.  Perhaps I am not looking at it in the right way, or in 
> the future, when everything is pipeline, the configuration will be pipeline 
> with supporting JSON and property files.
>
> Rinald
>
>
>



Re: Writing integration tests for Global Libraries

2016-11-07 Thread Michael Lasevich
There is no reason why you cannot automate local runs of the Jenkins tests. 
See how plugin development works: there is tooling to start a local Jenkins 
instance with the plugin installed, and it should be relatively simple to 
automate that to run test jobs and check their output. 

That said, you are WAY overthinking this. A much simpler solution is to 
store your library in a proper SCM and use @Library's version support to 
let a job specify the branch of the repo. Create a pre-production branch 
and set up a validation job that triggers on commits to it, to test the 
pre-production version of the library. The validation job can execute 
various tests against the pre-prod branch (which, if you desire, can 
include executing other jobs) and, if everything passes, either auto-merge 
the tested commit to production or signal you that it is safe to merge. No 
additional infrastructure is required; use the same Jenkins server you are 
already using with this library.

The tricky part would be figuring out how to write proper validation tests.
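
A rough sketch of that branch-pinning idea (the library name, branch name, 
and step name below are all made up):

```groovy
// Validation job Jenkinsfile: load the library from the pre-production
// branch instead of the default version configured in Jenkins
@Library('my-shared-lib@preprod') _

node {
    // exercise the library's steps and fail the build if anything
    // behaves unexpectedly (mySharedStep is a hypothetical step)
    def result = mySharedStep('some input')
    if (result != 'expected output') {
        error "mySharedStep returned: ${result}"
    }
}
```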

-M




On Sunday, November 6, 2016 at 5:30:16 PM UTC-8, Michael Kobit wrote:
>
> I'm working on writing some global libraries using 
> https://github.com/jenkinsci/workflow-cps-global-lib-plugin. My current 
> process for iterating locally is:
>
>- Start up a local Jenkins instance
>- Point the *globalLibs* directory to my global libraries repository
>- Create a few jobs that use my global libraries
>- Write some code
>- Run jobs
>- Check for results in logs
>
> This is inefficient and error prone.
>
> The workflow doesn't seem much better when using the recent @Library 
>  
> support.
>
> My question is, what do I need to do to get some of the Jenkins 
> infrastructure in place to write automated, integration tests for Jenkins 
> pipelines jobs and Global libraries code?
>
> I am using Gradle, and was hoping to see if anybody else has been 
> successful in setting up testing locally so I can validate that I my 
> libraries work in an actual Jenkins pipeline execution with all of the 
> Groovy CPS transformations and other nuances of writing Groovy libraries 
> for Jenkins pipelines.
>
> My current *build.gradle* looks something like this, but I still haven't 
> gotten it working:
>
> plugins {
>   id 'build-dashboard'
>   id 'groovy'
> }
> description = 'Libraries written for use with Jenkins Global Pipeline 
> Libraries'
>
> repositories {
>   jcenter()
>   maven {
> url 'http://repo.jenkins-ci.org/public'
>   }
>   mavenLocal()
> }
>
> sourceSets {
>   main {
> groovy {
>   // Jenkins Global Workflow Libraries requires sources to be at 'src'
>   srcDirs = ['src']
> }
> java {
>   srcDirs = []
> }
> resources {
>   srcDirs = []
> }
>   }
>   test {
> groovy {
>   // configure the test source set so that it is not part of the Global 
> Pipeline Libraries
>   srcDirs = ['unitTest']
> }
> java {
>   srcDirs = []
> }
> resources {
>   srcDirs = []
> }
>   }
>   jenkinsIntegrationTest {
> groovy {
>   srcDirs = ['jenkinsIntegrationTest']
> }
> java {
>   srcDirs = []
> }
> resources {
>   srcDirs = []
> }
> compileClasspath += sourceSets.main.runtimeClasspath
> runtimeClasspath += sourceSets.main.runtimeClasspath
>   }
> }
>
> configurations {
>   jenkinsIntegrationTestCompile.extendsFrom testCompile
>   jenkinsIntegrationTestRuntime.extendsFrom testRuntime
> }
>
> tasks.create('jenkinsIntegrationTest', Test) {
>   group = LifecycleBasePlugin.VERIFICATION_GROUP
>   description = 'Runs tests against of actual Jenkins Pipelines'
>   testClassesDir = sourceSets.jenkinsIntegrationTest.output.classesDir
>   classpath = sourceSets.jenkinsIntegrationTest.runtimeClasspath
> }
>
> dependencies {
>   compile 'org.codehaus.groovy:groovy-all:2.4.7'
>
>   testCompile 'org.spockframework:spock-core:1.0-groovy-2.4'
>   testCompile 'junit:junit:4.12'
>
>   jenkinsIntegrationTestCompile 'org.jenkins-ci.main:jenkins-core:2.17'
>   jenkinsIntegrationTestCompile 
> 'org.jenkins-ci.main:jenkins-test-harness:2.17'
>   jenkinsIntegrationTestCompile 
> 'org.jenkins-ci.main:jenkins-war:2.17:war-for-test@jar'
>   jenkinsIntegrationTestCompile 'org.jenkins-ci.plugins:git:3.0.0:tests'
>   jenkinsIntegrationTestCompile 
> 'org.jenkins-ci.plugins.workflow:workflow-cps-global-lib:2.4'
>   jenkinsIntegrationTestCompile 
> 'org.jenkins-ci.plugins.workflow:workflow-support:1.15:tests'
>   jenkinsIntegrationTestCompile 
> 'org.jenkins-ci.plugins.workflow:workflow-job:2.6'
> }
>
> // Make sure only the Groovy dependency is available.
> // Other dependencies must be used with @Grab in the defined classes due to 
> how Jenkins Global Libraries work
> project.afterEvaluate {
>   final compile = it.configurations.compile.dependencies
>   if 

Re: Pipeline node scheduling options

2016-10-28 Thread Michael Lasevich


On Friday, October 28, 2016 at 9:36:49 PM UTC-7, John Calsbeek wrote:
>
>
> Shared storage is a potential option, yes, but the tasks in question are 
> currently not very fault-tolerant when it comes to network hitches.
>

Well, it would pay to make them more fault-tolerant :-) But even if you do 
not fix the process, you do not have to run it from the shared storage; 
just use it as storage. With a node-local mirror, you can rsync from shared 
storage, run the task, then rsync back. Assuming your data does not change 
much (which I understand is not always the case), you will soon have a 
relatively recent rsync copy on every node, reducing the amount of data 
moved. It may or may not work in your case, but it is something to consider.
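
That rsync dance might look something like this in a pipeline (the label, 
paths, and task command are all hypothetical):

```groovy
node('worker') {
    // refresh the node-local mirror from shared storage, run the task
    // against the local copy, then push the results back
    sh 'rsync -a --delete /mnt/shared/data/ /var/local/mirror/'
    sh './run-task.sh /var/local/mirror'
    sh 'rsync -a /var/local/mirror/ /mnt/shared/data/'
}
```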
 

>  
>  
>
>> But more to the point, if your main issue is that you are worried that a 
>> node may be unavailable, you may consider some automatic node allocation. I 
>> am not sure if there are other examples, but for example the AWS node 
>> allocation can automatically allocate a new node if no threads are 
>> available for a label. That may be a decent backup strategy. If you are not 
>> using AWS - you can probably look if there is another node provisioning 
>> plugin that fits or if not, look at how they do that and write your own 
>> plugin to do it
>>
>
> Assuming that we have a fixed amount of computing resources, does this 
> have any advantage over writing a LoadBalancer plugin?
>

If you are allocating your nodes instead of pre-creating them, you do not 
have to have a big shared pool; instead, specific nodes are allocated with 
the same label only as needed, and as old nodes that died are 
decommissioned, they can rejoin the pool of available resources. Of course, 
if you drop the affinity requirement, just using them all as one pool is 
probably easier.

 
>
>> But maybe I am overthinking it. In the end, if your primary concern is 
>> that node may be down - remember that pipeline is groovy code - groovy code 
>> that has access to the Jenkins API/internals. You can write some code that 
>> will check the state of the slaves and select a label to use before you 
>> even get to the node() statement. Sure, that will not fix the issue of a 
>> node going down in a middle of a job, but may catch the job before it 
>> assigns a task to a dead node.
>>
>
> Ah, that's an interesting idea. Something that I forgot to mention in the 
> original post is that if there was a node() function that allocates with a 
> timeout, that would also be a building block that we could use to fix this 
> problem. (If attempting to allocate a specific node fails with a timeout, 
> then schedule on a fallback. timeout() doesn't work because that would 
> apply the timeout to the task as well, not merely to the attempt to 
> allocate the node.) We could indeed query the status of nodes directly. I 
> have a niggling doubt that it would be possible to do this without a race 
> condition (what if the node goes down between querying its status and 
> scheduling on it?), but it's definitely something worth investigating.
>

I am wondering if you can use some weird combination of parallel + sleep + 
failFast + try/catch to emulate a timeout for a specific task.
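
Something along these lines, perhaps; completely untested, and the labels 
are made up. The timer branch fails the whole parallel block (via failFast) 
if the node was not grabbed in time, and the catch falls back to the shared 
pool:

```groovy
def allocated = false
try {
    parallel(
        failFast: true,
        work: {
            node('my-node-pool-3') {
                allocated = true
                // ... the real task ...
            }
        },
        timer: {
            // if work finishes first, this still sleeps out the full
            // minute, which is one obvious wart of this sketch
            sleep time: 60, unit: 'SECONDS'
            if (!allocated) {
                error 'node allocation timed out'
            }
        }
    )
} catch (err) {
    if (!allocated) {
        node('my-node-pool-fallback') {
            // ... the real task, on a standby node ...
        }
    }
}
```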

>  
>
>> Alternatively, you can simply write another job, in lieu of a plugin, 
>> that will scan all your tasks and nodes and if it detects a node down and a 
>> task waiting for it, assign the label to another node from the "standby" 
>> pool
>>
>
> This is an idea that we had considered, yeah, although I was considering 
> it as a first step in the pipeline before scheduling, which made me nervous 
> about race conditions. But if, as you suggest, it was a frequently run job 
> which is always attempting to set up node allocations… that could 
> definitely work. Good suggestion, thanks!
>  
>
Throw enough things against a wall, something will stick ;-)  Glad to be of 
help.

Good luck.

 -M



Re: Pipeline node scheduling options

2016-10-28 Thread Michael Lasevich
Is there a way to reduce the need for tasks to run on the same slave? I 
suspect the issue is having data from the last run; if that is the case, is 
there any shared storage solution that might reduce the time difference? If 
you can reduce the need to bind tasks to specific nodes, you bypass the 
entire headache.

As for your other approaches, 

A minor point: since a node can have multiple labels, your nodes can have 
individual labels AND a shared label, meaning your fallback can be shared 
among all the existing nodes.

But more to the point, if your main issue is that you are worried that a 
node may be unavailable, you might consider automatic node allocation. I am 
not sure if there are other examples, but the AWS node allocation, for one, 
can automatically allocate a new node if no executors are available for a 
label. That may be a decent backup strategy. If you are not using AWS, you 
can look for another node provisioning plugin that fits, or, failing that, 
look at how they do it and write your own plugin.

But maybe I am overthinking it. In the end, if your primary concern is that 
a node may be down, remember that a pipeline is Groovy code, and that code 
has access to the Jenkins API/internals. You can write some code that will 
check the state of the slaves and select a label to use before you even get 
to the node() statement. Sure, that will not fix the issue of a node going 
down in the middle of a job, but it may catch the job before it assigns a 
task to a dead node.
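
A rough sketch of that pre-check (this touches Jenkins internals, so it 
needs script approval or a trusted shared library; the node and label names 
are made up):

```groovy
import jenkins.model.Jenkins

// return the preferred node's label if it is online, else the fallback;
// note there is still a race: the node can die between this check and
// the actual allocation
def pickLabel(String preferred, String fallback) {
    def n = Jenkins.instance.getNode(preferred)
    return (n?.toComputer()?.isOnline()) ? preferred : fallback
}

node(pickLabel('my-node-pool-3', 'my-node-pool-fallback')) {
    // ... task ...
}
```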

Alternatively, you can simply write another job, in lieu of a plugin, that 
will scan all your tasks and nodes and, if it detects a node down and a 
task waiting for it, assign the label to another node from the "standby" 
pool.

I realize all of this sounds hacky. I would really consider that the first 
and foremost task would be to figure out if you can bypass the problem in 
the first place.

-M



On Friday, October 28, 2016 at 8:06:15 PM UTC-7, John Calsbeek wrote:
>
> We have a problem trying to get more control over how the node() decides 
> what node to allocate an executor on. Specifically, we have a situation 
> where we have a pool of nodes with a specific label, all of which are 
> capable of executing a given task, but with a strong preference to run the 
> task on the same node that ran this task before. (Note that these tasks are 
> simply different pieces of code within a single pipeline, running in 
> parallel.) This is what Jenkins does normally, at job granularity, but as 
> JENKINS-36547  says, 
> all tasks scheduled from any given pipeline will be given the same hash, 
> which means that the load balancer has no idea which tasks should be 
> assigned to which node. In our situation, only a single pipeline ever 
> assigns jobs to this pool of nodes.
>
> So far we have worked around the issue by assigning a different label to 
> each and every node in the pool in question, but this has a new issue: if 
> any node in that pool goes down for any reason, the task will not be 
> reassigned to any other node, and the whole pipeline will hang or time out.
>
> We have worked around *that* by assigning each task to "my-node-pool-# || 
> my-node-pool-fallback", where my-node-pool-fallback is a label which 
> contains a few standby nodes, so that if one of the primary nodes goes down 
> the pipeline as a whole can still complete. It will be slower (these tasks 
> can take two to ten times longer when not running on the same node they ran 
> last time), but it will at least complete.
>
> Unfortunately, the label expression doesn't actually mean "first try to 
> schedule on the first node in the OR, then use the second one if the first 
> one is not available." Instead, there will usually be some tasks that 
> schedule on a fallback node even if the node they are "assigned" to is 
> still available. As a result, almost every run of this pipeline ends up 
> taking the worst-case time: it is likely that *some* task will wander 
> away from its assigned node to run on a fallback, which leads the fallback 
> nodes to be over-scheduled and leaves other nodes sitting idle.
>
> The question is: what are our options? One hack we've considered is 
> attempting to game the scheduler by using sleep()s: initially schedule all 
> the fallback nodes with a task that does nothing but sleep(), then schedule 
> all our real tasks (which will now go to their assigned machines whenever 
> possible, because the fallback nodes are busy sleeping), and finally let 
> the sleeps complete so that any tasks which couldn't execute on their 
> assigned machines now execute on the fallbacks. A better solution would 
> probably be to create a LoadBalancer plugin that codifies this somehow: 
> preferentially scheduling tasks only on their assigned label, scheduling on 
> fallbacks only after 30 seconds or a minute.
>
> Is anyone out there 

Re: Best practices for develop/release branch model with pipeline

2016-10-28 Thread Michael Lasevich
There are lots of answers to this, and I am not going to pretend I know the 
"right" answer for you, but here are a few things you may want to consider 
when figuring things out:

* In this "pipeline" of jobs you describe, only the initial build has 
access to the source repository; the rest use artifacts from other jobs to 
perform the next phase of the process.

* A deployment process typically consists of two parts: one that is 
specific to your environment/organization but not to the release you are 
deploying, and one that is specific to the particular version you are 
deploying. While the line is often blurry, you are much better off not 
mixing them, and keeping the non-release-specific bits in a separate repo. 
That is your deployment job, which is independent of the particular version 
you are deploying. Changes in the org/env-specific process are pretty much 
independent of changes in your project or in how to deploy your artifacts, 
so it makes no sense to version them together and store them in one repo.

* The Global Libraries are a pretty handy place to store common workflows, 
like your global environment/org deployment process. Alternatively, you can 
just run an independent repo for that part, as it is an independent 
project/codebase.

* The deployment artifact is a good place to include release-specific 
deployment instructions: there is nothing stopping you from creating an 
additional deployment artifact that contains not only your code build but 
also instructions on the specific steps to deploy this particular version. 
You can even package those instructions as a pipeline script which gets 
executed from your other job, and you can even have multiple jobs for 
different phases of the deployment process.
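
For example, the deployment job could fetch the release artifact and then 
run the version-specific script it carries. This is only a sketch: 
copyArtifacts/specific() come from the Copy Artifact plugin, and the job 
name, parameters, and the file name deploy.groovy are all made up:

```groovy
node {
    // pull artifacts from the selected run of the build job
    copyArtifacts projectName: 'app-build',
                  selector: specific(params.BUILD_NUMBER)
    // load and run the release-specific deployment instructions
    // that were archived alongside the build
    def steps = load 'deploy.groovy'
    steps.deployTo(params.TARGET_ENV)
}
```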

That should hopefully put you on the right path for your needs.

-M

On Friday, October 28, 2016 at 4:02:52 AM UTC-7, Graham Hay wrote:
>
> This is interesting, it's something I've been struggling with while 
> converting our current system over to the new world (I really like the 
> stage view!).
>
> We currently have a "pipeline" of freestyle jobs, that pass artifacts down 
> the line. Build (and test) -> Deploy to stage -> Deploy to prod. The last 
> build uses pinned artifacts ("keep forever"), which allows us to rollback 
> by unpinning the latest build. This is something I'd like to keep.
>
> The one thing I don't understand is where I would define the deploy jobs? 
> Isn't the whole point that everything is now in the Jenkinsfile, and under 
> version control? Also, I don't seem to be able to point my existing 
> freestyle job at the new pipeline build job, to retrieve artifacts. So what 
> am I missing?
>
> Thanks,
>
> Graham
>
> On Wednesday, October 26, 2016 at 4:26:17 PM UTC+1, Michael Lasevich wrote:
>>
>> I am not sure the stages you are talking about are same as what Jenkins 
>> Pipelines calls stages.
>>
>> Jenkins, at its core, is a job server. In Pipelines, a stage is a segment 
>> of a job. Stages of a build job would be something like "Build Binaries" or 
>> "Upload Build Artifacts" - something that is part of one logical job. What 
>> you are talking is a deployment process which is really a separate job from 
>> a build job, and not really a "stage" of build. 
>>
>> So, my approach would be (and is, in some cases):
>>
>> * Set up a Pipeline build for the develop branch 
>> * Make sure the build job archives either deployment artifact(s) or 
>> pointer to them - something that can be used for deployment.
>> * Set up a separate deployment job (can also be Pipeline) that takes in 
>> parameters for a build run and target environment (stage, QA, UA, PreProd, 
>> Production, whatever), and grabs artifacts/pointers from the selected run 
>> and performs a deployment
>>
>> Now, if you want to get fancy, you make that first "build" job a 
>> MultiBranch job that builds both develop and some versions of the feature 
>> branches (I've used /feature/build/* pattern) and then modify the selection 
>> of the job run to select from multiple branches (need to write a Groovy 
>> based Parameter selector for that) - and now you can deploy builds from 
>> feature branches for testing BEFORE they are merged into develop
>>
>> HTH,
>>
>> -M
>>
>>
>>
>>
>>
>>
>>
>> On Wednesday, October 26, 2016 at 4:21:23 AM UTC-7, Sathyakumar 
>> Seshachalam wrote:
>>>
>>> New to Jenkins pipeline.
>>>
>>> My process is that developers work off of develop branch (Feature 
>>> branches and merges of-course).
>>> At any point in time, a release branch is 

Re: [Pipeline Shared Libraries] acme.foo = "5" documented example doesn't work

2016-10-27 Thread Michael Lasevich
You are not doing anything wrong; CPS is just broken in this scenario. You 
cannot have both a field and a getter/setter with matching names at the 
same time: it gets confused and goes into an infinite loop. Change 
'this.foo' to 'this._foo' and it will start working (and you can still use 
"acme.foo" to call the setter/getter versions). It is very annoying and 
cost me days of scratching my head until I realized what was going on.

For what it's worth, Groovy itself supports this, but something in 
CPS/Pipelines breaks that support.
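
Applied to the acme.groovy from the original post, the workaround looks 
like this:

```groovy
// vars/acme.groovy: back the property with a differently named field
// so CPS does not resolve this.foo through the getter/setter again
def setFoo(v) {
    this._foo = v
}
def getFoo() {
    return this._foo
}
def say(name) {
    echo "Hello world, ${name}"
}
```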

-M



On Thursday, October 27, 2016 at 5:50:37 AM UTC-7, Rob Oxspring wrote:
>
> Hi,
>
> I've been trying to use a shared library script with properties according 
> to the following but am having no luck.
>
> https://github.com/jenkinsci/workflow-cps-global-lib-plugin/blob/master/README.md#defining-global-variables
>
> To reproduce I've installed a totally clean Jenkins LTS with recommend 
> plugins and added a $JENKINS_HOME/workflow-libs/vars/acme.groovy containing 
>
> def setFoo(v) {
> this.foo = v;
> }
> def getFoo() {
> return this.foo;
> }
> def say(name) {
> echo "Hello world, ${name}"
> }
>
>
> I have a Pipeline job configured with the following script:
>
> echo "myjob"
>
> acme.foo = "5";
> echo acme.foo; // print 5
> acme.say "Joe" // print "Hello world, Joe"
>
>
> And when I run it I get the following result:
>
> [Pipeline] echo
> myjob
> [Pipeline] End of Pipeline
> groovy.lang.MissingPropertyException: No such property: acme for class: 
> groovy.lang.Binding
> at groovy.lang.Binding.getVariable(Binding.java:63)
> at 
> org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:224)
> at org.kohsuke.groovy.sandbox.impl.Checker$4.call(Checker.java:241)
> at 
> org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:238)
> at 
> com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:24)
> at 
> com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
> at WorkflowScript.run(WorkflowScript:3)
> at ___cps.transform___(Native Method)
> at 
> com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
> at 
> com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
> at 
> com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at 
> com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
> at 
> com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
> at com.cloudbees.groovy.cps.Next.step(Next.java:58)
> at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
> at 
> org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
> at 
> org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
> at 
> org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
> at 
> org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
> at 
> org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
> at 
> org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:163)
> at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:324)
> at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:78)
> at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:236)
> at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:224)
> at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:63)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
> at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Finished: FAILURE
>
>
> I did wonder whether to "def foo" in order to create the property but that 
> just results in an infinite recursion as this.foo = v within setFoo() gets 
> interpreted as a recursive call to setFoo() itself.
>
> Any clues 

Re: Best practices for develop/release branch model with pipeline

2016-10-27 Thread Michael Lasevich
Quick and dirty version of a job lister that takes two parameters, "job" 
(multibranch job name) and "old_job" (non-MB job name), and produces output 
that can be used by an Active Choices Parameter to present a list of builds 
to select from. The value is a |-delimited set of fields, so you may want to 
parse it something like this:

deploy_info = BuildToDeploy.split('\\|');
deploy = [
version: deploy_info[0],
id: deploy_info[1],
branch: java.net.URLDecoder.decode(deploy_info[2], "UTF-8"),
time: new Date(Long.valueOf(deploy_info[3])),
url: deploy_info[4],
project: deploy_info[5]
]


Here is the script:

import hudson.model.*
import hudson.node_monitors.*
import hudson.slaves.*
import java.util.concurrent.*
import groovy.time.*

now=new Date()

jenkins = Hudson.instance
search_mb=[job]
search_non_mb=[old_job]

full=[:];
builds=[:];

for (item in jenkins.items){
  if (item.name in search_mb){
for (job in item.getAllJobs()){
  for (build in job.getBuilds()){
  full[build.displayName] = [ 
name: build.displayName, 
id: build.id,
branch: job.name, 
job_name: item.name, 
time: build.getTime(), 
ts: build.getTime().getTime(),
url: build.getUrl(),
building: build.building,
failed: (build.result == hudson.model.Result.FAILURE),
project: "${item.name}/${job.name}"

  ];
}
  }

  }else if (item.name in search_non_mb){
job = item

for (build in job.getBuilds()){
  def parameters = build?.actions.find{ it instanceof ParametersAction 
}?.parameters
  branch=job.name
  parameters.each{
if (it.name == "branch"){
  branch = it.value
}
  }
  
  full[build.displayName] = [ 
name: build.displayName, 
id: build.id,
branch: branch, 
job_name: item.name, 
time: build.getTime(), 
ts: build.getTime().getTime(),
url: build.getUrl(),
building: build.building,
failed: (build.result == hudson.model.Result.FAILURE),
project: "${item.name}"

  ];
}
  }
}

full = full.sort { -it.value.ts }

for (build in full){
  item = build.value
  branchDecoded=java.net.URLDecoder.decode(item.branch, "UTF-8");
  TimeDuration duration = TimeCategory.minus(now, item.time)
  if (duration.days >= 7) {
duration=duration.minus(new TimeDuration(duration.hours, 
duration.minutes, duration.seconds, duration.millis))
  } else if (duration.days >= 1) {
duration=duration.minus(new 
TimeDuration(0,duration.minutes,duration.seconds, duration.millis))
  }else if (duration.hours >= 1) {
duration=duration.minus(new TimeDuration(0,0,duration.seconds, 
duration.millis))
  } else {
duration=duration.minus(new TimeDuration(0,0,0, duration.millis))
  }
  timestamp = "${duration} ago"
  
value="${item.name}|${item.id}|${item.branch}|${item.ts}|${item.url}|${item.project}"
  display="${item.name} (${branchDecoded}, ${timestamp})"
  if (item.building){ 
display += " **BUILDING**" 
  } else if (item.failed){
display += " **FAILED**" 
  }
  builds[value] = display
}

return builds




On Thursday, October 27, 2016 at 2:02:42 AM UTC-7, Sathyakumar Seshachalam 
wrote:
>
> Thanks,
>
> > And then modify the selection of the job run to select from multiple 
> branches (need to write a Groovy based Parameter selector for that) - and 
> now you can deploy builds from feature branches for testing BEFORE they are 
> merged into develop
>
> If there are any examples / code snippets on how to do this, will greatly 
> help me. 
>
> On Wed, Oct 26, 2016 at 8:56 PM, Michael Lasevich <mlas...@gmail.com 
> > wrote:
>
>> I am not sure the stages you are talking about are same as what Jenkins 
>> Pipelines calls stages.
>>
>> Jenkins, at its core, is a job server. In Pipelines, a stage is a segment 
>> of a job. Stages of a build job would be something like "Build Binaries" or 
>> "Upload Build Artifacts" - something that is part of one logical job. What 
>> you are talking is a deployment process which is really a separate job from 
>> a build job, and not really a "stage" of build. 
>>
>> So, my approach would be (and is, in some cases):
>>
>> * Set up a Pipeline build for the develop branch 
>> * Make sure the build job archives either deployment artifact(s) or 
>> pointer to them - something that can be used for deployment.
>> * Set up a separate deployment job (can also be Pipeline) that takes in 
>> parameters for a build run and target environment (stage, QA, UA, PreProd, 
>> Production, whatever), and grabs artifacts/pointers fr

Quiet Period in MB Pipelines?

2016-10-26 Thread Michael Lasevich
Is there a way to enable quiet periods via MB Pipeline and properties 
command?

The option is not there in the pipeline syntax codegen and I am wondering 
if there is a reason for that... 

Thanks,

-M

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/cde80896-fe66-483b-a912-2f8530ca0d82%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Constructor or equivalent for "Global Variables" in libraries

2016-10-26 Thread Michael Lasevich
Yeah, I would expect it to work as well, but it simply does not.

It is probably something to do with either the way this is loaded or with 
CPS - we have already seen that CPS has a hard time handling some basic 
Groovy concepts like getters/setters. I was just hoping I was missing 
something obvious :-(

-M

On Wednesday, October 26, 2016 at 9:00:40 AM UTC-7, Martina wrote:
>
> In a big standalone groovy I have
>
> class myClass {
> List myList = []
>
> public myCall (item) {
>   myList.add(item)
> }
> }
>
> I am pretty sure I have used it without the class stuff in smaller 
> snippets.
>
> Martina
>
>
>
> On Wednesday, October 26, 2016 at 8:54:59 AM UTC-6, Michael Lasevich wrote:
>>
>> No dice,
>>
>> groovy.lang.MissingPropertyException: No such property: myList for class: 
>> myListTest
>>
>>
>> -M
>>
>> On Wednesday, October 26, 2016 at 7:42:19 AM UTC-7, Martina wrote:
>>>
>>> Try the following:
>>>
>>> myList = []
>>>
>>>
>>> def call(item){
>>>   myList << item
>>> }
>>>
>>> Martina
>>>
>>> On Wednesday, October 26, 2016 at 12:24:48 AM UTC-6, Michael Lasevich 
>>> wrote:
>>>>
>>>> So, what is the proper way to initialize the fields in the "Global 
>>>> Variables" found in the /vars dir in library code?
>>>>
>>>> I know it is supposed to be a singleton instantiated on first call, and 
>>>> I know I can SET new fields by just setting them, but what if I want them 
>>>> to have default value when object is created? Something like:
>>>>
>>>> // vars/myListAdder.groovy
>>>>
>>>> def myList = []
>>>>
>>>>
>>>> def call(item){
>>>>   this.myList << item
>>>> }
>>>>
>>>> I would expect this to work, but it doesn't as it cannot find myList 
>>>> defined
>>>>
>>>> I worked around it for now by using a try/catch to initialize myList on 
>>>> first call, but that seems wrong. There has got to be a proper way to do 
>>>> this, but I am not sure what it is - nothing I tried seems to work
>>>>
>>>> Thanks,
>>>>
>>>> -M
>>>>
>>>>



Re: Best practices for develop/release branch model with pipeline

2016-10-26 Thread Michael Lasevich
I am not sure the stages you are talking about are same as what Jenkins 
Pipelines calls stages.

Jenkins, at its core, is a job server. In Pipelines, a stage is a segment 
of a job. Stages of a build job would be something like "Build Binaries" or 
"Upload Build Artifacts" - something that is part of one logical job. What 
you are talking about is a deployment process, which is really a separate 
job from a build job, and not really a "stage" of the build. 

So, my approach would be (and is, in some cases):

* Set up a Pipeline build for the develop branch 
* Make sure the build job archives either deployment artifact(s) or pointer 
to them - something that can be used for deployment.
* Set up a separate deployment job (can also be Pipeline) that takes in 
parameters for a build run and target environment (stage, QA, UA, PreProd, 
Production, whatever), and grabs artifacts/pointers from the selected run 
and performs a deployment
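A minimal sketch of such a deployment job (scripted Pipeline flavor; the job 
name 'myapp-build', the parameter names, and deploy.sh are all hypothetical, 
and this assumes the Copy Artifact plugin is installed):

```groovy
// Hypothetical parameterized deploy job: BUILD_SELECTOR and TARGET_ENV are
// assumed job parameters; 'myapp-build' is a placeholder build-job name.
node {
    // grab the artifacts archived by the selected run of the build job
    step([$class: 'CopyArtifact',
          projectName: 'myapp-build',
          selector: [$class: 'SpecificBuildSelector', buildNumber: params.BUILD_SELECTOR],
          filter: 'dist/**'])

    // hand the artifacts to whatever performs the actual deployment
    sh "./deploy.sh --env ${params.TARGET_ENV} dist/"
}
```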

Now, if you want to get fancy, you make that first "build" job a 
MultiBranch job that builds both develop and some versions of the feature 
branches (I've used a /feature/build/* pattern), and then modify the selection 
of the job run to select from multiple branches (you need to write a 
Groovy-based Parameter selector for that) - and now you can deploy builds from 
feature branches for testing BEFORE they are merged into develop.

HTH,

-M







On Wednesday, October 26, 2016 at 4:21:23 AM UTC-7, Sathyakumar Seshachalam 
wrote:
>
> New to Jenkins pipeline.
>
> My process is that developers work off of develop branch (Feature branches 
> and merges of-course).
> At any point in time, a release branch is branched off of develop  and 
> then deployed to a stage environment, Once Accepted/approved, the same 
> release branch is deployed into prod. (All immutable deployments).
>
> So am looking at at least two stages that are only conditionally and 
> manually entered  - stages being deploy to stg, deploy to prod and 
> condition being the branch prefix. (Each stage will have build steps like 
> deploy binaries, launch, run functional tests etc.,) and an automatic stage 
> that is triggered only once per day (nightly) with build steps like deploy 
> binaries, launch, run and tear down).
>
> Is this kind of a workflow feasible with pipelines. If yes, Are there any 
> recommendations/suggestions/pointers. 
>
> Thanks,
> Sathya
>



Re: Constructor or equivalent for "Global Variables" in libraries

2016-10-26 Thread Michael Lasevich
No dice,

groovy.lang.MissingPropertyException: No such property: myList for class: 
myListTest


-M

On Wednesday, October 26, 2016 at 7:42:19 AM UTC-7, Martina wrote:
>
> Try the following:
>
> myList = []
>
>
> def call(item){
>   myList << item
> }
>
> Martina
>
> On Wednesday, October 26, 2016 at 12:24:48 AM UTC-6, Michael Lasevich 
> wrote:
>>
>> So, what is the proper way to initialize the fields in the "Global 
>> Variables" found in the /vars dir in library code?
>>
>> I know it is supposed to be a singleton instantiated on first call, and I 
>> know I can SET new fields by just setting them, but what if I want them to 
>> have default value when object is created? Something like:
>>
>> // vars/myListAdder.groovy
>>
>> def myList = []
>>
>>
>> def call(item){
>>   this.myList << item
>> }
>>
>> I would expect this to work, but it doesn't as it cannot find myList 
>> defined
>>
>> I worked around it for now by using a try/catch to initialize myList on 
>> first call, but that seems wrong. There has got to be a proper way to do 
>> this, but I am not sure what it is - nothing I tried seems to work
>>
>> Thanks,
>>
>> -M
>>
>>



Re: Is there a way to make jenkins serve up script generated pages?

2016-10-26 Thread Michael Lasevich
I clearly did not read the original post carefully enough.

I would examine your XSLT, as it should normally perform pretty fast if 
written right. But if you really want to do the transformation ahead of 
time, while still being able to redo it whenever the XSLT changes, the 
easiest way to go is to write a separate job that takes the output XML from 
your main job, applies the XSLT, and produces the HTML output - then you can 
re-run that job whenever the XSLT changes (you can even automate it).
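A rough sketch of such a re-render job (Pipeline flavor; the repo URL, job 
name, file names, and the use of xsltproc on the agent are all assumptions):

```groovy
node {
    // the stylesheet lives in SCM, so re-running picks up the latest XSLT
    git url: 'https://example.com/reports.git'   // hypothetical repo

    // fetch the XML archived by the main job (Copy Artifact plugin assumed)
    step([$class: 'CopyArtifact', projectName: 'main-build', filter: 'report.xml'])

    // apply the stylesheet and publish the regenerated HTML
    sh 'xsltproc -o report.html report.xsl report.xml'
    archiveArtifacts 'report.html'
}
```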

Still, it seems like the easier approach is to fix the XSLT to be faster. I 
would also make sure compression is enabled on whatever web server you are 
serving Jenkins from.

HTH,

-M


On Tuesday, October 25, 2016 at 8:10:26 PM UTC-7, Michael Lasevich wrote:
>
> I think pretty much every browser will do XML+XSL conversion without any 
> plugin, although if you want a "shared" XSL file, you may have to add a 
> stylesheet tag to point to it inside your XML
>
> So all you do is archive your XML with stylesheet tag and a shared 
> location for your XSL file and you are done.
>
> -M
>
> On Wednesday, October 19, 2016 at 5:43:29 PM UTC-7, Jonathan Hodgson wrote:
>>
>>
>>
>> On Thursday, October 20, 2016 at 12:13:01 AM UTC+1, Teichner Peter wrote:
>>>
>>> On Linux you have a tool called xsltproc which basically does the 
>>> transformation. Assuming your Jenkins is Windows you could get something 
>>> similar I'm sure. 
>>>
>>> To display the HTML there is a plugin for Jenkins that will let you link 
>>> in static pages to the job
>>>
>> Thanks, but I think you may have missed the point, perhaps I wasn't clear 
>> enough.
>>
>> I need this to be dynamic: if I convert the xml and archive it as an html 
>> file, then it is stuck like that, regardless of improvements I make in the 
>> xslt... which needs a lot of work.
>>
>



Constructor or equivalent for "Global Variables" in libraries

2016-10-26 Thread Michael Lasevich
So, what is the proper way to initialize the fields in the "Global 
Variables" found in the /vars dir in library code?

I know it is supposed to be a singleton instantiated on first call, and I 
know I can SET new fields by just setting them, but what if I want them to 
have a default value when the object is created? Something like:

// vars/myListAdder.groovy

def myList = []


def call(item){
  this.myList << item
}

I would expect this to work, but it doesn't, as it cannot find myList defined.

I worked around it for now by using a try/catch to initialize myList on 
first call, but that seems wrong. There has got to be a proper way to do 
this, but I am not sure what it is - nothing I tried seems to work
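For reference, the try/catch workaround mentioned above looks roughly like 
this (a sketch only - the file name matches the example, the rest is 
illustrative):

```groovy
// vars/myListAdder.groovy - sketch of the try/catch lazy-init workaround
def call(item) {
    try {
        myList   // throws on the very first call, before the field exists
    } catch (groovy.lang.MissingPropertyException ignored) {
        myList = []   // create the field on the singleton on first use
    }
    myList << item
}
```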

Thanks,

-M



Re: Is there a way to make jenkins serve up script generated pages?

2016-10-25 Thread Michael Lasevich
I think pretty much every browser will do XML+XSL conversion without any 
plugin, although if you want a "shared" XSL file, you may have to add a 
stylesheet tag to point to it inside your XML

So all you do is archive your XML with a stylesheet tag pointing to a shared 
location for your XSL file, and you are done.
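The "stylesheet tag" here is the standard xml-stylesheet processing 
instruction. A minimal example, with a hypothetical path to the shared XSL 
file (Jenkins serves static files placed under $JENKINS_HOME/userContent at 
the /userContent URL):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="/userContent/styles/report.xsl"?>
<report>
  <!-- archived job output; the browser applies the XSL when viewing it -->
</report>
```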

-M

On Wednesday, October 19, 2016 at 5:43:29 PM UTC-7, Jonathan Hodgson wrote:
>
>
>
> On Thursday, October 20, 2016 at 12:13:01 AM UTC+1, Teichner Peter wrote:
>>
>> On Linux you have a tool called xsltproc which basically does the 
>> transformation. Assuming your Jenkins is Windows you could get something 
>> similar I'm sure. 
>>
>> To display the HTML there is a plugin for Jenkins that will let you link 
>> in static pages to the job
>>
> Thanks, but I think you may have missed the point, perhaps I wasn't clear 
> enough.
>
> I need this to be dynamic: if I convert the xml and archive it as an html 
> file, then it is stuck like that, regardless of improvements I make in the 
> xslt... which needs a lot of work.
>



Re: I need help someone who knows about this ...

2016-10-25 Thread Michael Lasevich
OK, so this is more than a little creepy

-M

On Tuesday, October 25, 2016 at 5:14:23 PM UTC-7, Jorge Hernandez wrote:
>
> Does anyone have a way to manually remove the MB Pipeline Jobs does not 
> involve restarting the server?
>
> Currently, the only way I know to do is delete the directory on the root 
> filesystem and restart the server which is obviously less than ideal on a 
> busy server.
>
> I suspect there's a way to do through the console wonderful, but have not 
> had time to look into it. I thought I'd ask if anyone here already solved 
> this before the dive.
>
> Beforehand
> Thank you.
>
> I would be very nice to have a user interface for this. I know you can set 
> it to automatically delete scan, but often have to remove a branch, but 
> keep compilations for a while (or even indefinitely) - however, other 
> branches are created by mistake and should be removed immediately
>
> It can also be a good thing to add a function to "retire" old jobs rather 
> than eliminate or good that I think.
>
>



Re: Pipeline development small window

2016-10-25 Thread Michael Lasevich
You can/should keep it in Git, but you can use the "Replay" function to test 
your modifications before committing them (modify in your IDE, cut-n-paste 
into the Replay window to test, then commit when ready).

I am not sure why you would want to keep it out of the git log - that is 
exactly what the git log is there for - but you can create a branch, work 
your changes there, then merge them as a single change into your normal 
branches.

Lastly, you may want to look into Global Libraries - you can keep those in 
a separate Git Repo, so depending on your needs, this may be one solution.
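For instance, with a Global Library configured, the Jenkinsfile shrinks to a 
thin wrapper and most of the editing happens in the library repo, on 
whatever branch you like (the library name, branch, and step below are 
made-up examples):

```groovy
// Jenkinsfile - assumes a Global Library named 'my-shared-lib' is configured;
// pinning to a work-in-progress branch keeps experiments out of the main history
@Library('my-shared-lib@feature/my-change') _

// 'standardBuild' is a hypothetical step defined in the library's vars/ dir
standardBuild(project: 'myapp')
```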

HTH,

-M



On Tuesday, October 25, 2016 at 2:26:37 PM UTC-7, Sam K wrote:
>
> As the pipeline code gets more and more complicated, I find the tiny 
> window to do the coding very painful.  So, I've been copying and pasting 
> changes from Notepad+/gvim, hit 'Apply', make sure there are no errors and 
> then run the pipeline.  
>
> Is there a better way other than checking this into source control and 
> doing the changes from eclipse or some IDE?  I don't want to clutter all 
> the git logs with these pipeline changes.
>
> Is there going to be a change that will allow making the pipeline window 
> bigger at least?  
>



Deleting MB generated jobs without server restart?

2016-10-25 Thread Michael Lasevich
Does anyone have a way of manually deleting the MB Pipeline Jobs that does 
not involve restarting the server?

Currently the only way I know to get it done is to delete the directory on 
the master filesystem and restart the server - which is obviously less than 
ideal on a busy server. 

I suspect there is a way to do it via groovy console, but have not had the 
time to look into it. Figured I'd ask if anyone here already solved this 
before diving in.

Thank you.

Would be really nice to have a UI for this. I know you can set it to 
auto-delete on scan, but often I need to delete a branch but keep the 
builds for a while (or even indefinitely) - yet other branches are created 
by mistake and need to be deleted immediately.

It may also be a nice thing to add a feature to "Retire" old jobs rather 
than delete them.


-M



Re: PermGen space

2016-10-25 Thread Michael Lasevich
This error does not appear to be coming from Jenkins (there are no Jenkins 
classes in the stack trace), so I would examine the job that is being 
executed and check whether the Maven options you set are actually taking 
effect. I do not see Maven classes in the stack trace either, so this may be 
some tool Maven runs as a separate process - in which case you need to check 
the args there.

-M

On Tuesday, October 25, 2016 at 9:23:34 AM UTC-7, GBANE FETIGUE wrote:
>
> Hi folks, 
> I am running Jenkins on Centos 6.5 base image and even though I have 
> enough memory I am always having that weird error message below. FYI I have 
> set on configuration system "Global maven OPTS this : -Xmx2048m 
> -XX:-UseGCOverheadLimit -XX:MaxPermSize=512m
> but nothing. any other ideas that might help ?
>
> FATAL: PermGen spacejava.lang.OutOfMemoryError 
> : 
> PermGen space
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>   at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
>   at java.lang.Class.getDeclaredMethods(Class.java:1860)
>   at 
> com.google.inject.spi.InjectionPoint.getInjectionPoints(InjectionPoint.java:674)
>   at 
> com.google.inject.spi.InjectionPoint.forInstanceMethodsAndFields(InjectionPoint.java:366)
>   at 
> org.eclipse.sisu.wire.DependencyAnalyzer.analyzeImplementation(DependencyAnalyzer.java:224)
>   at 
> org.eclipse.sisu.wire.DependencyAnalyzer.visit(DependencyAnalyzer.java:122)
>   at 
> org.eclipse.sisu.wire.DependencyAnalyzer.visit(DependencyAnalyzer.java:1)
>   at 
> com.google.inject.internal.UntargettedBindingImpl.acceptTargetVisitor(UntargettedBindingImpl.java:41)
>   at org.eclipse.sisu.wire.ElementAnalyzer.visit(ElementAnalyzer.java:177)
> Finished: FAILURE
>
>  
>



Re: Pipeline: Check if build parameter exist

2016-09-29 Thread Michael Lasevich
I cheat using try/catch:

try{ echo "MyParam is:"+ myParam} catch(ex) { myParam = "default" }


It's ugly, but it works. Obviously you can alter this concept to whatever 
works for you :-)
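One way to make this less ugly is to hide the try/catch in a small helper (a 
sketch only - paramOrDefault is a made-up name, and this assumes the 
parameter surfaces as a binding variable, as in the original post):

```groovy
// Hypothetical helper: return a binding variable if it exists, else a default
def paramOrDefault(String name, def defaultValue) {
    try {
        return getBinding().getVariable(name)
    } catch (groovy.lang.MissingPropertyException ignored) {
        return defaultValue
    }
}

def myParam = paramOrDefault('MY_PARAM', 'default')
echo "MyParam is: ${myParam}"
```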


-M

On Thursday, September 29, 2016 at 1:37:05 AM UTC-7, Sverre Moe wrote:
>
> I can no longer check if a build parameter exist before accessing it
>
> This has worked previously before I updated Jenkins and the Pipeline 
> plugins:
> def myParam = false
> if (getBinding().hasVariable("MY_PARAM")) {
> myParam = Boolean.parseBoolean(MY_PARAM)
> }
>
>
> final def myParam = MY_PARAM
>
>
> The latter will fail on the very first build on all new branches that are 
> added.
>
>
> This will print out only for the second println.
> if (getBinding().hasVariable("MY_PARAM")) {
> println "My_PARAM1="+MY_PARAM
> }
> println "MY_PARAM2="+MY_PARAM
> Output: 
> MY_PARAM2=false
>



Re: Need CPS/Groovy understanding - problem with setters/getters

2016-09-28 Thread Michael Lasevich
Well, apparently it is a known issue for about a year, with little activity 
:-(

https://issues.jenkins-ci.org/browse/JENKINS-31484

-M

On Wednesday, September 28, 2016 at 6:00:40 PM UTC-7, Michael Lasevich 
wrote:
>
> Can anyone who understands Groovy/CPS look at this and see if this is even 
> resolvable/fixable?
>
> Ok, so after much hair loss and much unpredictable behavior - I have 
> discovered an ugly bug (feature?): Groovy getters/setters that work 
> fine under regular Groovy fail miserably under Pipelines/Jenkins. I 
> suspect this is due to CPS, although I have not yet confirmed that, and 
> wrapping it in a @NonCPS tag seems to have not affected the problem.
>
> Consider this simple Groovy demo bean with a custom getter(same thing 
> happens with setters, btw, I just wanted to keep example simple):
>
> class Bean implements Serializable{
>
>  def name = "unset"
>
>
>   String getName(){
>
>if (this.name == "unset"){
>
>  this.name = "is not set"
>
>}
>
>return this.name
>
>  }
>
> }
>
> and following pipeline code is using it:
>
> import Bean
>
> b = new Bean()
>
> echo("Bean name: "+ b.name)
>
>
> The code here works fine under plain Groovy (replace "echo" with 
> "println") - but under pipelines it blows up. Looking under the hood it 
> appears that the 'return this.name' literally throws CPS for a loop, it 
> is being replaced with "return this.getName()" which causes infinite 
> recursion.
>
> Knowing this, the workaround seems to be to change the internal field name 
> to not match the getter/setter pattern - but that causes much ugliness (you 
> are forced to now have both getter/setter for common usage and your 
> getter/setter and your field name do not match)
>
> My suspicion is that this is a bug in CPS, but not clearly understanding 
> the purpose/benefits of CPS - I am unclear if this is even fixable. Best I 
> can figure is that CPS injects a lot of headaches in exchange for ability 
> to serialize your workflow at any point in time - a benefit which I am not 
> sure I care much about, considering bulk of the builds happens in 
> sub-processes outside of CPS control... My point is that I clearly do not 
> understand enough of what is going on or why it is happening under 
> Pipelines but not under Groovy in general.  Can someone who understands 
> this  better see if this is a bug or something inherent to Groovy/CPS setup?
>
> Thanks,
>
> -M
>
>
>



Re: Maintaining a unique Build Number across multiple build servers

2016-09-28 Thread Michael Lasevich
There are many approaches to this problem, and I am not sure what your 
setup or requirements are, but here is some information that may be helpful.

* It is important to understand that any jenkins build has two ids - the 
actual immutable build_id - which is always consecutive for a job and 
specific to that job alone, and the build display name which is what you 
see in the UI. You can set the build display name to anything you want from 
within your job. By default it is # (pound symbol) followed by build_id.

* You are usually better off having one build server with many slaves than 
parallel build servers that are independent of each other. This way as long 
as you are building the same job, it does not matter what slave you are 
building on, your build ids will be consecutive and unique.

* If you do need multiple jobs on the same master (even if builds are performed 
on any slave), you can write some code that will issue numbers from the same 
pool.
a file to disk on master. I have implemented this many times via writing my 
own plugin, but these days it is far easier to do this via a pipelines 
shared library (if you are using pipelines, and you should). The simplest 
thing to do is to have a function that takes some id, and for that id it 
issues you next number by incrementing a synchronized counter somewhere. Of 
course this will still keep you on a single Jenkins master...

* If you want to do this across multiple masters as you indicated (and I 
would strongly urge you to reconsider) - you can use the above with some 
sort of a shared storage resource that either supports atomic changes or 
locking - for example you can use a shared MySQL DB. You just have a table 
you lock and increment a counter in while locked (there are many other 
approaches too)

* If you do not care for keeping a shared counter somewhere and do not mind 
longer build ids, a cheap and easy way to go is to assign each jenkins 
server an id number, then use a build name consisting of timestamp followed 
by server id, followed by whatever you want to guarantee it is unique 
within a single server. Your build ids will be long, but will always be in 
a sequential order - even if not consecutive.

* All that said, I would also reconsider overall value of consecutive build 
numbers - in the end you probably only care about what branch it was done 
from and which commit it was done from. If you use a SCM that provides you 
with a sequential commit id (SVN is awesome for this) - just use the commit 
id as your build indicator. If you are using git or any other 
non-sequential commit SCM - you can just include the hash of the SCM on 
build and to get sequence, you can cheat a little by also counting commits 
in the log. The last bit is not exactly great, as git allows you to rewrite your 
commit history, but if you can lock it down, it is a reasonably functional 
indicator.
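The shared-counter idea above can be sketched as a shared-library step. Everything here is illustrative (the step name, the file location), and this sketch is only safe on a single master, since the lock lives in one JVM:

```groovy
// vars/nextBuildNumber.groovy -- hypothetical shared-library step.
// Issues consecutive numbers per pool id, backed by a file on the master.
import java.util.concurrent.locks.ReentrantLock

// One lock per loaded library instance; fine for a sketch, but a real
// implementation would need a truly global lock (or an external store
// such as the locked DB table mentioned above).
@groovy.transform.Field
def counterLock = new ReentrantLock()

def call(String poolId) {
    // Assumed master-side path; adjust to your JENKINS_HOME.
    def file = new File("/var/lib/jenkins/build-counters/${poolId}.txt")
    counterLock.lock()
    try {
        file.parentFile.mkdirs()
        int next = (file.exists() ? file.text.trim() as int : 0) + 1
        file.text = next.toString()
        return next
    } finally {
        counterLock.unlock()
    }
}
```

A Jenkinsfile could then do something like currentBuild.displayName = "#${nextBuildNumber('my-product')}" to stamp builds from a shared pool.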

Good luck,

-M

On Monday, September 19, 2016 at 12:54:15 PM UTC-7, Robert Kruck wrote:
>
>  Is it possible to preserve the integrity of build numbers (NO DUPLICATES 
> and build numbers in order) while building in multiple Jenkins build 
> servers?
>
> If this capability exists in Jenkins, what Jenkins plugins are required, 
> and what versions of Jenkins itself, and of the required Jenkins plugins, 
> are needed?
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/84b32adc-04d5-417f-b0a6-01469c13ed5f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Need CPS/Groovy understanding - problem with setters/getters

2016-09-28 Thread Michael Lasevich
Can anyone who understands Groovy/CPS look at this and see if this is even 
resolvable/fixable?

Ok, so after much hair loss and much unpredictable behavior, I have 
discovered an ugly bug (feature?): Groovy getters/setters that work 
fine under regular Groovy fail miserably under Pipelines/Jenkins. I 
suspect this is due to CPS, although I have not yet confirmed that, and 
wrapping it in a @NonCPS annotation seems not to have affected the problem.

Consider this simple Groovy demo bean with a custom getter (the same thing 
happens with setters, btw; I just wanted to keep the example simple):

class Bean implements Serializable {

  def name = "unset"

  String getName() {
    if (this.name == "unset") {
      this.name = "is not set"
    }
    return this.name
  }
}

and following pipeline code is using it:

import Bean

b = new Bean()
echo("Bean name: " + b.name)


The code here works fine under plain Groovy (replace "echo" with "println") 
- but under Pipelines it blows up. Looking under the hood, it appears that 
the 'return this.name' literally throws CPS for a loop: it is being 
replaced with "return this.getName()", which causes infinite recursion.

Knowing this, the workaround seems to be to rename the internal field so that 
it does not match the getter/setter pattern - but that causes much ugliness 
(you are forced to write an explicit getter/setter for common usage, and your 
getter/setter and your field name no longer match).
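A minimal sketch of that workaround (names are illustrative): back the property with a field whose name does not match the getter, so the CPS rewrite cannot turn the field access into a recursive getName() call:

```groovy
class Bean implements Serializable {
    // Deliberately NOT named "name", so 'return _name' below is a plain
    // field read rather than something CPS rewrites into getName().
    private String _name = "unset"

    String getName() {
        if (_name == "unset") {
            _name = "is not set"
        }
        return _name
    }

    void setName(String value) {
        _name = value
    }
}
```

Callers can still use b.name, which dispatches to the custom getter; only the internal storage is renamed.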

My suspicion is that this is a bug in CPS, but not clearly understanding 
the purpose/benefits of CPS - I am unclear if this is even fixable. Best I 
can figure is that CPS injects a lot of headaches in exchange for ability 
to serialize your workflow at any point in time - a benefit which I am not 
sure I care much about, considering bulk of the builds happens in 
sub-processes outside of CPS control... My point is that I clearly do not 
understand enough of what is going on or why it is happening under 
Pipelines but not under Groovy in general.  Can someone who understands 
this  better see if this is a bug or something inherent to Groovy/CPS setup?

Thanks,

-M


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/f977f049-8b7e-440d-ab4f-3bf2f621abf1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: The new Global Pipeline Library

2016-09-21 Thread Michael Lasevich
I am making an assumption that "Modern SCM" is some new Jenkins SCM API not 
yet supported by most common SCMs - so I am just ignoring it and using 
Legacy SCM.

As for ${library.name.version} - you use it if you specify branch/tag in 
GIT or in URL for SVN

Basically when you load the library with @Library syntax, you can (if 
enabled) specify a version (release, beta, v2, whatever you want to call 
it). When SCM fetches it, it needs to know what to do with this "version" 
you specified. So, for SVN you include ${library.name.version} in your SVN 
URL in global library config, so when it checks it out, it will check out 
the version specified in your @Library statement. Similarly for git, which 
has a fixed URL for all branches, you specify fixed URL but use the version 
variable in your branch specifier (which can point to either a branch or a 
tag) 

So for example for my "testlib" library, my branch specifier is : 
'refs/heads/release/${library.testlib.version}'

So when I do '@Library "testlib@beta" _' in my code, I get branch 
release/beta

Here is a tiny trick I do here, btw -- notice that I hardcode the 
"release/" prefix in my branch specifier. I do this to prevent the scripts from 
specifying any branch other than ones with the "release/" prefix - this 
prevents developers working on new features from being able to load those 
features into Jenkins (which for global libs runs outside the sandbox) 
without a proper peer review ("release/*" branches are restricted in my 
config).
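Putting those pieces together (the library name, branch layout, and SVN URL are just this example's assumptions):

```groovy
// Global library "testlib" configured via Legacy SCM (git) with the
// branch specifier:
//     refs/heads/release/${library.testlib.version}
//
// A Jenkinsfile then selects a version at load time:
@Library('testlib@beta') _    // checks out branch release/beta

// For SVN, the variable goes into the repository URL instead, e.g.:
//     https://svn.example.com/repo/libs/${library.testlib.version}/
```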

HTH,

-M




On Monday, September 12, 2016 at 12:48:02 AM UTC-7, Sverre Moe wrote:
>
> There has been changes in the Global Pipeline Library plugin
> https://issues.jenkins-ci.org/browse/JENKINS-31155
>
> Jenkins Configuration => Global Pipeline Libraries
> Choosing "Modern SCM" the next drop down list is empty. What does this 
> option actually do?
> Choosing "Legacy SCM" I'm able to define a git repository for the library.
>
>
> 
>
>
> 
>
>
>
>
> According to the documentation:
>
>> It can be used in two modes: a legacy mode in which there is a single Git 
>> repository hosted by Jenkins itself, to which you may push changes; and a 
>> more general mode in which you may define libraries hosted by any SCM in a 
>> location of your choice.
>
>
> Should it not be the other way around for the modes in configuration? 
> Modern SCM should have option to add SCM.
>
>
>
> The best way to specify the SCM is using an SCM plugin which has been 
>> specifically updated to support a new API for checking out an arbitrary 
>> named version (Modern SCM option). Initially the Git and Subversion plugins 
>> have this modification
>
> Choosing Modern SCM there is no option in the drop down list. Neither git 
> nor subversion. Though they are with Legacy SCM.
>
>
>
> If your SCM plugin has not been integrated, you may select Legacy SCM and 
>> pick anything offered. In this case, you need to include 
>> ${library.yourLibName.version} somewhere in the configuration of the SCM, 
>> so that during checkout the plugin will expand this variable to select the 
>> desired version.
>
> For a library in a git repository would I need to tag a specific version?
>
>
> ---
>
>
> It is great that we now can use a shared library stored with our own Git 
> remote repository server. However until we transition to that have 
> workflowLibs.git work as before.
>
>
> If I move the shared library to a different SCM I would probably need to 
> remove workflowLibs.git otherwise how would Jenkins know which one to load 
> as I need to load it implicitly.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/3055d52a-078d-4c25-8f32-b0241e5b7d71%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Jenkins Integration with FreeIPA LDAP

2016-09-20 Thread Michael Lasevich
Out of curiosity, have you gotten the groups to work in this config?

I have this same setup working, but I can only see groups IFF the user 
already has admin rights (which is very backwards and useless, as groups 
are mostly meaningless if you are already an admin). I opened a bug with the 
LDAP plugin (https://issues.jenkins-ci.org/browse/JENKINS-37858)

-M

On Tuesday, September 20, 2016 at 9:49:05 AM UTC-7, Neil White wrote:
>
> I'm running Jenkins 2.21 and I got it running on LDAP with only the 
> following details.
> This is from the config.xml, which you can translate into the frontend.
>
> <securityRealm class="hudson.security.LDAPSecurityRealm">
>   <server>ipa.example.com</server>
>   <rootDN>dc=example,dc=com</rootDN>
>   <inhibitInferRootDN>false</inhibitInferRootDN>
>   <userSearchBase>cn=users,cn=accounts</userSearchBase>
>   <userSearch>uid={0}</userSearch>
>   <groupSearchBase>cn=groups,cn=accounts</groupSearchBase>
>   <groupSearchFilter>memberOf=cn=jenkins,cn=groups,cn=accounts,dc=example,dc=com</groupSearchFilter>
>   <groupMembershipStrategy class="jenkins.security.plugins.ldap.FromGroupSearchLDAPGroupMembershipStrategy">
>     <filter/>
>   </groupMembershipStrategy>
>   <managerDN>uid=jenkins,cn=sysaccounts,cn=etc,dc=example,dc=com</managerDN>
>   <managerPasswordSecret>TRLkkCtAA1X2hAyqXXXOsJz8Q3txUCTprcl/qTItIFNDrR5x7</managerPasswordSecret>
>   <disableMailAddressResolver>false</disableMailAddressResolver>
>   <displayNameAttributeName>displayname</displayNameAttributeName>
>   <mailAddressAttributeName>mail</mailAddressAttributeName>
> </securityRealm>
>
>
>
>
> On Saturday, September 19, 2015 at 1:03:25 PM UTC+2, Yogesh Sharma wrote:
>>
>> Hi List,
>>
>> I am trying to integrate Jenkins with FreeIPA LDAP. Configuration is done 
>> and seems to be OK as there is no error. However, I am not able to 
>> authenticate into the Jenkins using FreeIPA LDAP users.
>>
>> Jenkins logs does not say anything. Tried adding Log Level:
>>
>> org.acegisecurity.providers.ldap.authenticator,org.acegisecurity.providers.ldap.LdapAuthenticationProvider
>>  
>> (WARNING) but does not help.
>>
>> Below is LDAP Config in Jenkins:
>>
>>
>>   root DN
>>   Allow blank rootDN
>>   User search base
>>   User search filter
>>   Case sensitivity...
>>   Group search base
>>   Group search filter
>>   Group membership
>>  Parse user attribute for list of groups
>>  Search for groups containing user
>>   Group membership filter
>>   Manager DN
>>   Manager Password
>>   Display Name LDAP attribute
>>   Email Address LDAP attribute
>>   Disable Ldap Email Resolver
>>   Enable cache
>>   Environment Properties
>>  Login with Google
>>  PWauth Authentication
>>  Unix user/group database
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/00d524dc-7f5d-4792-927f-3d3d173ed5a3%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Is adding private key to Jenkins credential list a security breach?

2016-09-20 Thread Michael Lasevich
You got it backwards: the master connects to the slave using standard SSH 
pub/private key auth. So, since the master is connecting to the slave, you are 
not putting the slave's private key on the master; you are putting the 
master's public key on the slave. While this looks like the same thing 
physically, logically it explains why the private key belongs to the master. 
Of course, for extra 
convenience, you can use different keypairs for different slaves - but that 
is optional.

-M


On Sunday, September 18, 2016 at 6:57:58 PM UTC-7, John Cho wrote:
>
> Hi,
> I am reading thru how to set up slaves on Jenkins using ssh keys.   Read 
> thru about three articles on how to do that.   According to them, the setup 
> is based upon using the slave as a ssh server with public and private keys 
> and it adds the slave's private key to the Jenkins master's credential 
> instead of the slave's public key.  Private key should never be shared.   
> Any thought on this practice?   Or, am I missing any?   Thanks in advance.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/f94c8cf5-145d-40bf-b3a2-745f83bd7570%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Pipeline external global library in SVN - errors

2016-09-16 Thread Michael Lasevich
You are probably farther along than me as I have not even looked at the 
code, but what I am observing on my end is this:

There is some sort of a shared location, in my case a "workspace@libs/" 
directory, which seems to house the repo (note that the name of the library 
is not there), and then inside individual builds there is a 
libs/<library name>/ directory which contains the snapshot in time of the 
library that is actually used for that build. I am using git, so there is a 
git repo in the shared location but only plain files inside the build - I 
suspect there is a similar arrangement for svn

A few things that I see here that are different in your case:

1 - Your workspace is not named "workspace@libs" but is instead named 
"x-pipeline-1@libs" - not sure if this is a per-build directory, or a 
shared repo dir
2 - If #1 is your shared directory,  you have an extra subdirectory in 
there.

Try searching for "cmd.groovy" under your job directory and see where all 
the copies are. See where there is a .svn dir vs where there is just a copy 
and make sure that it is in the right place.

-M


On Friday, September 16, 2016 at 5:03:13 PM UTC-7, Brian Ray wrote:
>
> All good tips. I've been blowing away the local copy between attempts and 
> currently have this structure (note: I changed the configured libname from 
> *helpers 
> *to *pipelineGlobalHelpers *in the latest attempt, and this is checking 
> out on my local troubleshooter master right now--this is even before the 
> first *stage *or *node *blocks):
>
> c:\Jenkins\workspace\Dev-Snippets\x-pipeline-1@libs\pipelineGlobalHelpers>
> dir
>  Volume in drive C has no label.
>  Volume Serial Number is 4456-EE0F
>
>  Directory of c:\Jenkins\workspace\Dev-Snippets\x-pipeline-1@libs\
> pipelineGlobalHelpers
>
> 2016-09-16  16:18    <DIR>          .
> 2016-09-16  16:18    <DIR>          ..
> 2016-09-16  16:18    <DIR>          resources
> 2016-09-16  16:18    <DIR>          src
> 2016-09-16  16:18    <DIR>          vars
>0 File(s)  0 bytes
>5 Dir(s)  249,994,575,872 bytes free
>
>
> This 
> <https://github.com/jenkinsci/workflow-cps-global-lib-plugin/blob/1b70381dbda34e6fd9acb15b4c206e9aec75c965/src/main/java/org/jenkinsci/plugins/workflow/libs/LibraryAdder.java#L147-L183>
>  
> is likely the method that's complaining; see the *throw *at the end.
>
> The thing that I am puzzling at is the construction of the directory path:
>
> FilePath libDir = new FilePath(execution.getOwner().getRootDir()).child(
> "libs/" + name);
>
> That to me suggests it yields 
> c:/Jenkins/workspace/Dev-Snippets/x-pipeline-1/libs/pipelineGlobalHelpers, 
> instead of .../x-pipeline-1@libs/pipelineGlobalHelpers, but maybe I am 
> misunderstanding *FilePath#child* or *#getRootDir*.
>
> There are still a few more experiments to try.
>
>
> On Friday, September 16, 2016 at 4:28:05 PM UTC-7, Michael Lasevich wrote:
>>
>> Implicit load was to work around issues with '@Library' syntax, but I 
>> doubt that is your issue here.
>>
>> I would check that your SVN URL is pointing to a directory that has "vars" 
>> in it and double-check that it is checking out the right dir. Look for the 
>> "vars" dir in the workspace@libs/ dir of your job and/or in 
>> "/builds/##/libs/helpers/" inside your specific build.
>>
>> I would also maybe wipe the local cache of the svn repo and force a full 
>> checkout again.
>>
>> -M
>>
>> On Friday, September 16, 2016 at 4:12:43 PM UTC-7, Brian Ray wrote:
>>>
>>>
>>> <https://lh3.googleusercontent.com/-lo8y6lCOagw/V9x6kpax5UI/AQo/z44qH8va24Y22p7rhBE6kwnpMvMUj3MCgCLcB/s1600/firefox_2016-09-16_15-53-14.png>
>>>
>>> You beat me to the post. The hyphen in the lib name *is* causing the 
>>> failure to interpolate, much the same as it would in Groovy GString 
>>> interpolation. (Though I think the mechanism here is different, since 
>>> plugins are written in Java ... )
>>>
>>> The above lib config fixed the interpolation problem. But the vars/ and 
>>> src/ subdirectory discovery issue pops up again.
>>>
>>> Started by user Brian Ray <http://cic-qa-ber:8080/user/brian66481>
>>> Loading library helpers@trunk
>>> Updating 
>>> https://XX/svn/releng/trunk/retools/pipeline-global-libs/pipeline-global-helpers
>>>  
>>> <https://cic-svr-svn01.landacorp.local:18080/svn/releng/trunk/retools/pipeline-global-libs/pipeline-global-helpers>
>>>  at revision '2016-09-16T15:50:21.511 -0700'
>>> At revision 85052
>>>
>>> No changes for 
>>> https://

Re: Pipeline external global library in SVN - errors

2016-09-16 Thread Michael Lasevich
Implicit load was to work around issues with '@Library' syntax, but I doubt 
that is your issue here.

I would check that your SVN URL is pointing to a directory that has "vars" in 
it and double-check that it is checking out the right dir. Look for the "vars" 
dir in the workspace@libs/ dir of your job and/or in 
"/builds/##/libs/helpers/" inside your specific build.

I would also maybe wipe the local cache of the svn repo and force a full 
checkout again.

-M

On Friday, September 16, 2016 at 4:12:43 PM UTC-7, Brian Ray wrote:
>
>
> <https://lh3.googleusercontent.com/-lo8y6lCOagw/V9x6kpax5UI/AQo/z44qH8va24Y22p7rhBE6kwnpMvMUj3MCgCLcB/s1600/firefox_2016-09-16_15-53-14.png>
>
> You beat me to the post. The hyphen in the lib name *is* causing the 
> failure to interpolate, much the same as it would in Groovy GString 
> interpolation. (Though I think the mechanism here is different, since 
> plugins are written in Java ... )
>
> The above lib config fixed the interpolation problem. But the vars/ and 
> src/ subdirectory discovery issue pops up again.
>
> Started by user Brian Ray <http://cic-qa-ber:8080/user/brian66481>
> Loading library helpers@trunk
> Updating 
> https://XX/svn/releng/trunk/retools/pipeline-global-libs/pipeline-global-helpers
>  
> <https://cic-svr-svn01.landacorp.local:18080/svn/releng/trunk/retools/pipeline-global-libs/pipeline-global-helpers>
>  at revision '2016-09-16T15:50:21.511 -0700'
> At revision 85052
>
> No changes for 
> https://XX/svn/releng/trunk/retools/pipeline-global-libs/pipeline-global-helpers
>  
> <https://cic-svr-svn01.landacorp.local:18080/svn/releng/trunk/retools/pipeline-global-libs/pipeline-global-helpers>
>  since the previous build
> ERROR: Library helpers expected to contain at least one of src or vars 
> directoriesorg.codehaus.groovy.control.MultipleCompilationErrorsException 
> <http://stacktrace.jenkins-ci.org/search?query=org.codehaus.groovy.control.MultipleCompilationErrorsException>:
>  startup failed:
> WorkflowScript: Loading libraries failed
>
> 1 error
>
>   at 
> org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
>  [...]
>
>
>
> I bet there is something finicky about the root directory of the SVN 
> working copy. But I will give *Load implicitly* a shot too.
>
> Thanks
>
> On Friday, September 16, 2016 at 3:31:07 PM UTC-7, Michael Lasevich wrote:
>>
>> Is it possible that this is some odd issue with '-' symbols in the 
>> library name? I would try a simpler name. Also try to implicit load to 
>> simplify the load...
>>
>> -M
>>
>> On Friday, September 16, 2016 at 1:16:05 PM UTC-7, Brian Ray wrote:
>>>
>>> Evidently I cannot drive this post widget very well. The screenshots are 
>>> best clicked in reverse order, with the last two corresponding to the 
>>> *First 
>>> Try*, the middle two to *Second Try*, and the first two to *First Try.*
>>>
>>>>
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/5b85187a-21f1-42c0-91d1-6339beed9560%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Pipeline external global library in SVN - errors

2016-09-16 Thread Michael Lasevich
Is it possible that this is some odd issue with '-' symbols in the library 
name? I would try a simpler name. Also try to implicit load to simplify the 
load...

-M

On Friday, September 16, 2016 at 1:16:05 PM UTC-7, Brian Ray wrote:
>
> Evidently I cannot drive this post widget very well. The screenshots are 
> best clicked in reverse order, with the last two corresponding to the *First 
> Try*, the middle two to *Second Try*, and the first two to *First Try.*
>
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/ae55a91e-d7e7-4649-bf4e-bb03e0c1176a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Any way to manually delete a specific branch in a MB pipeline job?

2016-09-15 Thread Michael Lasevich
We have an MB pipeline job and we do not wish to delete old jobs even when 
branches are deleted (we keep them for historical use) - however, sometimes 
someone creates a wrong branch and deletes it, and we are left with an 
orphaned job. If I enable cleanup, it will delete ALL jobs without branches - 
not what we wish to do - but I see no other way to delete a single branch 
job. Any suggestions?

Thanks,

-M

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/07b6ef61-0225-4ba3-9107-9b341f8f438a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Favorites View not working properly for MultiBranch Pipelines

2016-09-07 Thread Michael Lasevich

Just noticed:  it appears that there is an issue between favorites view and 
MultiBranch pipelines - 

If I mark the MB parent job as favorite, it properly shows up in Favorites 
view, however if I mark one of the branches, it does not. It does, however, 
show up in list of favorites in the user profile editing screen - so the 
job IS marked as "Favorite"

It works fine for manually "foldered" jobs - so this issue is specific to 
MB Pipelines

Using:
* Jenkins 2.7.3
* Favorites 1.16
* Favorites View 1.0 
* Pipeline: Multibranch 2.8

In View  config I do have selected "Recurse in subfolders" and Regex ".*"

-M


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/973e31c6-c91a-4d36-addf-d3559ba3c3fa%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: How can I host a private Jeinkins Plugin Repo

2016-09-01 Thread Michael Lasevich
I am not sure how much more control you want, but all you need is to 
generate two JSON files and host them on a web server along with the 
plugins - it should be trivial to do. But why duplicate the work if it is 
already done for you?

-M

On Thursday, September 1, 2016 at 1:37:36 PM UTC-5, rudy...@gmail.com wrote:
>
> Thanks but not quite what I'm looking for.  I want more controll, e.g 
> either write my own or find the source code for the official Jenkins Update 
> Site.
>
> On Monday, August 29, 2016 at 12:28:22 PM UTC-4, rudy...@gmail.com wrote:
>>
>> For reasons I won't go into here,  I'd like to be able to host my own 
>> Jenkins Plugin repo and then configure my Jenkins server to use it. I have 
>> not been able to find any information on how to do this.  Any help would be 
>> appreciated
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/16605d68-affa-4dd2-b05d-77fb79bdb076%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Do not delete a completed job workspace under all conditions

2016-09-01 Thread Michael Lasevich
Workspaces are meant to be ephemeral and can be deleted/removed at any 
time. Counting on the same workspace being there for every build is a bad idea 
for many reasons - not the least of which is that the point of using a 
build system like Jenkins is to have reliable and repeatable builds - if 
you depend on some old data from another build and cannot function without 
it, you are creating very unstable builds.

You can use workspace as a place to cache things  (git repo, for one) so 
that you do not have to download them each time, but you have to verify 
they are what you expect and re-fetch them if they are not.
Otherwise, you need to create artifacts from builds if you want to keep the 
data.
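A rough sketch of that split between cached workspace state and archived artifacts (the script and path names are made up for illustration):

```groovy
node {
    checkout scm   // refreshes whatever clone survives in the workspace

    // A cache may have been wiped between builds -- verify, then refetch.
    if (!fileExists('deps/.fetched')) {
        sh './fetch-deps.sh && touch deps/.fetched'   // hypothetical script
    }

    sh './build.sh'   // hypothetical build step

    // Anything that must outlive the workspace becomes an artifact.
    archiveArtifacts artifacts: 'dist/**'
}
```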

-M

On Thursday, August 25, 2016 at 10:15:43 AM UTC-5, Robert Beddow wrote:
>
> Hi,
>
> I'm trying to find a way to keep a workspace and prevent it from being 
> deleted by Jenkins.
>
> If I have a project called My_Jenkins_Job, and I run it multiple times 
> concurrently, I end up with directories in the form:
>
> ./jenkins/workspace/My_Jenkins_Job
> ./jenkins/workspace/My_Jenkins_Job@2
> ./jenkins/workspace/My_Jenkins_Job@3
> ./jenkins/workspace/My_Jenkins_Job@4
>
> If I later run My_Jenkins_Job again, and some of the above are finished, 
> the lowest value workspace is removed and the new job runs under the same 
> directory path.
>
> e.g. 
> Running: ./jenkins/workspace/My_Jenkins_Job
> Finished: ./jenkins/workspace/My_Jenkins_Job@2
> Finished: ./jenkins/workspace/My_Jenkins_Job@3
> Running: ./jenkins/workspace/My_Jenkins_Job@4
>
> I start a new My_Jenkins_Job, ./jenkins/workspace/My_Jenkins_Job@2 is 
> deleted, then the new job will run under a new 
> ./jenkins/workspace/My_Jenkins_Job@2.
>
> My request is to find out how to occasionally force jenkins to skip a 
> finished directory because I want to keep it.
>
> So:
> Running: ./jenkins/workspace/My_Jenkins_Job
> Finished & keep: ./jenkins/workspace/My_Jenkins_Job@2
> Finished: ./jenkins/workspace/My_Jenkins_Job@3
> Running: ./jenkins/workspace/My_Jenkins_Job@4
>
> and I start a new My_Jenkins_Job, ./jenkins/workspace/My_Jenkins_Job@2 is 
> skipped, ./jenkins/workspace/My_Jenkins_Job@3 is deleted, then the new job 
> will run under a new ./jenkins/workspace/My_Jenkins_Job@3.
>
> The options I've seen/thought of are:
> Custom Workspace - set a custom workspace for each run. This isn't what I 
> need, as normally I want standard behaviour, i.e. cycle through the 
> workspaces replacing them as they finish. Also, I may decide to keep a 
> workspace after the build has started.
>
> Archive workspace - some of the paths in the job are absolute. I believe 
> that archiving the workspace will move it to another parent directory, 
> which would break all the full paths. Also, this could only be enabled up 
> front
>
> Force the job to keep "building" even when it is finished. This doesn't 
> work if jenkins is restarted, and again it can only be enabled up front.
>
> The only solution that I can think of is to "touch" a file within the 
> workspace e.g.:
> ./jenkins/workspace/My_Jenkins_Job@2/.jenkins_keepme
> And jenkins treats that workspace directory as if it is still building. 
> This way I can manually choose to add the file at any time. Also it works 
> if jenkins is restarted.
>
> Is there anything like this out there? Or does anyone have any suggestions 
> of how else I could get the functionality I'm looking for?
>
> Thanks,
> Robert
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/bd14adb4-cb02-42df-85bb-36f3c5b2c81a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Multibranch triggers configuration

2016-09-01 Thread Michael Lasevich
Worth mentioning that the MB "build" is not a build of the code, but an 
indexing of the repo for branch status. So if you schedule it to build 
every 5 minutes, it will re-index the repo every 5 minutes - but it will 
only build the branches that have changed, and only when there is a change 
(or however you configure it)

-M

On Wednesday, August 31, 2016 at 11:09:55 AM UTC-5, Spamme wrote:
>
> Hello, I was testing the multibranch job in Jenkins (1.642.4) because I 
> wanted to trigger a (Visual Studio) build after each commit in a branch, 
> since it happens very often that our developers commit broken code. 
> I configured the multibranch project with the Jenkinsfile and it builds. 
> Then I wanted to trigger the build with a web hook from VisualSVN (I hope 
> to switch soon to GitHub), but I discovered that this feature is 
> broken (https://issues.jenkins-ci.org/browse/JENKINS-33020). The other 
> two available options are "Build Periodically", which is a bad idea, and 
> "Periodically if not otherwise run", which doesn't seem to be better than 
> the former. There is also no SCM poll option. So I'm quite puzzled: how 
> is this multibranch job supposed to be used? How can it be configured to 
> build a branch as soon as something is committed to the SCM? The only 
> trigger that could do it has been broken for months, so there should be 
> another way to achieve it, or not? 
>
>
>
>



Re: LDAP groups and Role Based Authorization not playing nice.

2016-09-01 Thread Michael Lasevich
For what it's worth, I believe this is an issue with the LDAP Plugin, as I 
was able to recreate it without Role Based auth, using Matrix based auth as 
well.

From a little digging I did, it appears to be some odd permissions issue: 
if you grant the user explicit admin rights ahead of time, it can read 
the groups - but any other kind of user fails. This of course makes LDAP 
Groups completely useless, as only admins can see them. I filed a ticket 
with the LDAP Plugin team: https://issues.jenkins-ci.org/browse/JENKINS-37858

-M

On Monday, August 15, 2016 at 3:59:56 PM UTC-5, Michael Lasevich wrote:
>
> I am trying to do something I thought I have done many times before, but 
> it is not working now - using Roles based Authorization with LDAP 
> authentication and specifically LDAP Groups
>
> I believe I have LDAP Authentication setup and working  for both users and 
> groups
> I believe I have Role based authentication set up.
>
> Granting roles to LDAP users directly - either global or project roles - 
> works. I can login with LDAP user and get expected permissions. Granting 
> roles to 'authenticated' also seems to work.
>
> However if I grant permissions to LDAP group - it just does not work. 
>
> I am very confused why assigning roles to groups does not work.
>
> Few thoughts and observations: 
>
> * "Assign Roles" UI recognizes LDAP Groups and shows a group icon next to 
> them.
>
> * "User status" UI (/user/username URI) shows groups for the user and I 
> even ran that LDAP test groovy script, which worked as expected. Although...
>
> * "User Status" only shows groups to the "admin" user. A regular user with 
> just access to run specific jobs does not see their own groups - perhaps 
> something is blocking non-admin users from reading their own groups?
>
> * Increasing logging shows that a user that was granted admin rights 
> directly has all the groups in the "Granted Authorities" but non-admin user 
> only has "authenticated" - interestingly enough admin user does NOT have 
> 'authenticated'...
>
> * Don't think it is relevant here, but in the past I recall having to do a 
> special prefix for groups (like '@' I think) - not sure if this is still 
> necessary
>
>
> Versions -- Running this on:
>
> * Jenkins 2.10
> * LDAP Plugin 1.12
> * Role Based Authorization Strategy 2.3.2
>
> Any thoughts or suggestions would be appreciated
>
> Thanks,
>
> -Michael
>
>
>
>



Re: GitLab integration: web hooks for GitLab CE with Jenkins multi-branch project?

2016-08-29 Thread Michael Lasevich
This is probably more of a GitLab than Jenkins discussion but I think 
GitLab can send WebHooks to anything, so it should be able to trigger 
Jenkins jobs with generic WebHook and I believe there is a way to have 
Jenkins accept WebHooks. That said, this only works when there is a direct 
 network connection between GitLab and Jenkins.  GitHub has a brilliant 
alternative using SQS queue for triggering jobs in Jenkins - which allows 
triggering of jobs without direct connection between the two (as long as 
both can talk to SQS). I have not found an equivalent for GitLab and fear I 
would either have to write a WebHook to SQS bridge (easy, but not great in 
terms of security/reliability) or write a native GitLab plugin (no idea 
what that entails) :-/

-M 



On Thursday, August 25, 2016 at 11:04:44 PM UTC-7, ST wrote:
>
> Hi,
>
> Is there a way to make push event trigger the build for the associated 
> branch in a Jenkins multi-branch project, using GitLab 8.x Community 
> Edition (CE)?
>
> The GitLab documentation about Jenkins integration
>   http://docs.gitlab.com/ee/integration/jenkins.html 
> 
> mentions that the Jenkins GitLab Hook Plugin is deprecated in favor of the 
> Jenkins GitLab Plugin. However since there is "...gitlab.com/ee/..."  in 
> that URL it is unclear to me whether GitLab CE is supported at all, and 
> how/where one is supposed to do the configuration?
>
> Anyone using GitLab CE and was able to configure web hooks for 
> multi-branch project? Or is the only solution in this case to use "SCM 
> polling"?
>
> Thanks!
>  stefan.
>



Re: Parametrized MultiBranch Pipeline Jobs?

2016-08-26 Thread Michael Lasevich
In case anyone is searching, here is my own answer to this question - there 
is a 'parameters' command in the Pipeline DSL designed for this exact 
purpose, but there are some gotchas:

* First, the code generator generates broken code - so do not trust it. 
Luckily the main issue is easy to fix by replacing the "parameters [...]" 
syntax with "parameters([...])" syntax - but there are other issues, so 
double-check.

* Then there is the fact that the parameters are set from the job itself, 
so essentially you are setting parameters for the NEXT run, not for the one 
that is running. This actually makes sense once you wrap your head around it.

* And this is a very nasty one - if you make a mistake in your parameters() 
call that leads to an invalid config (for example, a Choice parameter with 
no choices), you are in trouble. Because to fix it, you have to run the job, 
and the job will not run until you fix the problem. I ended up having to 
delete the entire MB job to fix it.
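
For anyone landing here, a minimal scripted-Jenkinsfile sketch of the above 
(parameter names and defaults are purely illustrative, not from any real job):

```groovy
// Registers parameter definitions from the Jenkinsfile itself. Note they
// take effect for the NEXT run, as described above. Explicit parentheses
// avoid the broken snippet-generator syntax.
properties([
    parameters([
        string(name: 'DEPLOY_ENV', defaultValue: 'staging',
               description: 'Illustrative string parameter'),
        booleanParam(name: 'RUN_TESTS', defaultValue: true,
                     description: 'Illustrative boolean parameter')
    ])
])

node {
    // On runs after the first, the values are available via the params map
    echo "env=${params.DEPLOY_ENV} tests=${params.RUN_TESTS}"
}
```

On the very first automatic run the defaults apply, since the definitions 
only take effect once the job has run.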

-M


On Friday, August 26, 2016 at 12:52:49 PM UTC-7, Michael Lasevich wrote:
>
> Is it possible to add parameters to a multibranch pipeline job? 
>
> I am happy with the job running with some sorts of default values when 
> running automatically, but it would be nice to be able to have Jenkinsfile 
>  (or even manually) define parameters that could be optionally provided 
> when running the job manually.
>
> Thanks,
>
> -M
>
>
>



Parametrized MultiBranch Pipeline Jobs?

2016-08-26 Thread Michael Lasevich
Is it possible to add parameters to a multibranch pipeline job? 

I am happy with the job running with some sort of default values when 
running automatically, but it would be nice to be able to have the 
Jenkinsfile (or even manual configuration) define parameters that could be 
optionally provided when running the job manually.

Thanks,

-M




Re: Archive multiple artifacts with same name

2016-08-26 Thread Michael Lasevich
Have you considered creating a temp subdirectory named after the identity of 
your OS (it can be generated automatically), moving your artifacts into that 
directory, and archiving the directory? You end up with artifacts with the 
same names in different directories - easy to browse and link to, and it 
does what you want.

Something like this (untested):

def temp = "to_archive"
sh """
  dir="${temp}/\$(lsb_release -si)-\$(lsb_release -sr)-\$(uname -m)"
  mkdir -p "\${dir}"
  cp *.rpm "\${dir}"
"""
dir(temp) { archive '**' }


-M

On Monday, June 13, 2016 at 1:14:39 AM UTC-7, Sverre Moe wrote:
>
> As I am building on multiple slave nodes I get RPM artifacts from each 
> node. Building on 4 64bit Linux OS, I will get 4 distinct artifacts with 
> the same name. Using ArtifactArchiver will only archive one of those 
> distinct archives and overwrite the previous archived artifact. Considering 
> since each OS may have different versions of libraries one single artifact 
> may not work on all the OS.
>
> Is there a way around this problem that will allow me to archive 4 
> artifacts with same name?
> I am using Jenkins Pipeline and performing the following step
> step([$class: 'ArtifactArchiver', artifacts: '*.rpm', excludes: null, 
> fingerprint: true, onlyIfSuccessful: true])
>
> When I was previously using Multi-configuration builds, this was not a 
> problem since each configuration would show their own artifacts.
>



Multibranch Pipelines, push, and ignoring commits in certain paths

2016-08-25 Thread Michael Lasevich
We have a setup where git server (Stash) notifies Jenkins server via 
WebHook of commits in certain branches and a MB job executes builds on 
branches matching the change 

This works fine, but we would like to ignore changes not affecting the 
build - so only react to changes in certain files/directories. Git Jenkins 
plugin appears to offer this via "Additional Behavior" called "Polling 
ignores commits in certain paths" - but having this configured, it does not 
appear to work in this scenario (build still happens, even when the change 
is outside of the included paths) - Should this work and do I need to do 
anything else to get it to work?


I realize that plugin is for Polling and we are using WebHook/PUSH - but as 
I understand (and I may be wrong) webhook/push builds simply trigger 
immediate polling mechanism - or is this not the case with push for 
multibranch/pipelines?
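
For reference, the "Polling ignores commits in certain paths" behavior maps 
to the Git plugin's PathRestriction extension; in an explicit pipeline 
checkout it would look roughly like this (URL and region patterns are 
illustrative):

```groovy
// Sketch: restrict which changed paths count when deciding whether to build.
// includedRegions/excludedRegions take regular expressions, one per line.
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'PathRestriction',
                  includedRegions: 'src/.*',  // only changes here trigger builds
                  excludedRegions: '']],
    userRemoteConfigs: [[url: 'ssh://git@stash.example.com/proj/repo.git']]
])
```

Whether this is honored on webhook-triggered multibranch builds (as opposed 
to classic polling) is exactly the open question here.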

Thank you, 

-M



Re: LDAP groups and Role Based Authorization not playing nice.

2016-08-17 Thread Michael Lasevich
So, was this broken at some later time on purpose? I could have sworn I 
have used this functionality in the past.

Is this a problem in the Role plugin or the LDAP plugin? You mention the 
Role plugin, but the Role plugin is clearly recognizing the group for the 
admin - it seems like there is a security problem in the LDAP plugin that 
prevents it from reading the groups for non-admin users.

-M




On Wednesday, August 17, 2016 at 5:25:08 AM UTC-7, Indra Gunawan (ingunawa) 
wrote:
>
> LDAP groups never work with the Role Based Authorization plugin.  Only the 
> CloudBees paid version of the Role based plugin, combined with the Folder 
> plugin on Enterprise Jenkins, is made to work with LDAP groups.
>
> -Indra
>
> From: <jenkins...@googlegroups.com > on behalf of Michael 
> Lasevich <mlas...@gmail.com >
> Reply-To: "jenkins...@googlegroups.com " <
> jenkins...@googlegroups.com >
> Date: Monday, August 15, 2016 at 1:59 PM
> To: Jenkins Users <jenkins...@googlegroups.com >
> Subject: LDAP groups and Role Based Authorization not playing nice.
>
> I am trying to do something I thought I have done many times before, but 
> it is not working now - using Roles based Authorization with LDAP 
> authentication and specifically LDAP Groups 
>
> I believe I have LDAP Authentication setup and working  for both users and 
> groups
> I believe I have Role based authentication set up.
>
> Granting roles to LDAP users directly - either global or project roles - 
> works. I can login with LDAP user and get expected permissions. Granting 
> roles to 'authenticated' also seems to work.
>
> However if I grant permissions to LDAP group - it just does not work. 
>
> I am very confused why assigning roles to groups does not work.
>
> Few thoughts and observations: 
>
> * "Assign Roles" UI recognizes LDAP Groups and shows a group icon next to 
> them.
>
> * "User status" UI (/user/username URI) shows groups for the user and I 
> even ran that LDAP test groovy script, which worked as expected. Although...
>
> * "User Status" only shows groups to the "admin" user. A regular user with 
> just access to run specific jobs does not see their own groups - perhaps 
> something is blocking non-admin users from reading their own groups?
>
> * Increasing logging shows that a user that was granted admin rights 
> directly has all the groups in the "Granted Authorities" but non-admin user 
> only has "authenticated" - interestingly enough admin user does NOT have 
> 'authenticated'...
>
> * Don't think it is relevant here, but in the past I recall having to do a 
> special prefix for groups (like '@' I think) - not sure if this is still 
> necessary
>
>
> Versions -- Running this on:
>
> * Jenkins 2.10
> * LDAP Plugin 1.12
> * Role Based Authorization Strategy 2.3.2
>
> Any thoughts or suggestions would be appreciated
>
> Thanks,
>
> -Michael
>
>
>
>



LDAP groups and Role Based Authorization not playing nice.

2016-08-15 Thread Michael Lasevich
I am trying to do something I thought I have done many times before, but it 
is not working now - using Role Based Authorization with LDAP 
authentication, and specifically LDAP Groups.

I believe I have LDAP Authentication set up and working for both users and 
groups.
I believe I have Role based authentication set up.

Granting roles to LDAP users directly - either global or project roles - 
works. I can login with LDAP user and get expected permissions. Granting 
roles to 'authenticated' also seems to work.

However if I grant permissions to LDAP group - it just does not work. 

I am very confused why assigning roles to groups does not work.

Few thoughts and observations: 

* "Assign Roles" UI recognizes LDAP Groups and shows a group icon next to 
them.

* "User status" UI (/user/username URI) shows groups for the user, and I even 
ran that LDAP test groovy script, which worked as expected. Although...

* "User Status" only shows groups to the "admin" user. A regular user with 
just access to run specific jobs does not see their own groups - perhaps 
something is blocking non-admin users from reading their own groups?

* Increasing logging shows that a user that was granted admin rights 
directly has all the groups in the "Granted Authorities" but non-admin user 
only has "authenticated" - interestingly enough admin user does NOT have 
'authenticated'...

* Don't think it is relevant here, but in the past I recall having to do a 
special prefix for groups (like '@' I think) - not sure if this is still 
necessary


Versions -- Running this on:

* Jenkins 2.10
* LDAP Plugin 1.12
* Role Based Authorization Strategy 2.3.2

Any thoughts or suggestions would be appreciated

Thanks,

-Michael





Re: [workflow-plugin] Using workflow in same SCM as code

2015-08-12 Thread Michael Lasevich
Thanks, that looks awesome, but I am having some trouble with it in 
practice (I think these are all MultiBranch plugin problems, but still):

1 - It seems to be able to create jobs but not delete them - I get an error 
like this if a previously created job is to be deleted: 

hudson.security.AccessDeniedException2: SYSTEM is missing the Job/Delete 
permission

2 - What is the format for the "Include branches" field in the Git config? 
The only thing I was able to get to work is either * or master - any 
attempt at multiple specifiers seems to fail, and there are no docs for it 
that I can find. I would like to match several patterns. 

3 - It does not seem to work with any branches with a slash (/) in them (it 
creates the job, but you cannot use it due to a broken URL)

-M

On Wednesday, August 12, 2015 at 2:00:40 PM UTC-7, Jesse Glick wrote:

 On Tuesday, July 14, 2015 at 1:16:35 AM UTC-4, Michael Lasevich wrote:

 I want to keep my workflow script in same git repo as rest of my code and 
 want to avoid hard-coding URLs or branch names into the workflow.


  Multibranch workflow, in beta. 




SAML plugin vs Favorites View

2015-08-12 Thread Michael Lasevich
Hmm, 

Favorites view used to work just fine, click on the star and the job 
appears in the view. But I have recently started using SAML plugin for 
authentication and suddenly Favorites view is empty.  The stars are there 
and it remembers them from login to login.

Anyone else run into this? Any ideas if this is fixable?

-M



[workflow-plugin] Using workflow in same SCM as code

2015-07-13 Thread Michael Lasevich
I want to keep my workflow script in same git repo as rest of my code and 
want to avoid hard-coding URLs or branch names into the workflow. 

When I use "Groovy CPS DSL from SCM" it seems to check out into a separate 
workspace, so I do not have access to anything else in the git repo. I could 
check out the code in the workflow a second time, but for whatever 
reason the SCM variables are not available within the workflow (why???)

Is there a good way to do this?

Thanks,

-M



Re: Jenkins execute part of the job on master?

2013-07-05 Thread Michael Lasevich
Thanks, this actually works in the sense that I can definitely force 
execution on the master. At the very least this allows me to look at the 
code for this to figure out how this is done. I think I have a bigger 
issue though because the workspace files I need may not be on the master 
at that point. Need to think this through a bit more, but it is a great 
starting point. Thanks again!


-M

On 7/3/13 11:51 AM, lom109 wrote:

You can produce this behaviour if you decide to use the Groovy Plugin.

It lets you set up the environment using groovy ON the master... 
which means you can run arbitrary setup logic on the master before you 
copy over any artifacts to the slave.


Which isn't what the OP requested... but throwing that out there.

On Tue, Jul 2, 2013 at 1:07 PM, Mark Waite markwa...@yahoo.com wrote:


As far as I can tell, a job executes on a single node, whether
master or slave.  I'm not aware of any facility that allows a
Jenkins job to move execution steps to a node different than the
node executing the job.

Mark Waite


*From:* Michael Lasevich mlasev...@gmail.com
*To:* jenkinsci-users@googlegroups.com
*Sent:* Tuesday, July 2, 2013 10:17 AM
*Subject:* Jenkins execute part of the job on master?

Trying to understand the master-slave relationship in Jenkins nodes.

Is it possible to execute MOST of the job (build) on a slave
node, but the final step (one or more publishers?) on the
master node?

If I am writing my own plug in, how do I control where the
plugin executes? Or can I?


Thanks,

-M
















Jenkins execute part of the job on master?

2013-07-02 Thread Michael Lasevich
Trying to understand the master-slave relationship in Jenkins nodes. 
Is it possible to execute MOST of the job (build) on a slave node, but the 
final step (one or more publishers?) on the master node?

If I am writing my own plug in, how do I control where the plugin executes? 
Or can I?


Thanks,

-M
