Trying to understand master-slave relationship in jenkins node.
Is it possible to execute MOST of the job (build) on a slave node, but the
final step (one or more publishers?) on the master node?
If I am writing my own plug in, how do I control where the plugin executes?
Or can I?
Thanks,
From: Michael Lasevich <mlasev...@gmail.com>
To: jenkinsci-users@googlegroups.com
Sent: Tuesday, July 2, 2013 10:17 AM
Subject: Jenkins execute part of the job on master?
, Michael Lasevich wrote:
I want to keep my workflow script in the same git repo as the rest of my code
and want to avoid hard-coding URLs or branch names into the workflow.
Multibranch workflow, in beta.
--
You received this message because you are subscribed to the Google Groups
"Jenkins Users" group.
Hmm,
The Favorites view used to work just fine: click on the star and the job
appears in the view. But I recently started using the SAML plugin for
authentication, and suddenly the Favorites view is empty. The stars are there,
and it remembers them from login to login.
Anyone else run into this? Any ideas?
I want to keep my workflow script in the same git repo as the rest of my code
and want to avoid hard-coding URLs or branch names into the workflow.
When I use Groovy CPS DSL from SCM, it seems to check out into a separate
workspace, so I do not have access to anything in the git repo. I could
check
I am trying to do something I thought I had done many times before, but it
is not working now - using Role-based Authorization with LDAP
authentication, and specifically LDAP Groups.
I believe I have LDAP authentication set up and working for both users and
groups.
I believe I have Role-based Authorization
combined with the Folder plugin on
> Enterprise Jenkins are made to work with LDAP group.
>
> -Indra
>
Only admins can read
the groups - any other kind of user fails. This of course makes LDAP
Groups completely useless, as only admins can see them. I filed a ticket
with the LDAP Plugin team: https://issues.jenkins-ci.org/browse/JENKINS-37858
-M
On Monday, August 15, 2016 at 3:59:56 PM UTC-5, Michael Lasevich wrote:
I am not sure how much more control you want, but all you need is to
generate two JSON files and host them on a web server along with the
plugins - it should be trivial to do. But why duplicate the work if it is
already done for you?
-M
On Thursday, September 1, 2016 at 1:37:36 PM UTC-5,
Worth mentioning that the MB "build" is not a build of the code, but an
indexing of the repo for branch status. So if you schedule it to build
every 5 minutes, it will re-index the repo every 5 minutes - but it will only
build the branches that have changes in them, and only when there is a change (or
Workspaces are meant to be ephemeral and can be deleted/removed at any
time. Counting on the same workspace being there for every build is a bad idea
for many reasons - not the least of which is that the point of using a
build system like Jenkins is to have a reliable and repeatable build - if
We have an MB pipeline job and we do not wish to delete old jobs even when
branches are deleted (for historical use) - however, sometimes someone
creates a wrong branch and deletes it, and we are left with an orphaned
job. If I enable cleanup, it will delete ALL jobs without branches - not
what we want.
I cheat using try/catch:
try { echo "MyParam is: " + myParam } catch (ex) { myParam = "default" }
It's ugly, but it works. Obviously you can adapt this approach to whatever
works for you :-)
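Expanded slightly, the same trick can be sketched as a Jenkinsfile fragment (the parameter name MY_PARAM and the default value are assumptions, not from the thread):

```groovy
// Hypothetical sketch: fall back to a default when the parameter has
// not been defined yet (e.g. the first run of a multibranch job).
def myParam
try {
    myParam = MY_PARAM          // throws if the parameter does not exist
} catch (ex) {
    myParam = 'default'         // assumed default value
}
echo "MyParam is: ${myParam}"
```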
-M
On Thursday, September 29, 2016 at 1:37:05 AM UTC-7, Sverre Moe wrote:
>
> I can no longer check if a build
I am making an assumption that "Modern SCM" is some new Jenkins SCM API not
yet supported by most common SCMs - so I am just ignoring it and using
Legacy SCM.
As for ${library.name.version} - you use it if you specify a branch/tag in
Git, or in the URL for SVN.
Basically when you load the library with
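For reference, a minimal sketch of requesting a versioned shared library (the library name and branch here are assumptions):

```groovy
// 'myLib' is an assumed library name; the part after '@' is the
// version that the ${library.myLib.version} placeholder resolves to
// in the library's SCM configuration.
@Library('myLib@master') _
```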
> There's something finicky about the root directory of the SVN
> working copy. But I will give *Load implicitly* a shot too.
>
> Thanks
>
> On Friday, September 16, 2016 at 3:31:07 PM UTC-7, Michael Lasevich wrote:
>>
>> Is it possible that this is some odd issue with '-' symbols
suggests it yields
> c:/Jenkins/workspace/Dev-Snippets/x-pipeline-1/libs/pipelineGlobalHelpers,
> instead of .../x-pipeline-1@libs/pipelineGlobalHelpers, but maybe I am
> misunderstanding *FilePath#child* or *#getRootDir*.
>
> There are still a few more experiments to try.
>
>
> On Friday,
Is it possible that this is some odd issue with '-' symbols in the library
name? I would try a simpler name. Also try implicit loading to simplify the
load...
-M
On Friday, September 16, 2016 at 1:16:05 PM UTC-7, Brian Ray wrote:
>
> Evidently I cannot drive this post widget very well. The
You got it backwards: the master connects to the slave using standard SSH
public/private key auth. So, since the master is connecting to the slave, you
are not putting the slave's private key on the master - you are putting the
master's public key on the slave. While this looks like the same thing
physically, logically it is the opposite.
Out of curiosity, have you gotten the groups to work in this config?
I have this same setup working, but I can only see groups IFF the user
already has admin rights (which is very backwards and useless, as groups
are mostly meaningless if you are already admin). I opened a bug with the
LDAP Plugin team: https://issues.jenkins-ci.org/browse/JENKINS-37858
Just noticed: it appears that there is an issue between the Favorites view and
MultiBranch pipelines.
If I mark the MB parent job as a favorite, it properly shows up in the
Favorites view; however, if I mark one of the branches, it does not. It does,
however, show up in the list of favorites in the user
We have a setup where the git server (Stash) notifies the Jenkins server via
WebHook of commits in certain branches, and an MB job executes builds on
branches matching the change.
This works fine, but we would like to ignore changes not affecting the
build - so only react to changes in certain
Have you considered creating a temp subdirectory with the identity of your OS
(it can be generated automatically), then moving your artifacts into that
directory and archiving the directory? You end up with artifacts with the same
names in different directories - easy to browse and link to, and it does what
you need.
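A rough Jenkinsfile sketch of that idea (using `uname -s` for the OS identity and a `build/output` directory are assumptions):

```groovy
// Hedged sketch: archive same-named artifacts under an OS-specific
// subdirectory so they do not collide across nodes.
node {
    def osDir = sh(script: 'uname -s', returnStdout: true).trim()
    sh "mkdir -p ${osDir} && cp build/output/* ${osDir}/"
    archiveArtifacts artifacts: "${osDir}/**"
}
```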
Is it possible to add parameters to a multibranch pipeline job?
I am happy with the job running with some sort of default values when
running automatically, but it would be nice to be able to have the Jenkinsfile
(or even manual configuration) define parameters that could be optionally
provided when
on a job, and
the job will not run until you fix the problem. I ended up having to delete
the entire MB job to fix it.
-M
On Friday, August 26, 2016 at 12:52:49 PM UTC-7, Michael Lasevich wrote:
>
> Is it possible to add parameters to a multibranch pipeline job?
>
> I am happy with the job ru
This is probably more of a GitLab than a Jenkins discussion, but I think
GitLab can send WebHooks to anything, so it should be able to trigger
Jenkins jobs with a generic WebHook, and I believe there is a way to have
Jenkins accept WebHooks. That said, this only works when there is a direct
network
Can anyone who understands Groovy/CPS look at this and see if this is even
resolvable/fixable?
OK, so after much hair loss and much unpredictable behavior, I have
discovered an ugly bug (feature?): Groovy getters/setters that work
fine under regular Groovy fail miserably under CPS.
There are many approaches to this problem, and I am not sure what your
setup or requirements are, but here is some information that may be helpful.
* It is important to understand that any Jenkins build has two ids - the
actual immutable build_id, which is always consecutive for a job, and
Well, apparently it is a known issue for about a year, with little activity
:-(
https://issues.jenkins-ci.org/browse/JENKINS-31484
-M
On Wednesday, September 28, 2016 at 6:00:40 PM UTC-7, Michael Lasevich
wrote:
>
> Can anyone who understands Groovy/CPS look at this and see if this i
> Martina
>
> On Wednesday, October 26, 2016 at 12:24:48 AM UTC-6, Michael Lasevich
> wrote:
>>
>> So, what is the proper way to initialize the fields in the "Global
>> Variables" found in the /vars dir in library code?
>>
>> I know it is supposed t
t without the class stuff in smaller
> snippets.
>
> Martina
>
>
>
> On Wednesday, October 26, 2016 at 8:54:59 AM UTC-6, Michael Lasevich wrote:
>>
>> No dice,
>>
>> groovy.lang.MissingPropertyException: No such property: myList for class:
>> m
I am not sure the stages you are talking about are the same as what Jenkins
Pipelines calls stages.
Jenkins, at its core, is a job server. In Pipelines, a stage is a segment
of a job. Stages of a build job would be something like "Build Binaries" or
"Upload Build Artifacts" - something that is
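As a minimal illustration (the stage names and commands are assumptions), stages segment a single Pipeline job like this:

```groovy
node {
    stage('Build Binaries') {
        sh 'make'                               // assumed build command
    }
    stage('Upload Build Artifacts') {
        archiveArtifacts artifacts: 'dist/**'   // assumed output path
    }
}
```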
You can/should keep it in Git, but you can use the "Replay" function to test
your modifications before committing them (modify in your IDE, cut-n-paste
into the Replay window to test, then commit when ready).
I am not sure why you would want to keep it out of the git log; that is
exactly what the git log is there for.
So, what is the proper way to initialize the fields in the "Global
Variables" found in the /vars dir in library code?
I know it is supposed to be a singleton instantiated on first call, and I
know I can SET new fields by just setting them, but what if I want them to
have a default value when
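One workaround I can sketch (the variable name myList is an assumption; the underscore-prefixed backing name avoids clashing with the getter name under CPS):

```groovy
// vars/myGlobal.groovy - lazily give a field a default value the
// first time it is read.
def getMyList() {
    if (!binding.hasVariable('_myList')) {
        binding.setVariable('_myList', [])   // assumed default
    }
    return binding.getVariable('_myList')
}
```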
I think pretty much every browser will do XML+XSL conversion without any
plugin, although if you want a "shared" XSL file, you may have to add a
stylesheet tag pointing to it inside your XML.
So all you do is archive your XML with a stylesheet tag pointing to a shared
location for your XSL file, and you are done.
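A rough pipeline sketch of that (report.xml and the XSL URL are assumptions; note the xml-stylesheet instruction must come after the XML declaration if there is one):

```groovy
// Hedged sketch: prepend an xml-stylesheet processing instruction so
// browsers render the archived XML with a shared stylesheet.
def pi = '<?xml-stylesheet type="text/xsl" href="https://example.com/shared/report.xsl"?>\n'
writeFile file: 'report.xml', text: pi + readFile('report.xml')
archiveArtifacts artifacts: 'report.xml'
```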
OK, so this is more than a little creepy
-M
On Tuesday, October 25, 2016 at 5:14:23 PM UTC-7, Jorge Hernandez wrote:
>
> Does anyone have a way to manually remove the MB Pipeline Jobs that does not
> involve restarting the server?
>
> Currently, the only way I know to do it is to delete the directory
Is there a way to enable quiet periods via MB Pipeline and properties
command?
The option is not there in the pipeline syntax codegen and I am wondering
if there is a reason for that...
Thanks,
-M
selector for that) - and
> now you can deploy builds from feature branches for testing BEFORE they are
> merged into develop
>
> If there are any examples / code snippets on how to do this, it will greatly
> help me.
>
> On Wed, Oct 26, 2016 at 8:56 PM, Michael Lasevich <mlas...
You are not doing anything wrong. CPS is just broken in this scenario. You
cannot have both a field and a getter/setter with matching names at the same
time - it gets confused and goes into an infinite loop. Change 'this.foo' to
'this._foo' and it will start working (and you can still use
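A minimal sketch of the workaround (the class and field names are assumptions):

```groovy
// Back the getter/setter with a differently-named private field so
// CPS-transformed code does not recurse between field and getter.
class Config implements Serializable {
    private String _foo
    String getFoo() { return this._foo }
    void setFoo(String value) { this._foo = value }
}
```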
freestyle job at the new pipeline build job, to retrieve artifacts. So what
> am I missing?
>
> Thanks,
>
> Graham
>
> On Wednesday, October 26, 2016 at 4:26:17 PM UTC+1, Michael Lasevich wrote:
>>
>> I am not sure the stages you are talking about are same as what Jenkins
Unfortunately I have not seen that feature yet with
> the Pipelines.
>
> Cheers
>
> On Monday, 7 November 2016 20:53:35 UTC, Michael Lasevich wrote:
>>
>> Ahh, "Job DSL", I remember that. It was a good thing when it was the
>> only game in town,
The functionality in Scriptler and Global Libraries can and does often
intersect. I would like to avoid repeating the same code in multiple places,
and I was wondering if there is any way to call one from the other?
The specific use-case I am looking at is using active parameters in a job
that
There is no reason why you cannot automate the local running of the Jenkins
tests. See how plugin development works - it has a setup to start a local
Jenkins instance with the plugin installed, and it should be relatively simple
to automate that to run test jobs and check their output.
That said,
Not familiar with "Jobs SCM", and unclear as to what you are trying to do -
or how Redis or Docker fits in here. If you have some idea, you are welcome
to write a plugin and make it work however you want, which is why the
plugin system exists - and that answers your question as to why this is done
Ahh, "Job DSL", I remember that. It was a good thing when it was the only
game in town, but (in my opinion) Pipelines pretty much made it obsolete.
Of course it is a matter of opinion, but if you are finding Job DSL too
complicated, Pipelines may be just right for you - it removes a lot of
This error does not appear to be coming from Jenkins (there are no Jenkins
classes in the stack trace), so I would examine the job that is being executed
and check whether the maven options you set are actually taking effect. I do
not see maven classes in the stack trace either, so this may be some tool
Does anyone have a way of manually deleting the MB Pipeline Jobs that does
not involve restarting the server?
Currently, the only way I know to get it done is to delete the directory on
the master filesystem and restart the server - which is obviously less than
ideal on a busy server.
I
server you are serving
Jenkins from.
HTH,
-M
On Tuesday, October 25, 2016 at 8:10:26 PM UTC-7, Michael Lasevich wrote:
>
> I think pretty much every browser will do XML+XSL conversion without any
> plugin, although if you want a "shared" XSL file, you may have to add a
> s
On Friday, October 28, 2016 at 9:36:49 PM UTC-7, John Calsbeek wrote:
>
>
> Shared storage is a potential option, yes, but the tasks in question are
> currently not very fault-tolerant when it comes to network hitches.
>
Well, it would pay to make them more fault-tolerant :-) But even if you
Is there a way to reduce the need for tasks to run on the same slave? I
suspect the issue is having data from the last run - if that is the case,
is there any shared storage solution that may reduce the time difference?
If you can reduce the need for binding tasks to specific nodes, you bypass
I would like to set the "description" field in automatically generated
branches in a Multibranch Pipeline, to add some information like links to the
latest artifacts, links to the latest generated documentation, etc. While I
would settle for one standard default description for all the branches,
ideally it