A job/pipeline is using a custom workspace. The job is set up in such a way 
that every build creates its own workspace inside the main/default 
workspace. So, if the actual workspace is 
*"D:\jenkins\workspace\<job_name>"*, each run of the job/pipeline 
creates its own workspace like 



*"D:\jenkins\workspace\<job_name>\1"*
*"D:\jenkins\workspace\<job_name>\2"*
*"D:\jenkins\workspace\<job_name>\3"*
...
This locking down of workspaces is implemented as per a requirement. 

If the job is successful, these workspaces are deleted in a stage using 
*deleteDir()*. But if the job fails, I leave them for postmortem.
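
To make the setup concrete, it looks roughly like this (a minimal sketch: `ws()` and `deleteDir()` are standard Pipeline steps, but the stage names and build steps are illustrative):

```groovy
// Sketch of the per-build workspace setup described above.
node {
    // Each build works in its own subdirectory of the job's workspace,
    // e.g. D:\jenkins\workspace\<job_name>\42 for build #42.
    ws("${env.WORKSPACE}\\${env.BUILD_NUMBER}") {
        stage('Build') {
            // ... actual build steps ...
        }
        stage('Cleanup') {
            // Runs only when the earlier stages succeed, so a failed
            // build's workspace is left in place for postmortem.
            deleteDir()
        }
    }
}
```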

Is there any standard/recommended way to clean up the entire workspace on a 
schedule (say, once a month)? I have a pool of slaves, and the 
scheduled job should clean up on all of them. 
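
For reference, the kind of thing I have in mind is a separate cron-triggered pipeline that visits each node and wipes the leftover directories (a rough, untested sketch: the node labels are hypothetical placeholders for my pool, and the path is the one from above):

```groovy
// Rough sketch of a monthly cleanup pipeline over a pool of nodes.
// 'H H 1 * *' fires around midnight on the 1st of each month.
properties([pipelineTriggers([cron('H H 1 * *')])])

def nodeLabels = ['slave-1', 'slave-2']  // hypothetical labels for the pool

for (label in nodeLabels) {
    node(label) {
        // Recursively delete the job workspace (and the per-build
        // subdirectories inside it) on this node. Path is illustrative.
        dir("D:\\jenkins\\workspace\\<job_name>") {
            deleteDir()
        }
    }
}
```

But I am not sure whether this is the recommended approach, or whether there is a standard plugin/feature for it.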

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-users/e4b51b51-1e4c-4ee0-9795-abd99efc3dd6%40googlegroups.com.
