Re: ci-builds all 3.6TB disk is full!

2022-04-21 Thread Jonathan Gallimore
Looks like I missed Richard's email yesterday and he's already taken care
of it. Thanks Richard, and sorry for the noise.

Jon

On Thu, Apr 21, 2022 at 4:30 PM Jonathan Gallimore <
jonathan.gallim...@gmail.com> wrote:

> Looks like we're top of the list, and not in a good way... :/
>
> I'm happy to help, but not sure where to start. Does anyone have any
> pointers?
>
> Jon
>
> ---------- Forwarded message ---------
> From: Gavin McDonald 
> Date: Wed, Apr 20, 2022 at 8:27 AM
> Subject: ci-builds all 3.6TB disk is full!
> To: builds 
>
>
> Hi All,
>
> Seems we need to do another cull of projects storing way too much data.
>
> Below is everyone above 1GB. Just FYI, 1GB is fine, and 50GB is likely
> fine, but above that it's just too much. I will be removing 1TB of data
> from wherever I can get it.
>
> Please look after your jobs, and your fellow projects, by limiting what
> you keep.
>
> 1.6T Tomee
> 451G Kafka
> 303G james
> 176G carbondata
> 129G Jackrabbit
> 71G Brooklyn
> 64G Sling
> 64G Netbeans
> 60G Ranger
> 38G AsterixDB
> 33G OODT
> 29G Tika
> 27G Syncope
> 24G Atlas
> 20G IoTDB
> 18G CXF
> 16G POI
> 11G Solr
> 11G Mesos
> 8.7G Royale
> 7.8G Lucene
> 7.6G MyFaces
> 7.6G Directory
> 6.4G OpenJPA
> 6.0G ManifoldCF
> 5.9G ActiveMQ
> 5.7G Logging
> 5.6G Archiva
> 5.5G UIMA
> 5.3G ctakes
> 4.7G Heron
> 4.6G Jena
> 4.0G OpenOffice
> 3.8G Cloudstack
> 3.4G Shiro
> 2.5G Qpid
> 2.1G JSPWiki
> 2.1G JMeter
> 2.0G JClouds
> 1.8G Santuario
> 1.8G OpenMeetings
> 1.8G Camel
> 1.7G Karaf
> 1.7G HttpComponents
> 1.7G Ant
> 1.5G Tapestry
> 1.5G Commons
> 1.3G DeltaSpike
> 1.2G Rya
> 1.2G Aries
> 1.2G Accumulo
> 1.1G PDFBox
>
> --
>
> *Gavin McDonald*
> Systems Administrator
> ASF Infrastructure Team
>
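For reference, a per-project report like the one quoted above is the kind of output `du` produces over the Jenkins jobs directory. A minimal sketch; the path and the `report` helper are illustrative assumptions, not the actual ci-builds setup:

```shell
# Sketch only: reproduce a per-project usage report like the one above.
# The real path on ci-builds is not shown in the thread; this default is
# illustrative.
JENKINS_JOBS=${JENKINS_JOBS:-/var/lib/jenkins/jobs}

report() {
  # Summarize each subdirectory, sort by human-readable size (largest
  # first), and show the top 50 consumers.
  du -sh "$1"/* 2>/dev/null | sort -rh | head -n 50
}

[ -d "$JENKINS_JOBS" ] && report "$JENKINS_JOBS" || true
```

`sort -h` understands the human-readable suffixes (`K`, `M`, `G`, `T`) that `du -h` emits, so the listing comes out largest-first without converting to raw bytes.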


Re: ci-builds all 3.6TB disk is full!

2022-04-20 Thread Richard Zowalla
Thanks for the listing. Retention policies are now in place for our
daily deploy jobs, so we shouldn't accumulate this huge amount of data
over time anymore.

If the retention kicks in, we should be fine.
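The kind of per-job retention Richard describes is typically configured through Jenkins' build discarder. A minimal declarative-pipeline sketch; the numbers and the build step are illustrative assumptions, not TomEE's actual settings:

```groovy
pipeline {
  agent any
  options {
    // Keep only recent build records, and artifacts for even fewer of
    // them. Values below are illustrative, not TomEE's real policy.
    buildDiscarder(logRotator(
      numToKeepStr: '10',        // build records to retain
      artifactNumToKeepStr: '3'  // builds whose archived artifacts are kept
    ))
  }
  stages {
    stage('build') {
      steps { sh 'mvn -B clean deploy' }  // placeholder build step
    }
  }
}
```

Because the `options` block lives in the Jenkinsfile, the policy travels with the branch and survives job cloning, unlike a discarder configured only in the job's UI settings.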

Regards
Richard


On Wednesday, 2022-04-20 at 09:41 +0200, Gavin McDonald wrote:
> Thanks Richard,
> 
> On Wed, Apr 20, 2022 at 9:37 AM Richard Zowalla 
> wrote:
> > Hi,
> > 
> > seems we are the "top" consumers, with 1.6TB of disk usage on the CI
> > infrastructure.
> > 
> > I looked at some of our jobs and found that there is no retention
> > policy in place for some of them. I added a policy similar to what we
> > had in the past for newly created jobs. Looks like the retention
> > policy is not copied when cloning jobs.
> > 
> > In addition, I asked Gavin if he can provide a "du" listing for our
> > jobs, so we can better dig into this issue.
> 
> Here is your listing:
> 
> 834G master-deploy
> 
> of which:
> 
> 445G org.apache.tomee$apache-tomee
> 111G org.apache.tomee$tomee-embedded
> 60G org.apache.tomee$openejb-standalone
> 44G org.apache.tomee$tomee-plume-webapp
> 39G org.apache.tomee$tomee-plus-webapp
> 36G org.apache.tomee$tomee-microprofile-webapp
> 26G org.apache.tomee$tomee-webapp
> 25G org.apache.tomee$openejb-lite
> 5.4G org.apache.tomee$tomee-webaccess
> 5.1G org.apache.tomee$taglibs-shade
> 4.4G org.apache.tomee$openejb-provisionning
> 4.0G org.apache.tomee$openejb-itests-app
> 3.4G org.apache.tomee$openejb-ssh
> 3.3G org.apache.tomee$arquillian-tomee-moviefun-example
> 3.1G org.apache.tomee$cxf-shade
> 2.3G org.apache.tomee$openejb-core
> 
> 597G jakarta-deploy
> 
> of which:
> 
> 354G org.apache.tomee$apache-tomee
> 73G org.apache.tomee$tomee-plume-webapp
> 66G org.apache.tomee$tomee-plus-webapp
> 60G org.apache.tomee$tomee-microprofile-webapp
> 42G org.apache.tomee$tomee-webprofile-webapp
> 4.3G org.apache.tomee$jakartaee-api
> 65M org.apache.tomee$tomee-project
> 44M org.apache.tomee$tomee
> 38M org.apache.tomee$transform
> 20M org.apache.tomee.jakarta$apache-tomee
> 7.2M org.apache.tomee.bom$jaxb-runtime
> 6.5M org.apache.tomee.bom$boms
> 
> 63G tomee-8.x-deploy
> 25G jakarta-wip-9.x-deploy
> 23G master-build-full
> 7.0G site-publish
> 4.2G tomee-8.x-sanity-checks
> 3.1G tomee-7.0.x
> 2.5G master-sanity-checks
> 2.1G pull-request
> 2.0G tomee-8.x-build-full
> 2.0G TOMEE-3872
> 1.7G master-pull-request
> 1.1G tomee-8.x-owasp-check
> 1.1G master-owasp-check
> 1.1G master-build-quick
> 945M tomee-8.x-build-quick
> 35M tomee-jakartaee-api-master
> 27M tomee-patch-plugin-deploy
> 428K clean-repo
> 256K tomee-jenkins-pipelines
> 
> > Regards
> > Richard
> > 
> > -------- Original Message --------
> > From: Gavin McDonald 
> > Reply-To: bui...@apache.org, gmcdon...@apache.org
> > To: builds 
> > Subject: ci-builds all 3.6TB disk is full!
> > Date: Wed, 20 Apr 2022 09:27:28 +0200
> > 
> > > [quoted message trimmed; see the full text earlier in the thread]
> > 
> 
> 




Re: ci-builds all 3.6TB disk is full!

2022-04-20 Thread Gavin McDonald
Thanks Richard,

On Wed, Apr 20, 2022 at 9:37 AM Richard Zowalla  wrote:

> Hi,
>
> seems we are the "top" consumers, with 1.6TB of disk usage on the CI
> infrastructure.
>
> I looked at some of our jobs and found that there is no retention
> policy in place for some of them. I added a policy similar to what we
> had in the past for newly created jobs. Looks like the retention policy
> is not copied when cloning jobs.
>
> In addition, I asked Gavin if he can provide a "du" listing for our
> jobs, so we can better dig into this issue.
>

Here is your listing:

834G master-deploy

of which:

445G org.apache.tomee$apache-tomee
111G org.apache.tomee$tomee-embedded
60G org.apache.tomee$openejb-standalone
44G org.apache.tomee$tomee-plume-webapp
39G org.apache.tomee$tomee-plus-webapp
36G org.apache.tomee$tomee-microprofile-webapp
26G org.apache.tomee$tomee-webapp
25G org.apache.tomee$openejb-lite
5.4G org.apache.tomee$tomee-webaccess
5.1G org.apache.tomee$taglibs-shade
4.4G org.apache.tomee$openejb-provisionning
4.0G org.apache.tomee$openejb-itests-app
3.4G org.apache.tomee$openejb-ssh
3.3G org.apache.tomee$arquillian-tomee-moviefun-example
3.1G org.apache.tomee$cxf-shade
2.3G org.apache.tomee$openejb-core

597G jakarta-deploy

of which:

354G org.apache.tomee$apache-tomee
73G org.apache.tomee$tomee-plume-webapp
66G org.apache.tomee$tomee-plus-webapp
60G org.apache.tomee$tomee-microprofile-webapp
42G org.apache.tomee$tomee-webprofile-webapp
4.3G org.apache.tomee$jakartaee-api
65M org.apache.tomee$tomee-project
44M org.apache.tomee$tomee
38M org.apache.tomee$transform
20M org.apache.tomee.jakarta$apache-tomee
7.2M org.apache.tomee.bom$jaxb-runtime
6.5M org.apache.tomee.bom$boms

63G tomee-8.x-deploy
25G jakarta-wip-9.x-deploy
23G master-build-full
7.0G site-publish
4.2G tomee-8.x-sanity-checks
3.1G tomee-7.0.x
2.5G master-sanity-checks
2.1G pull-request
2.0G tomee-8.x-build-full
2.0G TOMEE-3872
1.7G master-pull-request
1.1G tomee-8.x-owasp-check
1.1G master-owasp-check
1.1G master-build-quick
945M tomee-8.x-build-quick
35M tomee-jakartaee-api-master
27M tomee-patch-plugin-deploy
428K clean-repo
256K tomee-jenkins-pipelines
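Since Richard found that some jobs had no retention policy at all, one quick way to spot such jobs is to scan the job configs for the build-discarder property. A sketch; the jobs path is an illustrative assumption:

```shell
# Sketch only: list job config.xml files with no build discarder
# configured, i.e. jobs that would keep every build forever.
# The jobs path is illustrative, not the actual ci-builds layout.
JENKINS_JOBS=${JENKINS_JOBS:-/var/lib/jenkins/jobs}

# grep -L prints files that do NOT contain the pattern.
find "$JENKINS_JOBS" -name config.xml 2>/dev/null \
  | xargs -r grep -L 'BuildDiscarderProperty' 2>/dev/null || true
```

`jenkins.model.BuildDiscarderProperty` is the element Jenkins writes into a job's `config.xml` when a discard policy is set, so files missing it are candidates for cleanup.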


> Regards
> Richard
>
> -------- Original Message --------
> From: Gavin McDonald 
> Reply-To: bui...@apache.org, gmcdon...@apache.org
> To: builds 
> Subject: ci-builds all 3.6TB disk is full!
> Date: Wed, 20 Apr 2022 09:27:28 +0200
>
> > [quoted message trimmed; see the full text earlier in the thread]
>
>

-- 

*Gavin McDonald*
Systems Administrator
ASF Infrastructure Team