Hi,

it seems we are the "top" consumer, with 1.6 TB of disk usage on the CI
infrastructure.

I looked at some of our jobs and found that there is no retention
policy in place for some of them. For the newly created jobs I added a
policy similar to what we had in the past. It looks like the retention
policy is not copied when cloning jobs.

In addition, I asked Gavin if he can provide a "du" listing for our
jobs, so we can dig into this issue further.
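Such a "du" listing can be sketched as a one-liner; note the exact job
directory layout on the CI hosts is an assumption here, not something
stated in this thread:

```shell
# Per-directory disk usage, human-readable sizes, largest first.
# Run from inside the (hypothetical) Jenkins jobs directory;
# substitute the real path on the CI host.
du -sh -- */ 2>/dev/null | sort -rh
```

Run from inside the jobs directory, this prints one line per job,
which should make the biggest offenders obvious.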

Regards,
Richard

-------- Original Message --------
From: Gavin McDonald <[email protected]>
Reply-To: [email protected], [email protected]
To: builds <[email protected]>
Subject: ci-builds all 3.6TB disk is full!
Date: Wed, 20 Apr 2022 09:27:28 +0200

> Hi All,
> 
> Seems we need to do another cull of projects storing way too much
> data.
> 
> Below is everyone above 1GB. Just FYI, 1GB is fine, and likely 50GB
> is fine, but above that it's just too much. I will be removing 1TB
> of data from wherever I can get it.
> 
> Please look after your jobs, and your fellow projects, by limiting
> what you keep.
> 
> 1.6T    Tomee
> 451G    Kafka
> 303G    james
> 176G    carbondata
> 129G    Jackrabbit
> 71G     Brooklyn
> 64G     Sling
> 64G     Netbeans
> 60G     Ranger
> 38G     AsterixDB
> 33G     OODT
> 29G     Tika
> 27G     Syncope
> 24G     Atlas
> 20G     IoTDB
> 18G     CXF
> 16G     POI
> 11G     Solr
> 11G     Mesos
> 8.7G    Royale
> 7.8G    Lucene
> 7.6G    MyFaces
> 7.6G    Directory
> 6.4G    OpenJPA
> 6.0G    ManifoldCF
> 5.9G    ActiveMQ
> 5.7G    Logging
> 5.6G    Archiva
> 5.5G    UIMA
> 5.3G    ctakes
> 4.7G    Heron
> 4.6G    Jena
> 4.0G    OpenOffice
> 3.8G    Cloudstack
> 3.4G    Shiro
> 2.5G    Qpid
> 2.1G    JSPWiki
> 2.1G    JMeter
> 2.0G    JClouds
> 1.8G    Santuario
> 1.8G    OpenMeetings
> 1.8G    Camel
> 1.7G    Karaf
> 1.7G    HttpComponents
> 1.7G    Ant
> 1.5G    Tapestry
> 1.5G    Commons
> 1.3G    DeltaSpike
> 1.2G    Rya
> 1.2G    Aries
> 1.2G    Accumulo
> 1.1G    PDFBox
> 
> -- 
> 
> *Gavin McDonald*
> Systems Administrator
> ASF Infrastructure Team
