No, no one has written but you (thank you!). I could write them to a bigger drive, but are they supposed to grow indefinitely, and what exactly are they? Perhaps the reason they've grown so big is that I have started and restarted many topologies many times without properly shutting them down (kill -9), and this has left lots of cruft in /tmp. If these temp files won't grow indefinitely, then I can place them on a 40 GB drive and that should suffice, no? I'm afraid I was getting some weird failures because /tmp was getting filled up.
What would you suggest I do? I'm running on a c3.xlarge in AWS. The OS is installed on an 8 GB drive, but there is a 40 GB ephemeral disk attached to it as well; that should work, no?

Best,
Bryan

On Tue, Mar 3, 2015 at 7:22 PM, clay teahouse <[email protected]> wrote:

> Hello Bryan
> Have you gotten any feedback? You can have the logs generated in a
> different directory by setting -Djava.io.tmpdir on the command line (if
> your issue is with /tmp getting filled up), but I'd like to know how to
> manage these directories regardless of the location.
>
> Clay
>
> On Tue, Mar 3, 2015 at 3:40 AM, Bryan Hernandez <[email protected]> wrote:
>
>> Greetings Storm Users,
>>
>> Does anyone know how to handle the large volume of files written to /tmp
>> when running a Storm topology? My topology is writing GBs worth of data to
>> /tmp and it's filling up the drive.
>>
>> drwxrwxr-x 3 ubuntu ubuntu 4.0K Mar  3 09:33 1484627e-cce2-4055-8b71-04b681b928ad/
>> drwxrwxr-x 3 ubuntu ubuntu 4.0K Mar  3 09:33 50f7a5a4-853f-46b9-8043-603cef62e26f/
>> drwxrwxr-x 3 ubuntu ubuntu 4.0K Mar  3 09:34 529982be-aaed-4252-bcd7-f14f8b91dca2/
>> drwxrwxr-x 4 ubuntu ubuntu 4.0K Mar  3 09:33 66912844-fd34-4160-8d24-70712ff59158/
>> drwxrwxr-x 3 ubuntu ubuntu 4.0K Mar  3 09:33 8268616c-3a4f-4dd3-bee2-83930fd5335b/
>>
>> My topology is supposed to be running perpetually (in local cluster
>> mode), so I need a strategy that cleans up what is not needed such that it
>> doesn't affect what is running.
>>
>> Any suggestions are greatly appreciated.
>>
>> Best,
>> Bryan
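
For reference, a minimal sketch of the -Djava.io.tmpdir approach Clay mentions, assuming the 40 GB ephemeral disk is mounted at /mnt and using placeholder jar and class names:

    # create a writable temp directory on the ephemeral disk first;
    # java.io.tmpdir must point at an existing directory
    mkdir -p /mnt/storm-tmp

    # launch the local-cluster topology with its temp files redirected there
    java -Djava.io.tmpdir=/mnt/storm-tmp -cp my-topology.jar com.example.MyLocalTopology

This should redirect anything the topology creates through java.io.tmpdir, but it does not clean up the UUID directories left behind by earlier kill -9'd runs; those would still need to be removed by hand.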
