This is a df -h from our production app role:

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda1    10G  2.8G  6.8G  29% /
varrun      851M   56K  851M   1% /var/run
varlock     851M     0  851M   0% /var/lock
udev        851M   16K  851M   1% /dev
devshm      851M     0  851M   0% /dev/shm
/dev/sda2   147G  188M  140G   1% /mnt
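[Editor's note: before treating the 2.8G as a single number, it can help to see which directories on the root volume account for it. A minimal sketch using GNU du; the depth and result count are arbitrary choices, not from this thread:]

```shell
#!/bin/sh
# Sketch: list the largest directories under a given root, to see what is
# actually filling the volume that gets rebundled. Defaults are illustrative.
ROOT="${1:-/}"
DEPTH="${2:-2}"
# -x stays on one filesystem, so /mnt and the tmpfs mounts are skipped;
# sizes are in kilobytes, sorted largest-first.
du -xk --max-depth="$DEPTH" "$ROOT" 2>/dev/null | sort -rn | head -n 15
```

The top few entries usually make it obvious whether the git checkouts, logs, or apt packages dominate the image size.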
So 2.8G is too big, and that caused this issue? What do we need to keep it under to get the sync down to 10 minutes?

On Jun 17, 7:46 am, Donovan Bray <[email protected]> wrote:
> What are the size recommendations? We just have our Rails apps and
> applications installed via apt-get. There's no static content we could
> move to S3 that isn't already on S3.
>
> If the problem is the git checkouts, then I guess we should investigate
> an EBS array.
>
> On Jun 17, 2009, at 12:34 AM, Alex Kovalyov <[email protected]> wrote:
>
> > The instance is too big, and this is causing your recent issues with
> > sync. You need to consider EBS or S3 as storage.
> >
> > On Jun 16, 22:22, Donovan <[email protected]> wrote:
> >> From initiation of the "synchronize all" to the new app server being
> >> up and ready to go is taking the better part of an hour to complete.
> >>
> >> Jun 15, 2009 23:05:21  Info     i-9b3f75f2/trap-rebundle.sh
> >>   Received rebundle trap from Scalr. (New role: ProductionApp3).
> >>
> >> Jun 15, 2009 23:39:54  Warning  PollerProcess
> >>   The instance ('ami-037c9a6a') 'i-9b3f75f2' will be terminated after
> >>   instance 'i-f185da98' boots up.
> >>
> >> Jun 15, 2009 23:52:22  Info     i-264b2d4f/trap-hostup.sh
> >>   10.252.53.203 UP. Scalr notified me that 10.252.53.203 of role app
> >>   (Custom role: ProductionApp3) is up.

--
You received this message because you are subscribed to the Google Groups "scalr-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/scalr-discuss?hl=en
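[Editor's note: the thread's 10-minute question can be given a rough back-of-envelope answer. The log timestamps above span roughly 47 minutes for 2.8G used; assuming sync time scales roughly linearly with used size (a strong assumption, not confirmed in the thread), the target comes out to:]

```shell
#!/bin/sh
# Back-of-envelope only: 2.8G took ~47 min (23:05 -> 23:52 in the logs),
# so a linear scaling estimate for a 10-minute sync is:
awk 'BEGIN { printf "%.2f G\n", 2.8 * 10 / 47 }'
```

That is, on the order of 0.6G used, if the linear assumption holds; bundling and S3 upload overhead mean the real relationship is unlikely to be exactly linear.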
