On Thu, Aug 13, 2009 at 10:44 PM, P S<[email protected]> wrote:
> On Thu, Aug 13, 2009 at 4:03 PM, Dean, David (I/S)<[email protected]> wrote:
>> Thank you, the problem however is exactly what you allude to, what size do
>> they REALLY need to be. IBM is comparing WebSphere RAM needs between AIX,
>> Windows, and zLinux. We have historical data that help us compare AIX needs
>> to Windows needs, but no one is able to help us determine how that compares
>> in zLinux land. Subsequently when an app comes along that specifies 8G in
>> AIX or Windows, people are making the leap to say that we will need 8G on
>> zLinux... which is ludicrous... it is an apples-to-oranges comparison. Are
>> there any studies, metrics, stats, whatever that anyone has that could help?
>> Also, for this project I am referring to a heavy WebSphere / Java
>> environment.
>
> I'm sure Velocity can help you here. The fact that Barton or Rob
> haven't weighed in suggests to me that they're on a (rare)
> vacation. Suggest you call (650) 964-8867 and find out.
I can assure you that we certainly don't spend our vacation together, mostly not even at the same time ;-) Fact is that I was rather busy polishing my presentation for SHARE (9211 - Linux on z/VM - Memory Management). And when one of my local hard disks went AWOL, there was little time left to follow the threads on the mailing list.

When I did catch up with the list, I decided the quoted claim was so bogus that I did not even know where to start responding to it. If things were that easy, I would not have a job and I would not need to talk about this at SHARE. Considering that I have a hard time fitting some of this into a 50-minute presentation, I would only confuse you if I tried in this post.

One of the things missing in that magic formula is utilization. With CPU it is similar, but probably easier to understand. When you have an application that runs at 100% all day, you can't just put 20 of them on a 10-way machine because it is "just virtual". When an application is used only 5% of the time, you would hope it only needs the resources 5% of the time. When applications or servers don't behave like that, we say they don't scale: they don't share the resources. Like a reserved parking spot that sits there even when you're only at the office one day per week.

As we found out, much of the middleware does not behave well in this respect. We're working with people in IBM on this, and progress is being made in some areas. When virtual machines do not drop from queue, you can't really overcommit memory and still expect reasonable response times.

The hope is that the estimate of 6 GB is way off and only based on the fact that the machines happened to have that much. That number would be a real worst case and probably does not help you with your TCO analysis.

As Marcy points out, with Java you use the JVM heap size for your sizing. But when those heap sizes were set ad hoc by "one size fits all," that is little help. One of the gotchas here is that initially it will appear to work, because the excess memory is not yet used by Linux. But over time Linux will find ways to use that memory, and performance gets very bad.

Using the performance history data, you can do the sizing the other way around: looking at the trend, you establish a baseline requirement, and then use that to size your JVM heap.

Rob
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/
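P.S. To make the utilization point concrete, here is a back-of-envelope sketch. The workload numbers are invented for illustration, not measurements: if the servers really do share, the capacity you need is closer to the sum of peak-times-utilization than to the sum of the peaks.

    # Hypothetical numbers: 20 application servers, each peaking at 2 CPUs,
    # but each busy only 5% of the day. The naive sum of peaks is 40 CPUs;
    # the shared requirement is a fraction of that (plus headroom, since
    # peaks will sometimes overlap).

    servers = 20
    peak_cpus = 2.0       # CPUs each server needs at its own peak
    utilization = 0.05    # fraction of the day the server is actually busy

    naive = servers * peak_cpus
    shared = servers * peak_cpus * utilization

    print(f"sum of peaks : {naive:.1f} CPUs")   # 40.0
    print(f"shared need  : {shared:.1f} CPUs")  # 2.0, before headroom

The same logic applies to memory, except that memory is harder to give back, which is why servers that never drop from queue break the model.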
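For the Java case, a sketch of how one might derive a zLinux guest size from the JVM heap rather than from the AIX or Windows figure. The overhead numbers below are placeholder assumptions of mine, not a Velocity formula; substitute what your own monitor data shows for JVM and Linux overhead.

    # Rough guest sizing for a single-JVM WebSphere guest, driven by heap.
    # jvm_overhead_mb and linux_base_mb are illustrative guesses only.

    def guest_size_mb(heap_mb, jvm_overhead_mb=256, linux_base_mb=256):
        """Suggested virtual machine size: heap plus JVM and Linux overhead."""
        return heap_mb + jvm_overhead_mb + linux_base_mb

    print(guest_size_mb(1024))  # 1536 MB -- nowhere near the 8G from AIX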
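And the "other way around" sizing from performance history could look like this: take the observed working set over time and use a high percentile as the baseline, rather than whatever the machine happened to be given. The sample data here is made up; feed in your own history.

    # Daily working-set observations (MB) from your performance monitor.
    working_set_mb = [900, 950, 870, 1100, 1020, 980, 1300, 940, 1010, 990]

    def baseline(samples, pct=0.95):
        """Return the pct-th percentile of the observed working set."""
        ordered = sorted(samples)
        idx = min(int(pct * len(ordered)), len(ordered) - 1)
        return ordered[idx]

    print(f"baseline requirement: {baseline(working_set_mb)} MB")  # 1300 MB

That baseline, not the AIX specification, is what I would use to size the JVM heap and from there the guest.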
