Hi,
http://www.linuxtopia.org/online_books/rhel6/rhel_6_lvm_admin/rhel_6_lvm_stripe_extend.html
My thoughts:
· Your stripe count also defines the number of disks you need for proper
extending.
· Performance won't increase much if you define more stripes than
I/O channels.
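As an illustration of the first point, a striped LV can only be extended across new PVs in multiples of its stripe count; a minimal sketch assuming a 2-stripe LV "striped_lv" in volume group "vg0" (all device names and sizes are hypothetical):

```shell
# Hypothetical devices: a 2-stripe LV needs free extents on 2 new PVs.
vgextend vg0 /dev/dasdc1 /dev/dasdd1       # add two fresh PVs to the VG
lvextend -i 2 -L +10G /dev/vg0/striped_lv  # extend, keeping 2 stripes
resize2fs /dev/vg0/striped_lv              # grow the filesystem to match
```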
My 2 cents:
Take 3-4 IFLs and put this Oracle workload on them. Enough for 98% of the coming
load, I think...
Take 2-3 Opteron boxes for the Java tier in front of it.
AND: put some other project on the z box.
Never make the mistake of sizing (especially) a z machine for the peaks
you need. You also shouldn't take
Hello,
We have a test SAP application server. It should go into production soon.
Real hardware at the moment: 2 IFLs, enough RAM; later 6 IFLs and more RAM.
The system had been up for 120 days when, suddenly, heavy CPU consumption started 2 days ago. The system lags like hell.
Searching the internet, I found things like USB and WLAN driver problems and Blaster
hi developers!
...if you send screenshots, please try to reduce the data size.
Any data you send is multiplied by the number of mailing-list users.
If you are working on expensive, critical core IT like s390/z, or with Linux,
you should also know that an image having fewer than 20 different colors and
If you mean legacy PUs and IFL PUs, then the answer is: yes, you can mix them in a mainframe.
BUT:
You cannot mix them in one LPAR.
Maybe IBM will develop that once there are customer requests for hosting z/OS,
OS/390, maybe old MVSes, AND z/Linux in one big z/VM LPAR.
regards
fs
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On behalf of
Rob van der Heij
Sent: Tuesday, 20 July 2004 09:09
To: [EMAIL PROTECTED]
Subject: Re: FW: Performance of large file systems
Frank Schwede, LSY wrote:
I'm sorry, but:
What is going on here?
I was sure someone would give a statement about the real current state of PAV on
ESCON DASDs.
Are there still problems replying to posts?
What I/O performance do you get, and how?
At the moment we are creating a concept with a combination of RAID striping
I am sorry, I don't want to get this wrong:
Have you really been using PAV for 4 years?
Are you able to access ONE DASD in parallel over multiple different ESCON channels at the
same time?
What I/O did you measure?
We get 6-9 MB/s with normal access, and up to 30-40 MB/s over 8 channels and 8 stripes.
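The arithmetic behind those numbers is easy to sanity-check; a small sketch assuming roughly 5 MB/s per ESCON channel (the per-channel rate is an assumption for illustration, not a measured figure):

```shell
# Back-of-the-envelope: aggregate throughput of striped I/O over
# parallel channels (per-channel rate is an assumed value).
per_channel_mb=5   # assumed ~5 MB/s per ESCON channel
channels=8         # 8 channels / 8 stripes as above
echo "$(( per_channel_mb * channels )) MB/s aggregate (roughly)"
```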
It should be no problem to do so.
You need an FTP server with the installation files, an IPL tape or
an HMC+CD to IPL from, and of course one or more 3390 DASDs.
You can migrate this LPAR-installed system into z/VM without any problems
if you take care of some VM and TCP/IP/subnet definitions.
Hi!
This Wintel/mainframe article is a price/performance-only comparison. Even
with transcendental tuning it would be hard to beat Wintel in that kind
of benchmark.
But did you notice that the mainframe costs are the same for 1 server and
for 96 servers...? :) (I don't have the time to
Try a "q vdisk syslim".
Customize with "set vdisk syslim" and "set vdisk userlim".
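As a sketch, the CP side might look like this (the limit values are hypothetical; VDISK limits are given in 512-byte blocks, so check the CP command reference for the exact operands):

```
q vdisk syslim
q vdisk userlim
set vdisk syslim 2097152 blk
set vdisk userlim 524288 blk
```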
regards
frank
-Original Message-
From: Stefan Kopp [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 19 August 2003 12:27
To: [EMAIL PROTECTED]
Subject: vdisk space not available!?
Hi list,
I am trying to define a
Hello!
We are going to implement OSPF routing with zebra in a Red Hat 2.4.9 LPAR.
We have 4 OSA-Express cards installed and IP'd in this LPAR. Only one is
active, connected to the OSPF routers in the backbone...
It is the first step toward VIPA (we think).
We first started zebra with "zebra -d",
but
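For reference, a first-step ospfd configuration under zebra might look like this sketch (the network, area, and password are assumptions, not from the original post):

```
! /etc/zebra/ospfd.conf - minimal sketch (hypothetical values)
hostname ospfd
password zebra
router ospf
 network 10.1.1.0/24 area 0.0.0.0
```

ospfd would then be started alongside the zebra daemon, e.g. with "ospfd -d".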
I also tested (mail) performance with a z/Linux and an Intel/Linux:
z/Linux: 1024 MB RAM, 45 MIPS min, 1 2064 CPU = 250 MIPS max, BogoMIPS about
770 (?)
x86/Linux: 256 MB RAM, PIII 700, about 1200 BogoMIPS (?)
There was a third x86/Linux with a P133 and 64 MB RAM, which was configured to
relay mails.