We saw something similar once. It was a combination of a bad MPROUTE/TCPIP
configuration on the VM side and a bad config on the Cisco side.

Are you using MPROUTE? And how does the response time between the guests
compare to guest-to-world?
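
If you want to put numbers on that, a quick comparison from one of the
guests is enough (the two target addresses below are placeholders: one
of your other guests, and something on the far side of the switch):

  # guest-to-guest: stays inside the shared OSA port, so this should
  # sit well under 1 ms if the VM side is healthy
  ping -c 20 10.15.4.5

  # guest-to-world: crosses the OSA and the Cisco port, so a big gap
  # between the two numbers points at the OSA/Cisco boundary
  ping -c 20 10.15.0.1

If both are equally bad, look at the VM side first; if only the second
is bad, look at the switch.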

> -----Original Message-----
> From: valentin.zagar [mailto:[EMAIL PROTECTED]
> Sent: Thursday, February 26, 2004 3:57 AM
> To: [EMAIL PROTECTED]
> Subject: VM Linux OSA Problems
>
>
> Hello all!
>
> Please help :)))
>
> We're running SuSE SLES8 (64-bit, kernel 2.4.21.95) on z/VM
> (64-bit, V4.4). Hardware: z900 (2064-102).
>
> ip add:
> eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
>     link/ether 00:02:55:09:75:6f brd ff:ff:ff:ff:ff:ff
>     inet 10.15.4.4/16 brd 10.15.255.255 scope global eth0
>     inet6 fe80::2:5500:4509:756f/64 scope link
> (Two of the Linux guests show eth0: <BROADCAST,MULTICAST,PROMISC,UP> --- why?)
> Also, all the guests have the same MAC address.
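>
> (In case it helps: as far as we understand, the shared MAC is normal
> when guests share one OSA port in QDIO mode - the card owns the MAC.
> To check the PROMISC flag by hand we use plain iproute2 commands;
> clearing it is just a test, we don't know yet if it matters:)
>
>   ip link show eth0             # the flags line shows PROMISC
>   ip link set eth0 promisc off  # clear it on the two affected guests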
>
> chandev.conf:
>
> noauto;qeth0,0xd508,0xd509,0xd50a;add_parms,0x10,0xd508,0xd50a,portname:ZVMPORT0
>
>
> We have already set up 7 Linux clones. Everything is OK, no error
> messages, except bad network response. We're only running one
> eth0 "card" on each Linux (no VLANs, etc.). We use an OSA Fast
> Ethernet in QDIO mode (we don't intend to run more than 20 Linux
> guests), set to 100 Mbit full duplex in a 100 Mbit LAN environment.
> The OSA is plugged into a Cisco router/switch (the Cisco port is
> also set to 100 Mbit full duplex). When trying to reach the Linux
> guests via the network, I get bad response times (no packet loss).
> Ping response is usually 0.5 to 0.7 ms, but with peaks up to 6000 ms:
>
> 64 bytes from 10.15.4.4: icmp_seq=950 ttl=64 time=0.767 ms
> 64 bytes from 10.15.4.4: icmp_seq=951 ttl=64 time=88.1 ms
> 64 bytes from 10.15.4.4: icmp_seq=952 ttl=64 time=3.67 ms
> 64 bytes from 10.15.4.4: icmp_seq=953 ttl=64 time=352 ms
> 64 bytes from 10.15.4.4: icmp_seq=954 ttl=64 time=24.9 ms
> 64 bytes from 10.15.4.4: icmp_seq=955 ttl=64 time=40.7 ms
> 64 bytes from 10.15.4.4: icmp_seq=956 ttl=64 time=48.2 ms
> 64 bytes from 10.15.4.4: icmp_seq=957 ttl=64 time=132 ms
> 64 bytes from 10.15.4.4: icmp_seq=958 ttl=64 time=849 ms
> 64 bytes from 10.15.4.4: icmp_seq=959 ttl=64 time=944 ms
> 64 bytes from 10.15.4.4: icmp_seq=960 ttl=64 time=140 ms
> 64 bytes from 10.15.4.4: icmp_seq=961 ttl=64 time=32.9 ms
> 64 bytes from 10.15.4.4: icmp_seq=962 ttl=64 time=430 ms
> 64 bytes from 10.15.4.4: icmp_seq=963 ttl=64 time=986 ms
> 64 bytes from 10.15.4.4: icmp_seq=964 ttl=64 time=1038 ms
> 64 bytes from 10.15.4.4: icmp_seq=965 ttl=64 time=38.3 ms
> 64 bytes from 10.15.4.4: icmp_seq=966 ttl=64 time=40.3 ms
> 64 bytes from 10.15.4.4: icmp_seq=967 ttl=64 time=430 ms
> 64 bytes from 10.15.4.4: icmp_seq=968 ttl=64 time=38.0 ms
> 64 bytes from 10.15.4.4: icmp_seq=969 ttl=64 time=71.4 ms
> 64 bytes from 10.15.4.4: icmp_seq=970 ttl=64 time=26.4 ms
> 64 bytes from 10.15.4.4: icmp_seq=971 ttl=64 time=48.4 ms
> 64 bytes from 10.15.4.4: icmp_seq=972 ttl=64 time=82.3 ms
> 64 bytes from 10.15.4.4: icmp_seq=973 ttl=64 time=25.3 ms
> 64 bytes from 10.15.4.4: icmp_seq=974 ttl=64 time=132 ms
> 64 bytes from 10.15.4.4: icmp_seq=975 ttl=64 time=14.7 ms
> 64 bytes from 10.15.4.4: icmp_seq=976 ttl=64 time=19.0 ms
> 64 bytes from 10.15.4.4: icmp_seq=977 ttl=64 time=447 ms
> 64 bytes from 10.15.4.4: icmp_seq=978 ttl=64 time=1583 ms
> 64 bytes from 10.15.4.4: icmp_seq=979 ttl=64 time=583 ms
> 64 bytes from 10.15.4.4: icmp_seq=980 ttl=64 time=1641 ms
> 64 bytes from 10.15.4.4: icmp_seq=981 ttl=64 time=641 ms
> 64 bytes from 10.15.4.4: icmp_seq=982 ttl=64 time=1389 ms
> 64 bytes from 10.15.4.4: icmp_seq=983 ttl=64 time=389 ms
> 64 bytes from 10.15.4.4: icmp_seq=984 ttl=64 time=7272 ms
> 64 bytes from 10.15.4.4: icmp_seq=985 ttl=64 time=6272 ms
> 64 bytes from 10.15.4.4: icmp_seq=986 ttl=64 time=5272 ms
> 64 bytes from 10.15.4.4: icmp_seq=987 ttl=64 time=4273 ms
> 64 bytes from 10.15.4.4: icmp_seq=988 ttl=64 time=3273 ms
> 64 bytes from 10.15.4.4: icmp_seq=989 ttl=64 time=2273 ms
> 64 bytes from 10.15.4.4: icmp_seq=990 ttl=64 time=1273 ms
> 64 bytes from 10.15.4.4: icmp_seq=991 ttl=64 time=273 ms
> 64 bytes from 10.15.4.4: icmp_seq=992 ttl=64 time=5140 ms
> 64 bytes from 10.15.4.4: icmp_seq=993 ttl=64 time=4141 ms
> 64 bytes from 10.15.4.4: icmp_seq=994 ttl=64 time=3141 ms
> 64 bytes from 10.15.4.4: icmp_seq=995 ttl=64 time=2141 ms
> 64 bytes from 10.15.4.4: icmp_seq=996 ttl=64 time=1141 ms
> 64 bytes from 10.15.4.4: icmp_seq=997 ttl=64 time=141 ms
> 64 bytes from 10.15.4.4: icmp_seq=998 ttl=64 time=322 ms
> 64 bytes from 10.15.4.4: icmp_seq=999 ttl=64 time=723 ms
> 64 bytes from 10.15.4.4: icmp_seq=1000 ttl=64 time=19.9 ms
> 64 bytes from 10.15.4.4: icmp_seq=1001 ttl=64 time=0.749 ms
> 64 bytes from 10.15.4.4: icmp_seq=1002 ttl=64 time=0.591 ms
> 64 bytes from 10.15.4.4: icmp_seq=1003 ttl=64 time=0.638 ms
> 64 bytes from 10.15.4.4: icmp_seq=1004 ttl=64 time=0.649 ms
>
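> To rule out a duplex mismatch we plan to read the error counters on
> the Cisco port and the qeth view in the guest (the interface name on
> the switch is just an example, and /proc/qeth is what the 2.4 qeth
> driver exposes, if we read the device driver book correctly):
>
>   ! on the Cisco switch: late collisions or CRC errors
>   ! growing on the port usually mean a duplex mismatch
>   show interfaces FastEthernet0/1
>
>   # on the Linux guest: device, CHPID, portname and link type
>   cat /proc/qeth
>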
> When trying to access a Linux guest via HTTP, responses are also
> bad. They get better after re-IPL-ing the VM. We also had some
> problems with storage: every Monday at 1:15 AM, Linux (we don't
> know why) requests a lot of storage.
> Messages around 1:15:
> Feb 23 00:59:00 creepy4 /USR/SBIN/CRON[15972]: (root) CMD ( rm -f /var/spool/cron/lastrun/cron.hourly)
> Feb 23 01:00:00 creepy4 /USR/SBIN/CRON[15976]: (root) CMD ( /usr/lib64/sa/sa1 )
> Feb 23 01:00:00 creepy4 /USR/SBIN/CRON[15978]: (root) CMD ( test -x /usr/lib/secchk/security-control.sh && /usr/lib/secchk/security-control.sh weekly &)
> Feb 23 01:10:00 creepy4 /USR/SBIN/CRON[16181]: (root) CMD ( /usr/lib64/sa/sa1 )
> Feb 23 01:20:00 creepy4 /USR/SBIN/CRON[16208]: (root) CMD ( /usr/lib64/sa/sa1 )
> Feb 23 01:30:00 creepy4 /USR/SBIN/CRON[16215]: (root) CMD ( /usr/lib64/sa/sa1 )
> Feb 23 01:40:02 creepy4 /USR/SBIN/CRON[16240]: (root) CMD ( /usr/lib64/sa/sa1 )
> Feb 23 01:50:01 creepy4 /USR/SBIN/CRON[16268]: (root) CMD ( /usr/lib64/sa/sa1 )
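>
> To find the Monday job we will run the weekly scripts by hand while
> watching memory (security-control.sh is the one from the log above;
> the cron directories are the usual SuSE ones):
>
>   ls -l /etc/cron.weekly/ /etc/cron.d/
>   free -m                                     # memory before
>   /usr/lib/secchk/security-control.sh weekly  # the job from the log
>   free -m                                     # memory after
>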
> The VM stopped dying after we drastically increased the amount of
> storage. After Mondays, the allocated page space stays quite high:
> Q ALLOC PAGE
>             EXTENT EXTENT  TOTAL  PAGES   HIGH    %
> VOLID  RDEV  START    END  PAGES IN USE   PAGE USED
> ------ ---- ------ ------ ------ ------ ------ ----
> 440RES 3040    257    390  24120  22190  24119  91%
> 440W02 405E   1500   3337 330840 176063 330835  53%
> 440W03 307D    301   2300 360000 165157 360000  45%
>                           ------ ------        ----
> SUMMARY                   714960 363410         50%
> USABLE                    714960 363410         50%
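>
> (To watch the paging live we use the standard CP INDICATE commands
> from a suitably privileged user - INDICATE PAGING needs a higher
> privilege class than plain class G, if we read the manual right:)
>
>   CP INDICATE LOAD           shows overall CPU and paging load
>   CP INDICATE PAGING WAIT    lists users currently in page wait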
>
> We don't know what to do... we've tried several things. Do you
> have any ideas? Hints?
>
>
> Because we got a hint that our allocated storage was maybe too
> big, we lowered it to 128 MB, but then everything was even worse.
> So we set every Linux to 256 MB of storage.
> An interesting example: we have 3.6 GB of real memory dedicated
> to the VM LPAR. We run 7 Linux boxes in this VM and allocated
> 256 MB of memory to each, so we don't exceed the total physical
> memory. Now, I start only one Linux with nothing but Apache 1.3
> with PHP and MySQL. Ping response is the same (bad, up to
> 1500 ms), and I get bad response over SSH and bad HTML response.
> Then I start another Linux, and during its startup the first
> one's responses get even worse.
> I can't explain this. I would understand it if all of the Linux
> machines were running. I think there's something wrong with VM
> or our VM configuration - like VM can't give the Linux guests
> all the resources assigned to them.
> We're reading all kinds of tips in every book/webpage, but we're
> afraid there's something seriously wrong - polishing comes
> later... :)
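>
> (To double-check what each guest really gets, we compare the CP
> view with the Linux view from the guest's 3270 console:)
>
>   #CP QUERY VIRTUAL STORAGE    should answer STORAGE = 256M
>   free -m                      total inside Linux, a bit under 256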
>
>
> Any ideas from anyone?
>
> Thanks!
>
> Regards,
>
> Valentin Zagar
> Informatika d.d.
> Slovenia
>
> P.S.: Excuse my bad English...
>
