Re: [ovirt-users] Loopback interface has huge network transactions

2015-01-26 Thread Martin PavlĂ­k
Hi Punit,

it is OK, since ovirt-engine uses the loopback interface for its own purposes, e.g. PostgreSQL
database access. Try running netstat -putna | grep 127.0.0 to see how many
things are attached to it.
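
For example, to list everything bound to or talking over the loopback address
(the ss variant is an assumption for hosts without net-tools installed):

netstat -putna | grep 127.0.0    # sockets and owning processes on 127.0.0.x
ss -tunap | grep 127.0.0         # roughly equivalent, using ss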

If you are interested in checking what is going on in a bit more detail, have a look at
this great how-to:
http://www.slashroot.in/find-network-traffic-and-bandwidth-usage-process-linux
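
As a rough sketch (assuming tools covered there, such as nethogs and iftop, are
installed), you can also watch per-process and per-connection traffic on the
loopback interface directly:

nethogs lo      # per-process bandwidth on lo
iftop -i lo     # per-connection bandwidth on lo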


HTH 

Martin Pavlik

RHEV QE

 On 26 Jan 2015, at 02:24, Punit Dambiwal hypu...@gmail.com wrote:
 
 Hi,
 
 I have noticed that the loopback interface has a huge number of network packets sent and
 received... is this normal, or does it need some tweaking?
 
 1. oVirt 3.5.1
 2. Before the ovirt-engine installation the loopback address did not have that huge
 amount of packets sent/received.
 3. After the oVirt engine install it keeps increasing, and in just 48 hours
 it reached 35 GB...
 
 [root@ccr01 ~]# ifconfig
 eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
 inet 43.252.x.x  netmask 255.255.255.0  broadcast 43.252.x.x
 ether 60:eb:69:82:0b:4c  txqueuelen 1000  (Ethernet)
 RX packets 6605350  bytes 6551029484 (6.1 GiB)
 RX errors 0  dropped 120622  overruns 0  frame 0
 TX packets 2155524  bytes 431348174 (411.3 MiB)
 TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 device memory 0xdf6e-df70
 
 eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
 inet 10.10.0.2  netmask 255.255.255.0  broadcast 10.10.0.255
 ether 60:eb:69:82:0b:4d  txqueuelen 1000  (Ethernet)
 RX packets 788160  bytes 133915292 (127.7 MiB)
 RX errors 0  dropped 0  overruns 0  frame 0
 TX packets 546352  bytes 131672255 (125.5 MiB)
 TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 device memory 0xdf66-df68
 
 lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
 inet 127.0.0.1  netmask 255.0.0.0
 loop  txqueuelen 0  (Local Loopback)
 RX packets 84747311  bytes 40376482560 (37.6 GiB)
 RX errors 0  dropped 0  overruns 0  frame 0
 TX packets 84747311  bytes 40376482560 (37.6 GiB)
 TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
 [root@ccr01 ~]# w
  09:23:07 up 2 days, 11:43,  1 user,  load average: 0.27, 0.30, 0.31
 USER     TTY      LOGIN@   IDLE   JCPU   PCPU WHAT
 root     pts/0    09:16    3.00s  0.01s  0.00s w
 [root@ccr01 ~]#
 
 Thanks,
 Punit

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Loopback interface has huge network transactions

2015-01-26 Thread Nikolai Sednev
 
Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
Message-ID: 
610569491.313815.1422275879664.javamail.zim...@redhat.com 
Content-Type: text/plain; charset=utf-8 

- Original Message - 
 From: Dan Kenigsberg dan...@redhat.com 
 To: Gianluca Cecchi gianluca.cec...@gmail.com, nsof...@redhat.com 
 Cc: users users@ovirt.org, ykap...@redhat.com 
 Sent: Monday, January 26, 2015 2:09:23 PM 
 Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
 
 On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote: 
  Hello, 
  on my all-in-one installation @home I had 3.5.0 with F20. 
  Today I updated to 3.5.1. 
  
  It seems the update modified /etc/multipath.conf, preventing me from using my second 
  disk at all... 
  
  My system has an internal SSD disk (sda) for the OS and one local storage domain, 
  and another disk (sdb) with some partitions (on one of them there is also 
  another local storage domain). 
  
  At reboot I was dropped into emergency mode because the partitions on the sdb disk 
  could not be mounted (they were busy). 
  It took me some time to understand that the problem was that sdb was now being 
  managed as a multipath device, so its partitions were busy and could not be mounted. 
  
  Here you can find how multipath became after update and reboot 
  https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing
   
  
  No device-mapper-multipath update in yum.log 
  
  Also, it seems that after I changed it, it was reverted again at boot (I 
  don't know whether initrd/dracut or vdsmd was responsible), so in the meantime 
  the only thing I could do was to make the file immutable with 
   
  chattr +i /etc/multipath.conf 
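 
  For the record, a small sketch of how to verify and later undo that workaround: 
 
  lsattr /etc/multipath.conf      # the 'i' flag confirms the file is immutable 
  chattr -i /etc/multipath.conf   # clear the flag once a proper fix is in place 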
 
 The supported method of achieving this is to place "# RHEV PRIVATE" on 
 the second line of your hand-modified multipath.conf. 
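 
 For illustration, the top of such a protected, hand-modified file would look 
 roughly like this (a sketch only - the revision header and blacklist are taken 
 from the config quoted below, not required contents): 
 
 # RHEV REVISION 1.1 
 # RHEV PRIVATE 
 
 blacklist { 
     devnode "^(sda|sdb)[0-9]*" 
 } 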
 
 I do not understand why this has happened only after the upgrade to 3.5.1 - 
 3.5.0's vdsm should have reverted your multipath.conf just as well during 
 each startup. 
 
 The good thing is that this annoying behavior has been dropped from the 
 master branch, so 3.6 is not going to have it. Vdsm should not mess with 
 other services' config files while it is running; that logic has moved to 
 `vdsm-tool configure`. 
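 
 As a rough sketch of what that one-time step looks like (the --module name is 
 an assumption - check `vdsm-tool configure --help` on your version): 
 
 vdsm-tool configure --force              # (re)configure everything vdsm manages 
 vdsm-tool configure --module multipath   # or only the multipath configurator, if present 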
 
  
  and so I was able to reboot and verify that my partitions on sdb were OK 
  and that I was able to mount them (to be safe I also ran fsck against them). 
  
  Update ran around 19:20 and finished at 19:34 
  Here is the log in gzip format: 
  https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing
   
  
  Reboot was done around 21:10-21:14 
  
  Here is my /var/log/messages in gzip format, covering the latest days. 
  https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing
   
  
  
  Any suggestion appreciated. 
  
  Current multipath.conf (where I also commented out the getuid_callout that 
  is not used anymore): 
  
  [root@tekkaman setup]# cat /etc/multipath.conf 
  # RHEV REVISION 1.1 
  
  blacklist { 
      devnode "^(sda|sdb)[0-9]*" 
  } 


I think what happened is: 

1. 3.5.1 shipped a new multipath.conf version, 
2. so vdsm upgraded the local file, and 
3. the blacklist above was removed 
   (it should still exist in /etc/multipath.bak). 
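
If you want to see exactly what was dropped, comparing the backup with the 
current file should show it (assuming the backup really is at that path): 

diff -u /etc/multipath.bak /etc/multipath.conf 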

To prevent vdsm from overwriting your local changes, you have to mark the file 
as private, as Dan suggests. 

This seems to be related to the find_multipaths = yes bug: 
https://bugzilla.redhat.com/show_bug.cgi?id=1173290 
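
A quick way to check which value is actually in effect - a sketch, assuming 
multipathd is running; the -k form passes an interactive command to the daemon: 

grep find_multipaths /etc/multipath.conf              # what the file requests 
multipathd -k'show config' | grep find_multipaths     # what the running daemon uses 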

Ben, can you confirm that this is the same issue? 

  
  defaults { 
      polling_interval    5 
      #getuid_callout     "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n" 
      no_path_retry       fail 
      user_friendly_names no 
      flush_on_last_del   yes 
      fast_io_fail_tmo    5 
      dev_loss_tmo        30 
      max_fds             4096 
  } 
 

Regards, 
Nir 



[ovirt-users] Loopback interface has huge network transactions

2015-01-25 Thread Punit Dambiwal
Hi,

I have noticed that the loopback interface has a huge number of network packets sent
and received... is this normal, or does it need some tweaking?

1. oVirt 3.5.1
2. Before the ovirt-engine installation the loopback address did not have that
huge amount of packets sent/received.
3. After the oVirt engine install it keeps increasing, and in just 48 hours
it reached 35 GB...

[root@ccr01 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 43.252.x.x  netmask 255.255.255.0  broadcast 43.252.x.x
ether 60:eb:69:82:0b:4c  txqueuelen 1000  (Ethernet)
RX packets 6605350  bytes 6551029484 (6.1 GiB)
RX errors 0  dropped 120622  overruns 0  frame 0
TX packets 2155524  bytes 431348174 (411.3 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device memory 0xdf6e-df70

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.10.0.2  netmask 255.255.255.0  broadcast 10.10.0.255
ether 60:eb:69:82:0b:4d  txqueuelen 1000  (Ethernet)
RX packets 788160  bytes 133915292 (127.7 MiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 546352  bytes 131672255 (125.5 MiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device memory 0xdf66-df68

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
loop  txqueuelen 0  (Local Loopback)
RX packets 84747311  bytes 40376482560 (37.6 GiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 84747311  bytes 40376482560 (37.6 GiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ccr01 ~]# w
 09:23:07 up 2 days, 11:43,  1 user,  load average: 0.27, 0.30, 0.31
USER     TTY      LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    09:16    3.00s  0.01s  0.00s w
[root@ccr01 ~]#

Thanks,
Punit
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users