iscsi multipath failure with libvirtError: Failed to open file '/dev/mapper/Mar': No such file or directory

2015-03-23 Thread mad Engineer
Hello all,
  I know this issue is related to libvirt, but I don't know
where else to ask.

I have CentOS 6.6 running KVM as a compute node in OpenStack Icehouse.

When I try to attach a volume to an instance, the libvirt log shows:

2596: error : virStorageFileGetMetadataRecurse:952 : Failed to open
file '/dev/mapper/Mar': No such file or directory

This does not always happen, but when it does, no one is able to
attach volumes to instances.


We are using an EMC VNX as the storage backend.
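Since the failure is intermittent and the /dev/mapper node is simply absent when it happens, this looks like a race where the attach runs before multipathd has created the map. A rough diagnostic sketch; the helper name, timeout, and rescan sequence are my own, not from any OpenStack or libvirt tooling:

```shell
# Hypothetical helper: wait briefly for a /dev/mapper node to appear,
# nudging the iSCSI initiator and multipathd while waiting (needs root).
wait_for_dm() {
    dev="$1"
    tries="${2:-10}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        [ -e "$dev" ] && return 0
        iscsiadm -m session --rescan >/dev/null 2>&1
        multipath -r >/dev/null 2>&1
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Demo on a node that certainly exists:
wait_for_dm /dev/null 1 && echo "device present"
```

If `wait_for_dm /dev/mapper/Mar 15` fails while `multipath -ll` still shows the LUN, the race is elsewhere, e.g. the blacklist section eating the real device.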


multipath.conf


blacklist {
# Skip the files under /dev that are definitely not FC/iSCSI devices
# Different systems may need different customization
devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
devnode ^hd[a-z][0-9]*
devnode ^cciss!c[0-9]d[0-9]*[p[0-9]*]

# Skip the LUNZ device from VNX
device {
vendor DGC
product LUNZ
}
}

defaults {
user_friendly_names no
flush_on_last_del yes
}

devices {
# Device attributes for EMC CLARiiON and VNX series (ALUA)
device {
vendor DGC
product .*
product_blacklist LUNZ
path_grouping_policy group_by_prio
path_selector "round-robin 0"
path_checker emc_clariion
features "1 queue_if_no_path"
hardware_handler "1 alua"
prio alua
failback immediate
}
}


Can anyone help me with this issue?
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Number of threads for virtual machine process

2015-02-16 Thread mad Engineer
Thanks for the response.
   Can a VM process create any number of extra threads based
on I/O requirements, or is there a relation between the number of
vCPUs allowed and these extra threads?

I assume these threads are for disk I/O, and that for networking it
uses the vhost-<pid> process.


Thanks for your help


On Mon, Feb 16, 2015 at 9:21 PM, Paolo Bonzini pbonz...@redhat.com wrote:


 On 16/02/2015 11:54, mad Engineer wrote:
 Hello all,
  On a RHEL 6.4 server I created a VM with 2 vCPUs,
 expecting to see a single process with 2 threads on the host.
 But

   top -p pid-of-qemu -H

   shows many threads being created and destroyed at random;
 the pid-of-qemu stays the same, but the other threads' PIDs keep
 changing.

 The extra threads are doing I/O.

 Paolo


Number of threads for virtual machine process

2015-02-16 Thread mad Engineer
Hello all,
 On a RHEL 6.4 server I created a VM with 2 vCPUs,
expecting to see a single process with 2 threads on the host.
But

  top -p pid-of-qemu -H

  shows many threads being created and destroyed at random;
the pid-of-qemu stays the same, but the other threads' PIDs keep
changing.

Can someone help me understand this better? I believed that a virtual
machine with 2 vCPUs would have 2 threads, but the total thread count
has sometimes reached up to 8.

top output from the host while running multiple dd commands:

6 qemu  20   0 2480m 381m 4664 R 100.2  3.9   1:57.97
/usr/libexec/qemu-kvm -name HAP -S -M rhel6.1.0 -enable-kvm -m 2048
-smp 2,sockets=2,cores=1,threads=1 -uuid 2b8c1cd4-2de7-e59
7 qemu  20   0 2480m 381m 4664 R 99.6  3.9   0:34.97
/usr/libexec/qemu-kvm -name HAP -S -M rhel6.1.0 -enable-kvm -m 2048
-smp 2,sockets=2,cores=1,threads=1 -uuid 2b8c1cd4-2de7-e594
11101 qemu  20   0 2480m 381m 4664 S  0.0  3.9   0:13.33
/usr/libexec/qemu-kvm -name HAP -S -M rhel6.1.0 -enable-kvm -m 2048
-smp 2,sockets=2,cores=1,threads=1 -uuid 2b8c1cd4-2de7-e594
31278 qemu  20   0 2480m 381m 4664 S  0.0  3.9   0:00.00
/usr/libexec/qemu-kvm -name HAP -S -M rhel6.1.0 -enable-kvm -m 2048
-smp 2,sockets=2,cores=1,threads=1 -uuid 2b8c1cd4-2de7-e594


HT is on on the host.

Even though the total CPU consumed by the first two threads is 200%, I
don't understand why there are so many extra threads.
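The thread count can be watched directly: on Linux, every thread of a process appears as a directory under /proc/<pid>/task. A small sketch, demonstrated on the shell's own PID since the qemu-kvm PID differs per host:

```shell
# Count the threads of a process by listing /proc/<pid>/task.
count_threads() {
    ls "/proc/$1/task" | wc -l
}

# Demo on the current shell; run against the qemu-kvm PID this should
# show the count rising and falling with the dd workload.
count_threads "$$"
```

Watching this in a loop during the dd run should show the transient I/O worker threads come and go on top of the 2 vCPU threads and the main loop thread.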

Thanks


CentOS 6.6 on CentOS 6.6: internal error, process exited

2015-01-31 Thread mad Engineer
I have a server with CentOS 6.5 installed that was later upgraded to
6.6 and restarted. I installed KVM, created a CentOS 6.5 guest, and
later upgraded the guest to 6.6.
Now, after a host reboot, I am not able to start the guest; it shows:


Internal error process exited while reading console log output
Supported machines are:
pc   RHEL 6.5 pc
RHEL 6.5.0  RHEL 6.5 pc
RHEL 6.4.0  RHEL 6.4 pc 


By changing the machine type in the XML config from "RHEL 6.6.0" to
"RHEL 6.5.0" the VM starts and works.

But why is a 6.6 guest not supported on a 6.6 host? I suspect it is
because of some upgrade issue from 6.5 to 6.6. Can someone please help
me understand this?
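For reference, the workaround can be scripted instead of editing the XML by hand. A sketch with sed; I am assuming the actual machine attribute values are the lowercase rhel6.6.0/rhel6.5.0 that qemu-kvm uses, and the snippet operates on a stand-in string rather than a real domain:

```shell
# Downgrade the machine type in a libvirt domain XML fragment.
# Real usage would be roughly:
#   virsh dumpxml <domain> | sed <expr> > fixed.xml && virsh define fixed.xml
xml='<type arch="x86_64" machine="rhel6.6.0">hvm</type>'
printf '%s\n' "$xml" | sed 's/machine="rhel6.6.0"/machine="rhel6.5.0"/'
# -> <type arch="x86_64" machine="rhel6.5.0">hvm</type>
```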


Thanks


Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread mad Engineer
I am running RHEL 6.5 as both host and guest on an HP server.
The server has 128 G RAM and 48 cores (with HT enabled).

3 VMs are running; 2 are pinned to the first 24 pCPUs with proper NUMA pinning.

Guests:

VM1:
6 vCPUs pinned to 6 pCPUs on NUMA node 1, with 16 G RAM

VM2:
6 vCPUs pinned to 6 pCPUs on NUMA node 0, with 16 G RAM

VM3:
2 vCPUs, no pinning, 4 G RAM

HOST:
The host has 10 free CPUs + 24 HT threads that are not allocated and are available.
The host also runs a small single-threaded application that uses ~4 G RAM.

Total resources left to the host are 10 CPUs + 24 HT threads = 34, plus 92 G of
unallocated RAM (the VMs don't even use 70% of their allocated RAM). KSM is not running.

Networking:
A Linux bridge connected to the 1 Gbps eth0, with the IP assigned on eth0
(this IP is used to reach the application running on the host).
All VMs use virtio, and vhost is on.

Traffic on the virtual machines is ~3 MB/s, and combined traffic on the host is ~14 MB/s.

The vhost-<pid-of-qemu-process> thread sometimes uses ~35% CPU.


There is no packet loss, drop, or latency, but with the same setup and
the same VM sizing on VMware, the application performs better. The
only difference is that on VMware the application running on the host
has moved to a fourth VM, so there are 4 VMs there.
The application gives better numbers on VMware: on KVM the number is
310, on VMware it is 570. The application uses UDP to communicate.

I tried removing vhost; the value is still the same. (I assume the
vhost-net UDP issue is solved.)

Thanks for any help


Re: [ovirt-users] Network performance drop when compared to other hypervisor with vhost_net on for UDP

2015-01-14 Thread mad Engineer
Thanks Martin,
  How can we see the changes made by tuned? For the
virtual-guest profile I see it changes the I/O scheduler to deadline.
Is there any way to see which parameters each profile is going to
change?
Thanks

On Wed, Jan 14, 2015 at 6:14 PM, Martin Pavlík mpav...@redhat.com wrote:
 Hi,

 from the top of my head you could try to play with tuned both with guest and 
 host

 ###Install###
  yum install tuned
  /etc/init.d/tuned start
  chkconfig tuned on

 ###usage###
 list the profiles:
  tuned-adm list

 change your profile:
 tuned-adm profile throughput-performance

 maybe try to experiment with other profiles.

 HTH

 Martin Pavlik
 RHEV QE



Again with same Cgroup issue :)

2014-12-17 Thread mad Engineer
Hello All,
 From the last couple of days i have been spamming this
mailing list with request for configuring Cgroup with libvirtd on
Centos systems.
 I still can not find a permanent solution to limit host
RAM to particular value,tried creating a separate hierarchy mykvm
and changed in sysconfig/libvirtd after that vm's memory cgroup
reflects this.But it is not obeying memory.limit_in_bytes set in
mykvm group,i als specified it in cgrules.conf and restarted it.If i
change that in /cgconfig/memory/mykvm/libvirt/qemu/memory.limit_in_bytes
 its working.But that is dynamic as i am not able to find a way to
mention that in cgconfig.conf.

How can i make sub hierarchies follow what is set in parent cgroup?
eg: if change variables in /cgconfig/memory/mykvm
then all instances coming under /cgconfig/memory/mykvm/libvirt/qemu/
should follow that value..How is it possible.Can you please help
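For what it's worth: in cgroup v1, a parent's memory limit only constrains descendant groups when hierarchical accounting is enabled on the subtree, and it must be turned on before child groups exist. A sketch of what that could look like in cgconfig.conf; the group name is from the post, the limit value is illustrative, and I have not verified this exact stanza on CentOS 6:

```
group mykvm {
    memory {
        # make descendants (libvirt/qemu/<vm>) count against this limit
        memory.use_hierarchy = "1";
        memory.limit_in_bytes = 61G;
    }
}
```

With use_hierarchy set, changing the parent's limit_in_bytes should then apply to everything under /cgroup/memory/mykvm/libvirt/qemu/.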


C group hierarchy and libvirtd

2014-12-15 Thread mad Engineer
On Centos 6.4 x64,with libvirt-0.10.2-18.el6.x86_64  i am trying to
set memory.limit_in_bytes for all qemu process.
changed cgconfig.conf

group mygroup {
    perm {
        admin {
            uid = root;
            gid = root;
        }
        task {
            uid = qemu;
            gid = kvm;
        }
    }
    memory {
        memory.limit_in_bytes = 61G;
    }
}

I also added CGROUP_DAEMON=memory:/mygroup to /etc/sysconfig/libvirtd,
added a matching rule in cgrules.conf, and then restarted the services.

Now I can see that newly created virtual machines use the cgroup
hierarchy /cgroup/memory/mygroup/libvirt/qemu/virtualmachine1/ instead
of /cgroup/memory/libvirt/qemu/virtualmachine1/.

The issue is that the memory.limit_in_bytes set on mygroup is being
applied only to the libvirtd process; the VMs are not following it.

1. How can I set this globally so that all virtual machines follow it?
I don't want to create a new group for this if libvirt already
supports it.
2. Is there any way to avoid the extra hierarchy and create the
virtual machines' memory cgroups under /cgroup/memory/kvm/ instead of
/cgroup/memory/mygroup/libvirt/qemu/?

Please help me fix this issue. I have this working on Ubuntu servers,
where I specified the libvirt-qemu user and the VMs follow that cgroup
for memory.


Regards,


using cgroups with KVM

2014-12-14 Thread mad Engineer
Hello all,
I am trying to limit guests' RAM usage using the memory cgroup,
testing on a virtual machine with nested virtualization enabled.

What happens when memory.usage_in_bytes reaches memory.limit_in_bytes?

Is the extra memory going to be swapped out?

Thanks for your help


odd number of VCPU and performance impact?

2014-12-12 Thread mad Engineer
Hi all,
Is it good practice to always create VMs with an even number of vCPUs?
What could be the impact of creating VMs with an odd number of vCPUs
on NUMA or SMP systems? Is there any recommendation?
Thanks


ksmd high cpu usage from almost a week with just one vm running

2014-12-06 Thread mad Engineer
Hello All,
 I am using CentOS 6.5 x64 on a server with 48 G RAM and 8
cores, managed by oVirt.
There is only one running VM, with 34 G RAM and 6 vCPUs (pinned to the
proper NUMA nodes).

from top

top - 06:42:48 up 67 days, 20:05,  1 user,  load average: 0.26, 0.20, 0.17
Tasks: 285 total,   2 running, 282 sleeping,   0 stopped,   1 zombie
Cpu(s):  1.0%us,  1.4%sy,  0.0%ni, 97.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49356468k total, 33977684k used, 15378784k free,   142812k buffers
Swap: 12337144k total,0k used, 12337144k free,   343052k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
  101 root  25   5 000 R 27.4  0.0   5650:04 [ksmd]
26004 vdsm   0 -20 3371m  64m 9400 S  9.8  0.1   1653:27
/usr/bin/python /usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid
20963 qemu  20   0 38.5g  33g 6792 S  3.9 71.6   5225:43
/usr/libexec/qemu-kvm -name Cinder -S -M rhel6.5.0 -cpu Nehalem
-enable-kvm -m 34096 -realtime mlock=off -smp
6,maxcpus=160,sockets=80,c

from /sys/kernel/mm/ksm
pages_unshared  7602322
pages_shared 207023
pages_to_scan   64
pages_volatile  31678
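For what it's worth, those counters suggest ksmd is getting very little return for its scanning. A quick back-of-the-envelope check using the numbers from the post:

```shell
# Fraction of KSM-scanned pages that actually got deduplicated,
# using the pages_shared / pages_unshared counters quoted above.
pages_shared=207023
pages_unshared=7602322
awk -v s="$pages_shared" -v u="$pages_unshared" \
    'BEGIN { printf "%.1f%% of scanned pages are shared\n", 100 * s / (s + u) }'
# -> 2.7% of scanned pages are shared
```

At ~2.7% shared, ksmd keeps re-scanning a mostly unsharable working set, which would fit the sustained CPU usage; raising sleep_millisecs or lowering pages_to_scan under /sys/kernel/mm/ksm should calm it down.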

Any idea why ksmd is not coming back down to normal CPU usage? On a
different server, ksmd had been disabled; when I enabled it for
testing, CPU usage was initially high but later settled down to 3%,
and that host has 4 VMs running.

Before turning ksmd off, can anyone help me find out why it is
behaving like this? Initially this host had 2 virtual machines;
because of this guest's high CPU utilization, the other one was
migrated to another host.

Thanks


Allocating dedicated RAM to host that guest can not use

2014-11-27 Thread mad Engineer
Hi,
Is there any way to set aside some RAM dedicated to the host that
guests cannot use, similar to setting RAM for Dom0 in Xen?

I am overcommitting RAM for the instances, but I don't want the host to swap.

I understand that virtual machines are processes, but can we achieve this?
Thanks


Re: Allocating dedicated RAM to host that guest can not use

2014-11-27 Thread mad Engineer
I have never tried that.
Can we do it transparently, i.e. without setting up cgroups for each
virtual machine?
A global group, such that the combined RAM utilization of all virtual
machines stays within a specific value?

On Thu, Nov 27, 2014 at 4:59 PM, Wanpeng Li wanpeng...@linux.intel.com wrote:
 On Thu, Nov 27, 2014 at 05:12:52PM +0530, mad Engineer wrote:
Hi,
Is there any way to set some RAM dedicated to host that guest can
not access?
Similar to setting RAM to Dom0 in Xen.

I am over committing RAM for the instances but don't want host to swap.

i understand that virtual machines are process,but can we achieve this

 How about limiting the memory the guest can access through the memory cgroup?

 Regards,
 Wanpeng Li


Thanks


Re: Allocating dedicated RAM to host that guest can not use

2014-11-27 Thread mad Engineer
A random thought: can we set a hard RSS limit for the qemu user/group
in limits.conf?

Can this work?
