[vdsm] is ipv6 ready and how to use it?
hi, all. If IPv6 is ready now, can we use it like IPv4? I tried the most basic use cases - adding a node, setting a node's IPv6 address, multiple gateways - but failed, and I do not know why. Where is the IPv6-related code, on both the engine and vdsm sides? Please guide me. Thanks.
___ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel
[vdsm] deepcopy error when saveState
What is the cause of this error?

Thread-20::DEBUG::2014-03-12 19:04:23,191::BindingXMLRPC::161::vds::(wrapper) client [192.168.0.113]
Thread-20::DEBUG::2014-03-12 19:04:23,191::task::563::TaskManager.Task::(_updateState) Task=`30301b19-55ba-4152-98fd-0301d1c3975a`::moving from state init - state preparing
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 329, in _reconstruct
  File "/usr/lib64/python2.7/copy_reg.py", line 93, in __newobj__
TypeError: object.__new__(PyCapsule) is not safe, use PyCapsule.__new__()
Thread-13::DEBUG::2014-03-12 19:04:21,526::vm::2226::vm.Vm::(_startUnderlyingVm) vmId=`6d32200d-43ee-48a5-8ee9-fe6beaf2ca9d`::_ongoingCreations released
Thread-13::INFO::2014-03-12 19:04:21,550::vm::2250::vm.Vm::(_startUnderlyingVm) vmId=`6d32200d-43ee-48a5-8ee9-fe6beaf2ca9d`::Skipping errors on recovery
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2232, in _startUnderlyingVm
    self.lastStatus = 'Up'
  File "/usr/share/vdsm/vm.py", line 1915, in _set_lastStatus
    self.saveState()
  File "/usr/share/vdsm/vm.py", line 2309, in saveState
    self._saveStateInternal()
  File "/usr/share/vdsm/vm.py", line 2320, in _saveStateInternal
    toSave = deepcopy(self.status())
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 230, in _deepcopy_list
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 334, in _reconstruct
  File "/usr/lib64/python2.7/copy.py", line 163, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 257, in _deepcopy_dict
  File "/usr/lib64/python2.7/copy.py", line 190, in deepcopy
  File "/usr/lib64/python2.7/copy.py", line 329, in _reconstruct
  File "/usr/lib64/python2.7/copy_reg.py", line 93, in __newobj__
TypeError: object.__new__(PyCapsule) is not safe, use PyCapsule.__new__()
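The traceback shows deepcopy() hitting a value inside the VM status dict that is backed by a C-level object (a PyCapsule), which copy cannot reconstruct generically. A minimal sketch of a defensive copy that skips such values - this is an illustration of the failure mode, not vdsm's actual fix (here a generator stands in for the non-copyable PyCapsule):

```python
import copy

def safe_deepcopy_dict(d):
    """Deep-copy a dict, dropping values that cannot be deep-copied
    (e.g. values backed by C objects such as PyCapsule)."""
    out = {}
    for key, value in d.items():
        try:
            out[key] = copy.deepcopy(value)
        except TypeError:
            # Non-copyable value: skip it rather than crash saveState
            continue
    return out

# A generator is, like PyCapsule, not deep-copyable and raises TypeError
status = {"vmId": "6d32200d-43ee-48a5-8ee9-fe6beaf2ca9d",
          "capsule_like": (i for i in range(3))}
clean = safe_deepcopy_dict(status)
```

The real fix is usually to keep such objects out of the dict returned by self.status() in the first place, so that deepcopy only ever sees plain data.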
[vdsm] live migration fails, seemingly a libvirt error
hi, all. There are many channels like spicevmc, unix, and vdsm-agent. Can those devices affect live migration? From the following error messages I have no idea where the root cause is; please take a look. Thanks.

Thread-107294::DEBUG::2013-12-26 09:25:24,544::libvirtvm::358::vm.Vm::(cancel) vmId=`5819a89c-7e24-4f5a-8d42-24bfba3a39bf`::canceling migration downtime thread
Thread-107294::DEBUG::2013-12-26 09:25:24,545::libvirtvm::417::vm.Vm::(stop) vmId=`5819a89c-7e24-4f5a-8d42-24bfba3a39bf`::stopping migration monitor thread
Thread-107312::DEBUG::2013-12-26 09:25:24,545::libvirtvm::355::vm.Vm::(run) vmId=`5819a89c-7e24-4f5a-8d42-24bfba3a39bf`::migration downtime thread exiting
Thread-107294::ERROR::2013-12-26 09:25:24,546::vm::196::vm.Vm::(_recover) vmId=`5819a89c-7e24-4f5a-8d42-24bfba3a39bf`::Failed to acquire lock: No such process
Thread-107300::DEBUG::2013-12-26 09:25:24,546::vm::261::vm.Vm::(run) vmId=`43774d10-f550-42f6-81da-6a117df5135e`::migration semaphore acquired
Thread-107294::ERROR::2013-12-26 09:25:24,690::vm::284::vm.Vm::(run) vmId=`5819a89c-7e24-4f5a-8d42-24bfba3a39bf`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 269, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 482, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 518, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 87, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
libvirtError: Failed to acquire lock: No such process
Thread-107300::DEBUG::2013-12-26 09:25:24,709::libvirtvm::457::vm.Vm::(_startUnderlyingMigration) vmId=`43774d10-f550-42f6-81da-6a117df5135e`::starting migration to qemu+tls://testserver1.com/system
Thread-107327::DEBUG::2013-12-26 09:25:24,709::libvirtvm::343::vm.Vm::(run) vmId=`43774d10-f550-42f6-81da-6a117df5135e`::migration downtime thread started
Thread-107328::DEBUG::2013-12-26 09:25:24,709::libvirtvm::379::vm.Vm::(run) vmId=`43774d10-f550-42f6-81da-6a117df5135e`::starting migration monitor thread
libvirtEventLoop::DEBUG::2013-12-26 09:25:25,250::libvirtvm::2860::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::event Resumed detail 0 opaque None
libvirtEventLoop::DEBUG::2013-12-26 09:25:25,307::libvirtvm::2860::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::event Resumed detail 1 opaque None
Thread-107296::DEBUG::2013-12-26 09:25:25,331::libvirtvm::358::vm.Vm::(cancel) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::canceling migration downtime thread
Thread-107296::DEBUG::2013-12-26 09:25:25,332::libvirtvm::417::vm.Vm::(stop) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::stopping migration monitor thread
Thread-107296::ERROR::2013-12-26 09:25:25,332::vm::196::vm.Vm::(_recover) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::Domain not found: no domain with matching name 'testZhuomian-1'
Thread-107314::DEBUG::2013-12-26 09:25:25,332::libvirtvm::355::vm.Vm::(run) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::migration downtime thread exiting
Thread-107296::ERROR::2013-12-26 09:25:25,394::vm::284::vm.Vm::(run) vmId=`bdc42fa3-91f0-48d7-b7ed-4908bb86e262`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 269, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 482, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 518, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 87, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1178, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
libvirtError: Domain not found: no domain with matching name 'testZhuomian-1'
[vdsm] what does the custom option mean in vmCreate?
The parameters received by vmCreate include a long part called custom. Do we use this custom information? And what do the long device_* keys mean? Thanks.
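As far as I can tell, the custom dict carries the custom properties defined for the VM on the engine side (mostly consumed by hooks), while keys of the form device_<id> carry per-device metadata the engine passes through. A toy illustration of splitting such a dict - the key names and the device_ prefix convention here are assumptions for illustration, not vdsm's documented schema:

```python
def split_custom(custom):
    """Separate VM-level custom properties from per-device entries,
    assuming device entries are keyed with a 'device_' prefix."""
    vm_props, dev_props = {}, {}
    for key, value in custom.items():
        if key.startswith("device_"):
            dev_props[key] = value
        else:
            vm_props[key] = value
    return vm_props, dev_props

# Hypothetical input in the shape seen in vmCreate logs
example = {
    "sap_agent": "true",              # illustrative VM-level property
    "device_0001": "VmDevice {...}",  # illustrative per-device entry
}
vm_props, dev_props = split_custom(example)
```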
[vdsm] what is the process of vmMigrate?
hi, all. Please explain what both agents do during migration. What do the destination and source sides each do? Does it use libvirt's migration, or does it create a new vm on the destination through vmCreate? Thanks.
Re: [vdsm] starting a vm in a pool fails - please help test the code
At 2013-09-24 09:01:45, bigclouds bigclo...@163.com wrote:
my code is attached.
1. modify _runHooksDir in hooks.py: add scriptenv['M_vmName'] = vmconf.get('vmName', '') so the hook script can see the vm name
2. modify the vdsm service to call hooks.py instead of hooks.pyc, since hooks.py was modified in step 1
3. cp 40_guestname to /usr/.../.../hooks/before_vm_start/
4. yum install libguestfs-winsupport-1.0-7.el6.x86_64 libguestfs-tools-c-1.16.34-2.el6.x86_64 python-libguestfs-1.16.34-2.el6.x86_64 libguestfs-1.16.34-2.el6.x86_64 libguestfs-tools-1.16.34-2.el6.x86_64
5. restart the vdsm service
6. you need to create a vm belonging to a pool
thanks.

At 2013-09-23 16:51:40, Dan Kenigsberg dan...@redhat.com wrote:
On Sun, Sep 22, 2013 at 05:57:25PM +0800, bigclouds wrote:
hi, Dan. I am happy to contribute my code once it is tested.
Could you at least share the offending domxml?
I am not sure it is related to SELinux, because the error remains whether I enable or disable SELinux (SELINUX=disabled mode). I defined an xml following the vdsm.log info and ran 'virsh start myvm'; the same error occurred. Yet when I copy the command line recorded in libvirtd.log (or myvm.log) when starting a vm and launch it directly, it starts without error. Do you see the confusion: it starts from the command line, but fails through libvirt. I have checked almost everything: permissions, ownership, lv activation, backing file, etc. I am going to cry. (._.)
No need for that ;-) Which libvirt version do you use? Which storage (nfs/block)? Could it be another case of the libvirt regression about supplementary groups? If so, https://rhn.redhat.com/errata/RHSA-2013-1272.html is out and a libvirt upgrade is most welcome.
-error log- testname-1.log--- 2013-09-22 00:50:15.430+: starting up LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name testname-1 -S -M rhel6.4.0 -cpu Nehalem -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 24f7e975-9aa5-4a14-b0f0-590add14c8b5 -smbios type=1,manufacturer=mcVdi,product=mcVdi Node,version=6-4.el6.centos.10,serial=25F59E10-794D-11E1-8835-3440B587CE3F_34:40:b5:87:ce:3f,uuid=24f7e975-9aa5-4a14-b0f0-590add14c8b5 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/testname-1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/c04b1d4f-abeb-4e64-8932-2f325a0a5af4,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=31,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:05:b9,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/testname-1.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/testname-1.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -spice port=5904,tls-port=5905,addr=192.168.5.100,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 char device redirected to /dev/pts/4 qemu-kvm: -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/c04b1d4f-abeb-4e64-8932-2f325a0a5af4,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/c04b1d4f-abeb-4e64-8932
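In outline, the 40_guestname hook described in the steps above could look like the sketch below: a before_vm_start hook edits the domain XML before libvirt sees it. This is a hedged illustration only - the function name and the toy XML are mine, and it leaves out vdsm's actual hook plumbing (reading/writing the XML through the hook environment, and the M_vmName variable exported by the patched hooks.py in step 1):

```python
import xml.dom.minidom

def set_guest_name(domxml, vm_name):
    """Replace the text of the domain's <name> element, as a
    guest-renaming before_vm_start hook might."""
    dom = xml.dom.minidom.parseString(domxml)
    name_elem = dom.getElementsByTagName("name")[0]
    name_elem.firstChild.data = vm_name
    return dom.toxml()

# Toy domain XML, not a real vdsm-generated one
domxml = "<domain type='kvm'><name>placeholder</name></domain>"
renamed = set_guest_name(domxml, "testname-1")
```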
[vdsm] starting a vm in a pool fails
Starting a vm that belongs to a pool fails. In the hook I modify the guest vm's hostname and the path of the backing file of the chain; nothing else. I can manually define an xml (after the hooks ran) and start it without error.
env: libvirt-0.10.2-18.el6_4.5.x86_64, kernel 2.6.32-358.6.2.el6.x86_64, centos6.4
1. where does this error message come from?
Storage.StorageDomain WARNING Could not find mapping for lv d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/cae1bb2d-0529-4287-95a3-13dcb14f082f
2.
Thread-6291::ERROR::2013-09-18 12:44:21,205::vm::683::vm.Vm::(_startUnderlyingVm) vmId=`24f7e975-9aa5-4a14-b0f0-590add14c8b5`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 645, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1529, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2645, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/4
qemu-kvm: -drive file=/rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/cae1bb2d-0529-4287-95a3-13dcb14f082f,if=none,id=drive-ide0-0-0,format=qcow2,serial=ac025dc1-4e25-4b71-8c56-88dcb61b9f09,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /rhev/data-center/7828f2ae-955e-4e4b-a4bb-43807629dc52/d028d521-d4a9-4dd7-a0fe-3e9b60e7c4e4/images/ac025dc1-4e25-4b71-8c56-88dcb61b9f09/cae1bb2d-0529-4287-95a3-13dcb14f082f: Operation not permitted
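"could not open disk image ... Operation not permitted" typically means the (unprivileged) qemu process cannot open some link of the image chain, even though a root shell can. A quick, hedged way to localize which link is at fault - the chain path below is illustrative, and note that running this check as root may mask permission problems qemu would hit:

```python
import os

def unreadable_links(paths):
    """Return the chain links that are missing or unreadable to the
    current user (checked with os.access)."""
    return [p for p in paths
            if not os.path.exists(p) or not os.access(p, os.R_OK)]

# Illustrative: list every link of the qcow2 backing chain, leaf first
chain = ["/rhev/data-center/no-such-volume"]
bad = unreadable_links(chain)
```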
[vdsm] question about injectfile hook
hi, shaharh: Recently I have been working on injecting files into a guest vm, and I found a hook called injectfile in vdsm. I am confused about why the injectfile hook treats the qcow2 format as a special case that cannot be handled; could you explain the reason? As far as I know, libguestfs is certainly aware of qcow2 and all its features, such as backing_file. It is a little hard to inject files into an image that has backing files (maybe a whole backing chain), especially for block-type images with thin provisioning, like a vm in a pool. I am now writing code to inject files into a vm in a pool. Please share your ideas, and answer my question. Thanks.
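For what it's worth, libguestfs' virt-copy-in tool does open qcow2 images and follows their backing chain (as long as every link is reachable from the host). A hedged sketch of driving it from Python - the image and file paths are illustrative, and error handling is omitted:

```python
import subprocess

def inject_file_cmd(image, local_file, guest_dir):
    """Build a virt-copy-in command line that copies a local file into
    a guest image; -a attaches the disk image (qcow2 included)."""
    return ["virt-copy-in", "-a", image, local_file, guest_dir]

cmd = inject_file_cmd("/dev/vg0/leaf-volume", "/tmp/payload.txt", "/etc/")
# To actually run it: subprocess.check_call(cmd)
```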
[vdsm] question related to bootstrap
hi, all. In bootstrap.py there is a function named '_addNetwork'. What is the meaning of its parameters (vdcName, vdcPort), and what is the purpose of waitRouteRestore? Thanks.
[vdsm] how does vdsm implement qcow2 on lvm?
hi, all. Could you tell me how vdsm implements qcow2 on LVM, and how an lv in qcow2 format supports a backing file? If the implementation is done the normal way, why do I find that libguestfs does not recognize lvs (qcow2 format) with a backing file? Thanks.
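The key point, as I understand it, is that an LV is just a block device, and the qcow2 container is written directly onto it; the backing file is simply a path string recorded in the qcow2 header, pointing at the parent volume. A small sketch that reads that header field (field offsets per the qcow2 specification; the synthetic "image" at the end is only to demonstrate - a real LV device path would be opened the same way):

```python
import struct
import tempfile

QCOW2_MAGIC = 0x514649FB  # the bytes 'QFI\xfb'

def qcow2_backing_file(path):
    """Return the backing-file string recorded in a qcow2 header,
    or None. Works on a plain file or a block device such as an LV."""
    with open(path, "rb") as f:
        header = f.read(72)
        magic, _version = struct.unpack(">II", header[:8])
        if magic != QCOW2_MAGIC:
            return None  # not a qcow2 container
        backing_offset, backing_size = struct.unpack(">QI", header[8:20])
        if backing_offset == 0:
            return None  # no backing file recorded
        f.seek(backing_offset)
        return f.read(backing_size).decode()

# Demonstrate on a synthetic header: magic, version 3, backing file
# name stored right after the 72-byte header prefix
name = b"../parent-volume"
blob = struct.pack(">IIQI", QCOW2_MAGIC, 3, 72, len(name)).ljust(72, b"\0") + name
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(blob)
print(qcow2_backing_file(tmp.name))  # ../parent-volume
```

If libguestfs fails on such an LV, a plausible cause is that the backing path recorded in the header is relative to a directory layout (e.g. under /rhev/data-center/...) that is not visible where libguestfs opens the image.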
[vdsm] what is 'qcow2 on LV' and how do nodes communicate?
hi, all. 1. It seems that vdsm has the capability to thin-provision an lv (logical volume); could you please explain the theory behind it? 2. I did not notice any code through which NODEs can communicate with each other, the so-called mail-box. Does it vary depending on the volume type (block, fs)? What is the mail-box, and how do nodes communicate through it? Thanks so much.
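On question 2, my understanding is that the hosts do not talk to each other over the network for this: regular hosts and the SPM exchange fixed-size messages through dedicated LVs on the master storage domain (the storage "mailbox"), which is how a host asks the SPM to extend a thin qcow2 LV that is running out of space. The message layout below is purely illustrative, not vdsm's wire format:

```python
import struct

# Illustrative fixed-size message: 16-byte volume id + 8-byte size,
# big-endian. NOT vdsm's actual mailbox encoding.
MSG_FMT = ">16sQ"
MSG_SIZE = struct.calcsize(MSG_FMT)

def encode_extend_request(vol_id, new_size_bytes):
    """Pack an 'extend this volume' request for writing into the host's
    slot of the shared mailbox device."""
    vol = vol_id.encode()[:16].ljust(16, b"\0")
    return struct.pack(MSG_FMT, vol, new_size_bytes)

def decode_extend_request(blob):
    """Unpack a request the SPM read back from the mailbox device."""
    vol, size = struct.unpack(MSG_FMT, blob[:MSG_SIZE])
    return vol.rstrip(b"\0").decode(), size

msg = encode_extend_request("cae1bb2d", 4 << 30)
print(decode_extend_request(msg))  # ('cae1bb2d', 4294967296)
```

The design point is that any host with access to the shared block storage can participate, with no extra network path between nodes; file-based (fs) domains can grow files directly, so the mailbox matters mainly for block domains.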
[vdsm] vm down for client reset
hi, all.
1. It seems there are many race conditions between channels. For example, all channels of a vm connect to just one client; if that client goes down, a variable shared between the channels should be set. Here is my problem: 1. the server is aware that the client is down (reset); 2. snd_receive still compares the data counts. Why? Why the assert? The threads do not communicate with each other.
--- log ---
handle_dev_display_connect: connect
handle_new_display_channel: add display channel client
handle_new_display_channel: add display channel client
handle_new_display_channel: New display (client 0x7fe2408720f0) dcc 0x7fe1d899bc20 stream 0x7fe240cebbe0
handle_new_display_channel: jpeg disabled
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
handle_new_display_channel: zlib-over-glz disabled
listen_to_new_client_channel: NEW ID = 0
reds_show_new_channel: channel 9:2, connected successfully, over Secure link
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 9:3, connected successfully, over Secure link
reds_handle_auth_mechanism: Auth method: 1
reds_show_new_channel: channel 9:1, connected successfully, over Secure link
display_channel_client_wait_for_init: creating encoder with id == 0
red_channel_client_disconnect: 0x7fe240e09030 (channel 0x7fe2407cd2f0 type 3 id 0)
red_peer_receive: Connection reset by peer
red_channel_client_disconnect: 0x7fe1d899bc20 (channel 0x7fe1d80458d0 type 2 id 0)
display_channel_client_on_disconnect:
red_channel_client_disconnect: 0x7fe1d80c8890 (channel 0x7fe1d8045e90 type 4 id 0)
snd_channel_put: sound channel freed
snd_receive: ASSERT n failed
/usr/lib64/libspice-server.so.1(+0xbf465)[0x7fe23d176465]
/usr/lib64/libspice-server.so.1(+0x45d2a)[0x7fe23d0fcd2a]
/usr/lib64/libspice-server.so.1(+0x45d80)[0x7fe23d0fcd80]
/usr/libexec/qemu-kvm(+0x625df)[0x7fe23ef315df]
/usr/libexec/qemu-kvm(+0x8428a)[0x7fe23ef5328a]
/usr/libexec/qemu-kvm(main+0x154c)[0x7fe23ef3412c]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x7fe23c8a8cdd]
/usr/libexec/qemu-kvm(+0x5f149)[0x7fe23ef2e149]
2013-05-10 05:55:38.945+: shutting down
[vdsm] 'cannot acquire state change lock' reappears
I encountered a libvirt error that was first reported in 2011, while trying to update the ticket (spice password) of a vm. https://bugzilla.redhat.com/show_bug.cgi?id=676205
libvirt-0.10.2-18.el6_4.4.x86_64
libvirt-client-0.10.2-18.el6_4.4.x86_64
libvirt-python-0.10.2-18.el6_4.4.x86_64
libvirt-lock-sanlock-0.10.2-18.el6_4.4.x86_64
--- log ---
Thread-162::ERROR::2013-05-08 13:50:46,000::BindingXMLRPC::909::vdsm::(wrapper) libvirt error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 904, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 213, in vmSetTicket
    return vm.setTicket(password, ttl, existingConnAction, params)
  File "/usr/share/vdsm/API.py", line 550, in setTicket
    return v.setTicket(password, ttl, existingConnAction, params)
  File "/usr/share/vdsm/libvirtvm.py", line 2245, in setTicket
    self._dom.updateDeviceFlags(graphics.toxml(), 0)
  File "/usr/share/vdsm/libvirtvm.py", line 524, in f
    raise toe
TimeoutError: Timed out during operation: cannot acquire state change lock
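Until the underlying libvirt job-lock issue is diagnosed, one common caller-side workaround is to retry the operation after a short delay, since the lock is usually held by another in-flight job. A generic, hedged sketch - this is not vdsm's actual handling, and the built-in TimeoutError stands in for vdsm's own timeout class:

```python
import time

def retry_on_timeout(op, attempts=3, delay=1.0, timeout_exc=TimeoutError):
    """Call op(), retrying up to `attempts` times when it raises
    timeout_exc, sleeping `delay` seconds between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except timeout_exc:
            if attempt == attempts:
                raise  # give up: surface the last timeout
            time.sleep(delay)

# Demonstration with a fake operation that succeeds on the third call
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("cannot acquire state change lock")
    return "ok"

result = retry_on_timeout(flaky, attempts=5, delay=0)
```

Retrying only papers over the symptom, of course; if the lock never frees, something is stuck holding the domain job and restarting libvirtd or investigating the stuck job is the real fix.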
[vdsm] Too many open files
Thread-19::DEBUG::2013-05-09 09:02:15,701::domainMonitor::170::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 3a570ac5-8814-4255-9834-e45d14939007
Thread-19::DEBUG::2013-05-09 09:02:15,701::misc::83::Storage.Misc.excCmd::(lambda) '/usr/bin/sudo -n /sbin/lvm vgs --config devices { preferred_names = [\\^/dev/mapper/\\] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\r%.*%\\ ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 3a570ac5-8814-4255-9834-e45d14939007' (cwd None)
Thread-19::ERROR::2013-05-09 09:02:15,701::sdc::150::Storage.StorageDomainCache::(_findDomain) Error while looking for domain `3a570ac5-8814-4255-9834-e45d14939007`
Traceback (most recent call last):
  File "/usr/share/mcvda/storage/sdc.py", line 145, in _findDomain
  File "/usr/share/mcvda/storage/blockSD.py", line 1221, in findDomain
  File "/usr/share/mcvda/storage/blockSD.py", line 1191, in findDomainPath
  File "/usr/share/mcvda/storage/lvm.py", line 810, in getVG
  File "/usr/share/vdsm/storage/lvm.py", line 542, in getVg
  File "/usr/share/vdsm/storage/lvm.py", line 397, in _reloadvgs
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
  File "/usr/share/vdsm/storage/misc.py", line 1204, in acquireContext
  File "/usr/share/vdsm/storage/lvm.py", line 369, in _reloadvgs
  File "/usr/share/vdsm/storage/lvm.py", line 305, in cmd
  File "/usr/share/vdsm/storage/misc.py", line 198, in execCmd
  File "/usr/lib64/python2.6/site-packages/vdsm/betterPopen/__init__.py", line 46, in __init__
  File "/usr/lib64/python2.6/subprocess.py", line 632, in __init__
  File "/usr/lib64/python2.6/subprocess.py", line 1055, in _get_handles
OSError: [Errno 24] Too many open files
Thread-19::ERROR::2013-05-09 09:02:15,702::domainMonitor::208::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 3a570ac5-8814-4255-9834-e45d14939007 monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 186, in _monitorDomain
  File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
  File "/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce
  File "/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain
StorageDomainDoesNotExist: Storage domain does not exist: ('3a570ac5-8814-4255-9834-e45d14939007',)
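EMFILE here means the vdsm process itself has exhausted its file-descriptor limit, so every later subprocess.Popen fails in _get_handles (and the domain lookup only fails as a side effect). Two quick Linux-side checks, sketched in Python (assumes a /proc filesystem):

```python
import os
import resource

def open_fd_count():
    """Number of file descriptors currently open in this process
    (Linux: each open fd appears as an entry in /proc/self/fd)."""
    return len(os.listdir("/proc/self/fd"))

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open fds:", open_fd_count(), "soft limit:", soft)
# Watching open_fd_count() grow toward the soft limit over time shows
# whether descriptors are leaking (e.g. unclosed pipes from spawned
# lvm commands), which is the usual root cause of Errno 24.
```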
[vdsm] what is vmPayload used for?
hi, all: What is vmPayload used for? Thanks.
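For context: as I understand it, vmPayload is how the engine hands small files (for example sysprep or cloud-init style data) to vdsm, which builds a floppy or CD-ROM image from them and attaches it to the guest so the files are visible inside the vm at first boot. The exact key names below are an assumption based on the shape of such devices in vdsm logs, not a documented schema:

```python
# Hypothetical shape of a cdrom device carrying a vmPayload; the file
# content is base64 text in this illustration.
payload_device = {
    "type": "disk",
    "device": "cdrom",
    "specParams": {
        "vmPayload": {
            "volId": "config-1",            # illustrative volume id
            "file": {
                "payload.cfg": "aGVsbG8=",  # base64 for "hello"
            },
        },
    },
}
```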
Re: [vdsm] [Engine-devel] Fwd: the procedure of storage-related
hi, maybe I did not make myself fully understood. I am reading the vdsm code, but it is hard for me to understand StorageDomain and StoragePool. Before a StorageDomain/StoragePool can work, the engine must prepare many things. I want to know the call flow (process) which must be completed to make a StorageDomain/StoragePool work.
1. For example, which functions should be called to make a StorageDomain work? Is it poolConnectStorageServer -> poolCreate -> poolConnect -> domainCreate -> domainActivate?
thanks

At 2013-04-28 17:40:29, Vered Volansky ve...@redhat.com wrote:
- Forwarded Message -
From: Vered Volansky ve...@redhat.com
To: bigclouds bigclo...@163.com
Sent: Sunday, April 28, 2013 12:39:31 PM
Subject: Re: [Engine-devel] the procedure of storage-related
Hi,
Find my answers below.
Best Regards, Vered

- Original Message -
From: bigclouds bigclo...@163.com
To: engine-devel engine-de...@ovirt.org
Sent: Saturday, April 27, 2013 8:40:56 AM
Subject: [Engine-devel] the procedure of storage-related
hi, all
1. I am not yet very familiar with the proper storage procedure. Please give me a simple introduction to the concepts and relations of StoragePool, StorageDomain, image, and volume, and help me out.

You should find the following link helpful: http://www.ovirt.org/Vdsm_Storage_Terminology
As for your other questions, you might find the following guide helpful: http://www.ovirt.org/Quick_Start_Guide#Configure_Storage
If not, please rephrase them, as they're not clear enough and I'd rather not guess.

2. Please help me confirm: I find only one place that calls the mount command, which is poolConnectStorageServer. Is poolConnectStorageServer done before creating and activating the StoragePool/StorageDomain?
3. Does attach just modify metadata, and nothing else?
4. In sp.py, deactivateSD: if I deactivate the ISO storage, it finally umounts masterDir. Why, if masterDir does not belong to the master sd?
thanks
___ Engine-devel mailing list engine-de...@ovirt.org http://lists.ovirt.org/mailman/listinfo/engine-devel