http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses

2011-12-13 Thread Dor Laor
Hi,

I read the design here and I'd like to make sure that the future roadmap
will expand beyond the current scope.

The current design relies entirely on libvirt and does not parse the
content of the PCI addressing. That's really basic. The user
should be able to specify the PCI slot allocation of his devices through
the GUI. I guess you won't be able to do that with the current scheme.
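
For example, pinning a hot-plugged NIC to an explicit PCI slot ultimately
comes down to a libvirt call along these lines (a sketch only; the VM name,
network and slot values are placeholders, not part of the design):

    # Sketch only: hot-plug a virtio NIC at a fixed PCI slot via libvirt.
    # 'myvm', the 'default' network and slot 0x07 are placeholder values.
    import libvirt

    NIC_XML = """
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')
    dom.attachDeviceFlags(NIC_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()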

Also, what about devices that can't be hot plugged (like qxl)? You need to
reveal this info to the user. Currently we have the ability in the KVM BIOS
(SeaBIOS) to automatically disable hot plug of some critical
devices like the VGA device (qxl) and others. The user should only be allowed
to hot plug/unplug permitted devices.

You have to make your design work with PCI bridges, since we'll add them to
qemu, and once a VM has such a bridge (management should enable it)
there will be more PCI slots available to that VM.

Regards,
Dor
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: vdsmd can't open connection with libvirt

2011-12-13 Thread Jenna Johnson
Yes! Thanks, Mark, my problem got fixed :)

On 13 December 2011 03:43, Mark Wu <wu...@linux.vnet.ibm.com> wrote:

 Jenna,
 Probably it's caused by the problem fixed in
 http://gerrit.ovirt.org/#change,216 -- you could give it a try.


 On 12/13/2011 02:16 PM, Jenna Johnson wrote:

 Guys,

 I installed the most recent vdsmd and vdsClient on RHEL 6.2. Could anybody
 help to take a look?

 *After installation, vdsmd can't open a connection with libvirt:*
   MainThread::INFO::2011-12-13 09:41:17,667::vdsm::76::vds::(run) VDSM
 main thread ended. Waiting for 1 other threads...
 MainThread::INFO::2011-12-13 09:41:17,667::vdsm::79::vds::(run)
 _MainThread(MainThread, started 140169981224704)
 MainThread::INFO::2011-12-13 09:41:17,667::vdsm::79::vds::(run)
 Thread(libvirtEventLoop, started daemon 140169902909184)
 MainThread::INFO::2011-12-13 09:41:17,715::vdsm::71::vds::(run) I am the
 actual vdsm 4.9-0
 MainThread::ERROR::2011-12-13 09:41:17,881::vdsm::74::vds::(run) Traceback
 (most recent call last):
   File "/usr/share/vdsm/vdsm", line 72, in run
 serve_clients(log)
   File "/usr/share/vdsm/vdsm", line 40, in serve_clients
 cif = clientIF.clientIF(log)
   File "/usr/share/vdsm/clientIF.py", line 113, in __init__
 self._libvirt = libvirtconnection.get()
   File "/usr/share/vdsm/libvirtconnection.py", line 111, in get
 conn = libvirt.openAuth('qemu:///system', auth, 0

 *libvirt client trace:*
 09:41:20.182: 11842: debug : doRemoteOpen:511 : proceeding with name =
 qemu:///system
 09:41:20.182: 11842: debug : doRemoteOpen:521 : Connecting with transport 1
 09:41:20.182: 11842: debug : doRemoteOpen:568 : Proceeding with sockname
 /var/run/libvirt/libvirt-sock
 09:41:20.182: 11842: debug : doRemoteOpen:648 : Trying authentication
 09:41:20.183: 11842: debug : remoteAuthSASL:2618 : Client initialize SASL
 authentication
 09:41:20.185: 11842: debug : remoteAuthSASL:2680 : Client start
 negotiation mechlist 'DIGEST-MD5'
 09:41:20.185: 11842: debug : remoteAuthSASL:2730 : Server start
 negotiation with mech DIGEST-MD5. Data 0 bytes (nil)
 09:41:20.185: 11842: debug : remoteAuthSASL:2744 : Client step result
 complete: 0. Data 129 bytes 0x1fa7650
 09:41:20.185: 11842: debug : remoteAuthSASL:2787 : Client step result 1.
 Data 255 bytes 0x204eaa0
 09:41:20.185: 11842: debug : remoteAuthSASL:2800 : Server step with 255
 bytes 0x204eaa0
 09:41:20.186: 11842: error : virNetClientProgramDispatchError:170 :
 authentication failed: authentication failed
 09:41:20.186: 11842: debug : do_open:1062 : driver 2 remote returned ERROR

 *libvirtd trace:*
 13:55:36.455: 2213: error : remoteDispatchAuthSaslStep:2107 :
 authentication failed: authentication failed
 13:55:36.477: 2208: error : virNetSocketReadWire:911 : End of file while
 reading data: Input/output error
 13:55:36.664: 2212: error : virNetSASLSessionServerStep:624 :
 authentication failed: Failed to start SASL negotiation: -13 (SASL(-13):
 authentication failure: client response doesn't match what we generated)

 *checked the sasldb user list:*
 sasldblistusers2 shows the vdsm user

 *libvirtd.conf:*
 listen_addr=0 # by vdsm
 unix_sock_group=kvm # by vdsm
 unix_sock_rw_perms=0770 # by vdsm
 auth_unix_rw=sasl # by vdsm
 save_image_format=lzop # by vdsm
 log_outputs=1:file:/var/log/libvirtd.log # by vdsm
 log_filters=1:libvirt 3:event 3:json 1:util 1:qemu # by vdsm
 auth_tcp=none # by vdsm
 listen_tcp=1 # by vdsm
 listen_tls=0 # by vdsm
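
 A quick way to isolate whether the sasldb entry itself works, independently
 of vdsm, is a minimal libvirt.openAuth() check along these lines (a sketch
 only; the username and password are placeholders -- use whatever
 sasldblistusers2 reports and its matching password):

    # Sketch: test the libvirt SASL credentials directly, outside of vdsm.
    # SASL_USER/SASL_PASS are placeholders.
    import libvirt

    SASL_USER = 'vdsm@ovirt'   # placeholder, adjust to your setup
    SASL_PASS = 'secret'       # placeholder

    def request_cred(credentials, user_data):
        # libvirt calls this back for each credential it needs;
        # fill in the value and return 0 for success.
        for cred in credentials:
            if cred[0] == libvirt.VIR_CRED_AUTHNAME:
                cred[4] = SASL_USER
            elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
                cred[4] = SASL_PASS
        return 0

    auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
            request_cred, None]

    conn = libvirt.openAuth('qemu:///system', auth, 0)
    print(conn.getHostname())  # succeeds only if SASL authentication worked
    conn.close()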



 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel



___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [Engine-devel] http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses

2011-12-13 Thread Livnat Peer
On 12/13/2011 11:22 AM, Igor Lvovsky wrote:
 -Original Message-
 From: vdsm-devel-boun...@lists.fedorahosted.org [mailto:vdsm-devel-
 boun...@lists.fedorahosted.org] On Behalf Of Dor Laor
 Sent: Tuesday, December 13, 2011 10:02 AM
 To: engine-de...@ovirt.org; vdsm-devel@lists.fedorahosted.org
 Subject: http://www.ovirt.org/wiki/Features/Design/StablePCIAddresses

 Hi,

 I read the design here and I'd like to make sure that the future roadmap
 will expand beyond the current scope.

 The current design relies entirely on libvirt and does not parse the
 content of the PCI addressing. That's really basic. The user
 should be able to specify the PCI slot allocation of his devices through
 the GUI. I guess you won't be able to do that with the current scheme.
 
 We know that the current design is not sufficient. This is exactly why I am
 working right now on a new one that will give the manager the ability to
 change PCI addresses per device. But in any case, we are planning for the
 first address allocation to be done by libvirt, with vdsm returning it to
 the manager.
 I am not sure whether it will be accessible via the GUI. Livnat?

Not sure we'll expose it in the UI/API for editing in the first version;
more likely it will be included in view-only mode first and then extended
for user manipulation.

Anyway, we are working on a more explicit API than a blob, as Igor wrote.

Livnat
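
For reference, the addresses libvirt picks on the first run are already
visible in the domain XML, which is the information vdsm would report up to
the manager. A minimal sketch of reading them back (the VM name is a
placeholder):

    # Sketch: list the PCI addresses libvirt assigned to a domain's devices.
    # 'myvm' is a placeholder VM name.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('myvm')

    tree = ElementTree.fromstring(dom.XMLDesc(0))
    for dev in tree.find('devices'):
        addr = dev.find('address')
        if addr is not None and addr.get('type') == 'pci':
            print('%-10s bus=%s slot=%s function=%s' % (
                dev.tag, addr.get('bus'), addr.get('slot'),
                addr.get('function')))
    conn.close()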

 

 Also, what about devices that can't be hot plugged (like qxl)? You need to
 reveal this info to the user. Currently we have the ability in the KVM BIOS
 (SeaBIOS) to automatically disable hot plug of some critical
 devices like the VGA device (qxl) and others. The user should only be allowed
 to hot plug/unplug permitted devices.

 You have to make your design work with PCI bridges, since we'll add them to
 qemu, and once a VM has such a bridge (management should enable it)
 there will be more PCI slots available to that VM.

 Regards,
 Dor
 ___
 vdsm-devel mailing list
 vdsm-devel@lists.fedorahosted.org
 https://fedorahosted.org/mailman/listinfo/vdsm-devel
 ___
 Engine-devel mailing list
 engine-de...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [Engine-devel] shared fs support

2011-12-13 Thread Saggi Mizrahi
On Sun 11 Dec 2011 10:15:23 AM EST, Andrew Cathrow wrote:


 - Original Message -
 From: Saggi Mizrahi <smizr...@redhat.com>
 To: VDSM Project Development <vdsm-devel@lists.fedorahosted.org>,
 engine-de...@ovirt.org
 Sent: Friday, December 9, 2011 5:41:42 PM
 Subject: [Engine-devel] shared fs support


 Hi, I have preliminary (WIP) patches for shared FS up on gerrit.
 There is a lot of work to be done reorganizing the patches but I
 just wanted all the TLV guys to have a chance to look at it on
 Sunday.

 I did some testing and it should work as expected for most cases.

 To test, just call connectStorageServer with storageType=6 (sharedfs);
 the connection params are
 {'id': 1,
  'spec': 'server:/export',
  'vfs_type': 'nfs\gluster\smb',
  'mnt_options': 'opt,opt=3,opt'}

 To check with an existing NFS domain you can just use
 spec=server:/export
 vfs_type=nfs
 mnt_options=soft,timeo=600,retrans=6,nosharecache,vers=3
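
 Roughly, those parameters map to something like the following client-side
 sketch (the plain-HTTP transport, port 54321 and the exact
 connectStorageServer call shape are assumptions here and may differ;
 vdsm normally uses SSL):

    # Sketch: build the sharedfs connection list and hand it to vdsm.
    # Transport, port and the connectStorageServer(domType, spUUID, conList)
    # signature are assumptions -- adjust to your setup.
    import uuid
    import xmlrpclib

    SHAREDFS = 6  # storageType mentioned above

    conn_params = {
        'id': '1',
        'spec': 'server:/export',
        'vfs_type': 'nfs',  # or 'gluster', 'smb', ...
        'mnt_options': 'soft,timeo=600,retrans=6,nosharecache,vers=3',
    }

    server = xmlrpclib.ServerProxy('http://localhost:54321')
    res = server.connectStorageServer(SHAREDFS, str(uuid.uuid4()), [conn_params])
    print(res)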

 So does that mean that we treat NFS custom types differently, e.g. using the
 out-of-process stuff?



 I only tested NFS but I am going to test more exotic stuff on Monday.

 This is the patch to build the RPM from.
 http://gerrit.ovirt.org/#change,560

 Have a good weekend

 ___
 Engine-devel mailing list
 engine-de...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel


Using the custom NFS type will give you the tested, supported options and
limits. Using sharedfs will give you a generic implementation.
Currently the underlying implementation is the same, but there is a
plan to use a simpler implementation (without using OOP, as it's an
NFS-specific hack) and also to lose the stale-handle checks and other
NFS-specific stuff.
___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel


Re: [Engine-devel] shared fs support

2011-12-13 Thread Dan Kenigsberg
On Tue, Dec 13, 2011 at 02:57:33PM -0500, Saggi Mizrahi wrote:
 On Sun 11 Dec 2011 10:15:23 AM EST, Andrew Cathrow wrote:
 
 
  - Original Message -
  From: Saggi Mizrahi <smizr...@redhat.com>
  To: VDSM Project Development <vdsm-devel@lists.fedorahosted.org>,
  engine-de...@ovirt.org
  Sent: Friday, December 9, 2011 5:41:42 PM
  Subject: [Engine-devel] shared fs support
 
 
  Hi, I have preliminary (WIP) patches for shared FS up on gerrit.
  There is a lot of work to be done reorganizing the patches but I
  just wanted all the TLV guys to have a chance to look at it on
  Sunday.
 
  I did some testing and it should work as expected for most cases.
 
  To test, just call connectStorageServer with storageType=6 (sharedfs);
  the connection params are
  {'id': 1,
   'spec': 'server:/export',
   'vfs_type': 'nfs\gluster\smb',
   'mnt_options': 'opt,opt=3,opt'}
 
  To check with an existing NFS domain you can just use
  spec=server:/export
  vfs_type=nfs
  mnt_options=soft,timeo=600,retrans=6,nosharecache,vers=3
 
  So does that mean that we treat NFS custom types differently, e.g. using the
  out-of-process stuff?
 
 
 
  I only tested NFS but I am going to test more exotic stuff on Monday.
 
  This is the patch to build the RPM from.
  http://gerrit.ovirt.org/#change,560
 
  Have a good weekend
 
  ___
  Engine-devel mailing list
  engine-de...@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/engine-devel
 
 
 Using the custom NFS type will give you the tested, supported options and
 limits. Using sharedfs will give you a generic implementation.
 Currently the underlying implementation is the same, but there is a
 plan to use a simpler implementation (without using OOP, as it's an
 NFS-specific hack) and also to lose the stale-handle checks and other
 NFS-specific stuff.

Without proof to the contrary, I would suspect that other shared file
systems have the same tendency to disappear, leaving client applications
in D state. We may need the oop hack for them, too.

___
vdsm-devel mailing list
vdsm-devel@lists.fedorahosted.org
https://fedorahosted.org/mailman/listinfo/vdsm-devel