Re: [Engine-devel] 3.3 scratch or upgraded installation must use Apache proxy (https://bugzilla.redhat.com/905754)

2013-05-27 Thread Simon Grinberg


- Original Message -
 From: Alon Bar-Lev alo...@redhat.com
 To: Sandro Bonazzola sbona...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org, Simon Grinberg 
 si...@redhat.com
 Sent: Thursday, May 23, 2013 6:18:39 PM
 Subject: Re: [Engine-devel] 3.3 scratch or upgraded installation must use 
 Apache  proxy
 (https://bugzilla.redhat.com/905754)
 
 
 
 - Original Message -
  From: Sandro Bonazzola sbona...@redhat.com
  To: Alon Bar-Lev alo...@redhat.com
  Cc: Barak Azulay bazu...@redhat.com, engine-devel
  engine-devel@ovirt.org, Alex Lourie alou...@redhat.com,
  Simon Grinberg si...@redhat.com
  Sent: Thursday, May 23, 2013 6:11:01 PM
  Subject: Re: [Engine-devel] 3.3 scratch or upgraded installation
  must use Apache proxy
  (https://bugzilla.redhat.com/905754)
  
  On 23/05/2013 17:08, Alon Bar-Lev wrote:
  
   - Original Message -
   From: Alon Bar-Lev alo...@redhat.com
   To: Sandro Bonazzola sbona...@redhat.com
   Cc: Barak Azulay bazu...@redhat.com, engine-devel
   engine-devel@ovirt.org, Alex Lourie alou...@redhat.com,
   Simon Grinberg si...@redhat.com
   Sent: Thursday, May 23, 2013 6:07:31 PM
   Subject: Re: [Engine-devel] 3.3 scratch or upgraded installation
   must use
   Apache   proxy
   (https://bugzilla.redhat.com/905754)
  
  
  
   - Original Message -
   From: Sandro Bonazzola sbona...@redhat.com
   To: Alon Bar-Lev alo...@redhat.com
   Cc: Barak Azulay bazu...@redhat.com, engine-devel
   engine-devel@ovirt.org, Alex Lourie alou...@redhat.com
   Sent: Thursday, May 23, 2013 5:26:19 PM
   Subject: Re: [Engine-devel] 3.3 scratch or upgraded
   installation must use
   Apache  proxy
   (https://bugzilla.redhat.com/905754)
  
    On 23/05/2013 16:19, Alon Bar-Lev wrote:
   - Original Message -
   From: Sandro Bonazzola sbona...@redhat.com
   To: Alon Bar-Lev alo...@redhat.com
   Cc: Barak Azulay bazu...@redhat.com, engine-devel
   engine-devel@ovirt.org, Alex Lourie alou...@redhat.com
   Sent: Thursday, May 23, 2013 5:01:58 PM
   Subject: Re: [Engine-devel] 3.3 scratch or upgraded
   installation must
   use
   Apacheproxy
   (https://bugzilla.redhat.com/905754)
   snip
  
    I think I was missing something.
    I don't know if other distros do the same, but on Fedora 18
    freeipa-server has a package conflict with mod_ssl.
    So it is not possible to have both IPA and the oVirt engine on the same
    host.
    This should also answer for post-IPA installation on Fedora.

That should not be a problem.
Don't forget that one of the popular use cases for virtualization is to resolve 
conflicts between services while still using the same amount of hardware. Instead of 
investing a huge effort in resolving conflicts, you segregate services into 
virtual machines that in turn can share the same physical server. 


  
   I think the best thing to do here is just warn that we are
   requiring
   mod_ssl when enabling SSL support so any service that has
   conflicts
   like
   freeipa-server will have issues
   and let the administrator decide what to do.
   We cannot warn... we attempt to configure it... and we do
   depend on
   it...
   So either it is installed before we run setup or we install it
   during(?!?!)
   setup.
  
   Alon
    So let me try to reverse the logic.
    Can't we drop the dependency on mod_ssl and warn that if you want SSL
    support you have to install mod_ssl, allowing the user to abort,
    install the module and run engine-setup again?
   Right... this is just for 3.3...
  
   This matches the logic of multiple execution of setup to add new
   components
   :)
  
    Not sure this will be acceptable to those who like 'simple one-click'
    installation.

No it won't - but it's a moot point by now 


  
   Regards.
   Alon
   Oh... sorry, we cannot work without SSL... it is not just a
   matter of
   support.
  
    We have to have a valid SSL configuration or the product will not work.
  
   Alon
  
  Do we need the configuration in place even if we don't use mod_ssl?
 
 We need an active SSL configuration, as the application automatically
 redirects users to https.
 
 Alon
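
To make the trade-off concrete, here is a minimal sketch of the kind of pre-setup check discussed in this thread, outside any real engine-setup plugin framework; only `rpm -q mod_ssl` is a real command, and the messages and prompt are illustrative assumptions:

```python
# A minimal sketch: warn if mod_ssl is missing and let the admin abort.
import subprocess
import sys

def mod_ssl_installed():
    # rpm -q exits non-zero when the package is not installed
    return subprocess.call(["rpm", "-q", "mod_ssl"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

if not mod_ssl_installed():
    print("Warning: mod_ssl is not installed, and the engine requires an")
    print("active SSL configuration (users are redirected to https).")
    print("Note: packages such as freeipa-server conflict with mod_ssl.")
    if input("Abort setup to install mod_ssl first? [y/N]: ").lower() == "y":
        sys.exit(1)
```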
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] rewrite the engine-notifier service wrapper

2013-04-23 Thread Simon Grinberg


- Original Message -
 From: Yair Zaslavsky yzasl...@redhat.com
 To: Alon Bar-Lev alo...@redhat.com
 Cc: Juan Hernandez jhern...@redhat.com, Simon Grinberg 
 si...@redhat.com, engine-devel
 engine-devel@ovirt.org
 Sent: Tuesday, April 23, 2013 10:07:39 AM
 Subject: Re: [Engine-devel] rewrite the engine-notifier service wrapper
 
 
 
 - Original Message -
  From: Alon Bar-Lev alo...@redhat.com
  To: engine-devel engine-devel@ovirt.org
  Cc: Juan Hernandez jhern...@redhat.com, Simon Grinberg
  si...@redhat.com
  Sent: Tuesday, April 23, 2013 1:15:01 AM
  Subject: [Engine-devel] rewrite the engine-notifier service wrapper
  
  Hello,
  
   After finishing the cleanup of engine-service to use downstream
   services and be portable to other distributions (upstart, openrc),
   we can use the same infrastructure for the engine-notifier script.
   
   It is very similar to engine-service in operation (running Java),
   except that it is written in bash instead of Python, has no systemd
   support, and has its own configuration.
  
   What I suggest is to rewrite the notifier service in Python and
   reuse the engine-service code.
   
   Also, drop /etc/ovirt-engine/notifier/notifier.conf in favor of
   variables within /etc/ovirt-engine/engine.conf[.d/*]; this will
   provide a single point of configuration for all services.
  
  And of course proper systemd support and development environment
  support.
  
  Any comments? thoughts?
  
  Alon
 
 IMHO - +1 , the sooner the better - start with that and move to all
 the other bash scripts of the tools ;)

+1 
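
As a sketch of the single-point-of-configuration idea, assuming simple KEY=value files and a hypothetical NOTIFIER_MAIL_SERVER variable (the paths come from the proposal, the parsing details do not):

```python
# Values in /etc/ovirt-engine/engine.conf can be overridden by any file
# in /etc/ovirt-engine/engine.conf.d/*.conf, read in lexical order.
import glob
import os
import shlex

def load_config(base="/etc/ovirt-engine/engine.conf"):
    config = {}
    for path in [base] + sorted(glob.glob(base + ".d/*.conf")):
        if not os.path.exists(path):
            continue
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                key, _, value = line.partition("=")
                # later files win; shlex drops shell-style quoting
                config[key.strip()] = " ".join(shlex.split(value))
    return config

conf = load_config()
print(conf.get("NOTIFIER_MAIL_SERVER", "<unset>"))  # hypothetical variable
```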


 
  ___
  Engine-devel mailing list
  Engine-devel@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/engine-devel
  
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] [vdsm] CPU Overcommit Feature

2012-12-20 Thread Simon Grinberg


- Original Message -
 From: Doron Fediuck dfedi...@redhat.com
 To: Dan Kenigsberg dan...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org, vdsm-de...@fedorahosted.org
 Sent: Thursday, December 20, 2012 2:14:45 PM
 Subject: Re: [Engine-devel] [vdsm]  CPU Overcommit Feature
 
 
 
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Itamar Heim ih...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org,
  vdsm-de...@fedorahosted.org
  Sent: Thursday, December 20, 2012 11:55:10 AM
  Subject: Re: [Engine-devel] [vdsm]  CPU Overcommit Feature
  
  On Thu, Dec 20, 2012 at 10:23:55AM +0200, Itamar Heim wrote:
   On 12/20/2012 09:43 AM, Dan Kenigsberg wrote:
   On Wed, Dec 19, 2012 at 09:53:15AM -0500, Doron Fediuck wrote:
   
   - Original Message -
   From: Dan Kenigsberg dan...@redhat.com
   To: Greg Padgett gpadg...@redhat.com
   Cc: engine-devel engine-devel@ovirt.org,
   vdsm-de...@fedorahosted.org
   Sent: Wednesday, December 19, 2012 3:59:11 PM
   Subject: Re: [Engine-devel] CPU Overcommit Feature
   
   On Mon, Dec 17, 2012 at 09:37:57AM -0500, Greg Padgett wrote:
   Hi,
   
   I've been working on a feature to allow CPU Overcommitment of
   hosts
   in a cluster.  This first stage allows the engine to consider
   host
   cpu threads as cores for the purposes of VM resource
   allocation.
   
   This wiki page has further details, your comments are
   welcome!
   http://www.ovirt.org/Features/cpu_overcommit
   
   I've commented about the vdsm/engine API on
   http://gerrit.ovirt.org/#/c/10144/ but it is probably better
   to
   reiterate it here.
   
   The suggested API is tightly coupled with an ugly hack we
   pushed
   to
   vdsm
   in order not to solve the issue properly on the first strike.
   
    If we did not have report_host_threads_as_cores, I think we'd have a
    simpler API reporting only cpuThreads and cpuCores, with no funny
    boolean flags.
    
    Let us strive toward that position as much as we can.
   
   How about asking whoever used report_host_threads_as_cores to
   unset
   it
   once they install Engine 3.2 ? I think that these are very few
   people,
   that would not mind this very much.
   
   If this is impossible, I'd add a cpuCores2, always reporting
   the
   true
   number, to be used by new Engines. We may even report it only
   on
   the
   very few cases of report_host_threads_as_cores being set.
   
   Dan.
   
   Hi Dan,
   Thanks for the review.
   
    I agree simply reporting cores and threads would be the right
    solution.
    However, when you have hyperthreading turned off you get
    cores=threads. This is the same situation you have when
    hyperthreading is turned on and someone used the vdsm
    configuration of reporting threads as cores.
    
    So the engine won't know the real status of the host.
   
    This is not surprising, as report_host_threads_as_cores means, in
    blunt English, "lie to Engine about the number of cores". The newly
    suggested flag says "don't believe what I said in cpuCores, since
    I'm lying". The next thing we'd have is another flag saying that
    the former flag was a lie, and cpuCores is actually trustworthy.
    
    Instead of dancing this dance, I suggest we stop lying.
   
    report_host_threads_as_cores was a hack to assist older Engine
    versions. Engine users that needed it had to set it out-of-band on
    their hosts. Now if they upgrade their Engine, they can -- as
    easily -- reset that value.
   
   If they forget, nothing devastating happens beyond Engine
   assuming
   that
   hyperthreading is off.
   
   Please consider this suggestion. I find it the simplest for all
   involved
   parties.
   
    The only problem is that the new vdsm doesn't know which engine may
    be using it. If the engine would say getVdsCaps engineVersion=3.2,
    then vdsm could know the engine no longer needs lying to and ignore
    the flag, re-using the same field.
  
  Note that I do not suggest to drop report_host_threads_as_cores
  now.
   I am suggesting to keep on lying even to a new Engine.
  If someone thinks that lying is bad, he should reset
  report_host_threads_as_cores.
  
   It seems to me that the suggested API is being coerced by a very
   limited use case that is not going to be really harmed by a
   straightforward API.
   
   Dan.
  
  Dan.
 
 Dan,
 Did some further checking, and we can go with it;
 So basically now we add cpuThreads. Additionally, if the
 report_host_threads_as_cores
 is turned on, an additional cpuCoresReal will be reported.

No need for that.
There is only one problematic state where VDSM cheats and reports cores == 
threads. This does not happen by mistake; the user specifically asked for it.

The same condition also happens if hyperthreading is really off or the 
processor does not have threads, so it's a state we need to handle in any case.

So just report threads=off whenever cores == threads and treat it as such. If 
the user is unhappy then he 
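
A minimal sketch of this inference on the engine side, assuming the cpuCores/cpuThreads caps keys mentioned in the thread (the function name is illustrative):

```python
# Whenever VDSM reports cpuCores == cpuThreads -- whether hyperthreading
# is really off, the CPU simply has no threads, or the old
# report_host_threads_as_cores hack is set -- treat threads as off.
def effective_topology(caps):
    cores = int(caps["cpuCores"])
    threads = int(caps.get("cpuThreads", cores))
    return {
        "cores": cores,
        "threads": threads,
        "hyperthreading": threads > cores,  # False whenever cores == threads
    }

print(effective_topology({"cpuCores": "8", "cpuThreads": "16"}))
# {'cores': 8, 'threads': 16, 'hyperthreading': True}
print(effective_topology({"cpuCores": "8", "cpuThreads": "8"}))
# {'cores': 8, 'threads': 8, 'hyperthreading': False}
```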

Re: [Engine-devel] Report vNic Implementation Details

2012-11-23 Thread Simon Grinberg


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel@ovirt.org
 Sent: Thursday, November 22, 2012 11:18:48 PM
 Subject: Re: [Engine-devel] Report  vNic Implementation Details
 
 On 11/22/2012 08:40 PM, Simon Grinberg wrote:
  Back to the original question:
 
  What is most inconvenient for many users is:
  1. The name we provide while creating the VNIC differs from the one
  in the guest
  2. No correlation between IP and NIC
 
  The current page covers this, but indeed, as raised here, it does not
  cover what happens if this information is not easy to retrieve due
  to non-straightforward configurations.
 
  What I suggest is,
 
  For the short term:
  - Agent to report the MACs, IPs and interface names if they can be
  found; engine to match these to the existing vNICs and show
  Name in Engine | Name in guest | MAC | IP, etc., like the current
  feature page, but...

  - If a match could not be found then just report Name in Engine: N/A
  and then the rest, and keep it in a dynamic-only table.
  This is useful for NICs created by hooks, advanced topologies that we
  can't support ATM, etc.

  *The above does require the agent to at least match MAC to IP.
 
 
  Long term: the agent to report a topology the same way vdsm does
  (use the same code, at least for Linux guests?) and present it
  similarly to what we present in the host tab. In most cases this
  will collapse to the short term.

  MTU is good to have in either of the two if we can get it.
 
  More?
 
 I don't think the guest agent IP information should be correlated to
 the vNic engine information at the REST API level.
 The vm (and vnic) API provides the authoritative configuration
 information of the guest as defined in the engine.
 I don't think we should 'taint' it with untrusted data from the
 guest.
 It would make sense to place there IPs configured/allocated by the
 engine when we deal with IP allocation, though.
 
Agree that there should be a clear distinction between guest data and 
configuration. 
Nevertheless, you need to report the guest data and correlate it where you can 
to the configuration.

Where and how you present this correlation is a different matter and should be 
discussed.
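
A minimal sketch of the MAC-based matching proposed above, assuming simplified dict shapes for both the engine vNIC list and the guest-agent report (neither is the actual wire format):

```python
def correlate(engine_vnics, guest_ifaces):
    # Index engine-defined vNICs by lower-cased MAC for matching
    by_mac = {v["mac"].lower(): v for v in engine_vnics}
    rows = []
    for iface in guest_ifaces:
        vnic = by_mac.get(iface["mac"].lower())
        rows.append({
            # unmatched interfaces (hooks, advanced topology) get N/A
            "name_in_engine": vnic["name"] if vnic else "N/A",
            "name_in_guest": iface["name"],
            "mac": iface["mac"],
            "ips": iface.get("ips", []),
        })
    return rows

engine_vnics = [{"name": "nic1", "mac": "00:1A:4A:00:00:01"}]
guest_ifaces = [
    {"name": "eth0", "mac": "00:1a:4a:00:00:01", "ips": ["192.0.2.10"]},
    {"name": "vlan100", "mac": "00:1a:4a:00:00:99", "ips": ["192.0.2.20"]},
]
for row in correlate(engine_vnics, guest_ifaces):
    print(row)
```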

 
 i.e., the guest info element in the REST API provides good separation
 between engine-level data and guest agent data.
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Report vNic Implementation Details

2012-11-23 Thread Simon Grinberg


- Original Message -
 From: Livnat Peer lp...@redhat.com
 To: Dan Kenigsberg dan...@redhat.com
 Cc: Itamar Heim ih...@redhat.com, Simon Grinberg si...@redhat.com, 
 engine-devel@ovirt.org
 Sent: Friday, November 23, 2012 9:02:09 AM
 Subject: Re: [Engine-devel] Report  vNic Implementation Details
 
 On 22/11/12 13:59, Dan Kenigsberg wrote:
  On Thu, Nov 22, 2012 at 09:55:43AM +0200, Livnat Peer wrote:
  On 22/11/12 00:02, Itamar Heim wrote:
  On 11/21/2012 10:53 PM, Andrew Cathrow wrote:
 
 
  - Original Message -
  From: Moti Asayag masa...@redhat.com
  To: engine-devel@ovirt.org
  Sent: Wednesday, November 21, 2012 12:36:58 PM
  Subject: [Engine-devel] Report  vNic Implementation Details
 
  Hi all,
 
  This is a proposal for reporting the vNic implementation
  details as
  reported by the guest agent per each vNic.
 
  Please review the wiki and add your comments.
 
 
  While we're making the change is there anything else we'd want
  to
  report - MTU, MAC (since a user might try to override), etc ?
 
  http://wiki.ovirt.org/wiki/Feature/ReportingVnicImplementation
 
   IIRC, the fact that IP addresses appear under guest info in the API
   and not under the vnic was a modeling decision.
   For example, what if the guest is using a bridge, or a bond (yes,
   sounds unlikely, but the point is it may be incorrect to assume an
   IP-vnic relation).
  michael - do you remember the details of that discussion?
 
  I'd love to know what drove this modeling decision.
  The use case above is not a typical use case.
  We know we won't be able to present the guest internal network
  configuration accurately in some scenarios but if we cover 90% of
  the
  use cases correctly I think we should not let perfect be the enemy
  of
  (very) good ;)
  
  We do not support this yet, but once we have nested virtualization,
  it
  won't be that odd to have a bridge or bond in the guest. I know
  that we
  already have users with in-guest vlan devices.
  
 
 The fact that it's not odd does not mean it is common..., which was
 the point I was trying to make.
 I agree that we should be able to accommodate such info, not sure
 that
 it is required at this point.


At this point what you need to make sure is that if it does exist you do not 
crash due to unexpected data. 
Don't forget that using a hook you may create nested virtualization today. 
 
 
 
   I think that the API should accommodate these configurations - even
   if we do not report them right away. The guest agent already reports
   (some of) the information:
  
 
 Which API are you referring to? If you are talking about the
 VDSM-Engine API, we do not change it, only use what is already
 reported by the GA. I don't think we should change the API for future
 support...
 
  The Guest Agent reports the vNic details:
 
  IP addresses (both IPv4 and IPv6).
  vNic internal name
  
  Actually, the guest agent reports all in-guest network device.
  vNics (and bonds
  and vlans) included.
 
 True, but AFAIU we won't be able to build the networking topology of
 the guest from this report. For example, if the guest agent reports a
 bridge, it does not say which interface it is connected to, etc.

Actually in most cases it does, since by default the bridge will get the 
interface's MAC. Well, not quite accurate: the bridge gets the lowest MAC of all 
the devices connected to it; however, there was a fix in libvirt some time back 
to make sure the external interface's MAC will be the one the bridge gets, by 
giving the tap/tun devices connected to the bridge a MAC address starting with 
'FE'.

 
 
 
   The information retrieved by the Guest Agent should be reported to
   the ovirt-engine and viewed by its client.
   Today only the IPv4 addresses are reported to the user, kept at the
   VM level. This feature will maintain the information at the vNic level.
  
   I think we should report the information on the vNic level only
   when we can. If we have a vlan device which we do not know how to
   tie to a specific vNic, we'd better report its IP address on the
   VM level.
 
 If I understand you correctly, you are suggesting to keep the IP
 address property also on the VM level, and for devices with a
 reported IP address which the engine cannot correlate to a vNIC,
 hold it in this VM-level property.
 
 My concern is that in the UI we are currently displaying the VM IP
 in the main grid, and the IP per vNIC on the network sub-tab.
 If we choose to hold the IP addresses the engine does not correlate
 on the VM level, they become the more visible addresses for the
 users, which I am not sure is what we want.
 
 What I suggest is to add a property to the VM that says network
 devices and hold the GA report as a 'blob'. We can display this info
 in the API on the VM level, and in the UI maybe display it on the
 general sub-tab or add a dialog on the network sub-tab.

I think you should display it as correlated as possible - since I agree with 
you that the common case is an IP set directly

Re: [Engine-devel] Network Wiring

2012-11-18 Thread Simon Grinberg


- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Alona Kaplan alkap...@redhat.com
 Cc: Simon Grinberg si...@redhat.com, engine-devel@ovirt.org
 Sent: Sunday, November 18, 2012 1:12:05 PM
 Subject: Re: [Engine-devel] Network Wiring
 
 On Sun, Nov 18, 2012 at 05:01:30AM -0500, Alona Kaplan wrote:
  
 
 snip
 
purge a network while it is connected to VMs: Link-Down on all
nics and connect to the empty/no network. (Yes I know, it's not
part of the feature, but you know someone will ask for it soon :))
   
   It should not be hard to implement; In
   http://wiki.ovirt.org/wiki/Feature/DetailedNetworkWiring#New_API
   I
   suggest passing
   no 'network' element to mean connected to nothing.
  
   I don't really understand why changing the link state to down is
   not enough. What is the added value of connecting an unwired nic
   to a none network?
 
  It is not a big difference, but the semantics of having no network
  are clear: you can run the VM if networks are missing, and you can
  remove a network when the VM is running. When a VM is associated to
  a network but its link state is down, the right semantics are more
  vague.

Indeed :)

Plus consider the use case of hooks providing the networking - they still need 
the engine to assign the MAC and type (like the Cisco hook).
If you force a logical network on each nic, it means you have to invent a dummy 
LN, define it as non-required, and set the global config to allow VMs to run on 
hosts that do not have this network - too messy. Though it is sometimes 
desirable, since the network name may be a hint to the hook, there are cases 
where it's not.  

- No LN means this VM can run on any host, with the implicit assumption that 
someone else takes care of connecting it to the proper network (see the sketch 
below).

Note that in this case you may still want the nic with link state up, and to be 
allowed to bring the link up/down, so it is definitely not the same as 
'unwired/link down but connected to an arbitrary network'.
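
A minimal sketch of that scheduling semantics, with illustrative names and shapes:

```python
# A vNIC attached to no logical network places no constraint on host
# selection; a vNIC attached to network "X" requires the host to have "X".
def can_run_on(host_networks, vm_nics):
    for nic in vm_nics:
        network = nic.get("network")  # None == connected to nothing
        if network is not None and network not in host_networks:
            return False
    return True

print(can_run_on({"ovirtmgmt"}, [{"network": None}]))         # True: a hook wires it up
print(can_run_on({"ovirtmgmt"}, [{"network": "ciscovlan"}]))  # False: network missing
```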



 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] SPICE IP override

2012-11-15 Thread Simon Grinberg


- Original Message -
 From: Yaniv Kaul yk...@redhat.com
 To: Itamar Heim ih...@redhat.com
 Cc: Simon Grinberg si...@redhat.com, engine-devel@ovirt.org
 Sent: Thursday, November 15, 2012 10:07:02 AM
 Subject: Re: [Engine-devel] SPICE IP override
 
 On 11/15/2012 09:35 AM, Itamar Heim wrote:
  On 11/15/2012 09:06 AM, Yaniv Kaul wrote:
  - Original Message -
  On 11/15/2012 08:33 AM, Yaniv Kaul wrote:
  On 11/15/2012 06:10 AM, Itamar Heim wrote:
  On 11/11/2012 11:45 AM, Yaniv Kaul wrote:
  On 11/07/2012 10:52 AM, Simon Grinberg wrote:
 
  - Original Message -
  From: Michal Skrivanekmichal.skriva...@redhat.com
  To:engine-devel@ovirt.org
  Sent: Tuesday, November 6, 2012 10:39:58 PM
  Subject: [Engine-devel] SPICE IP override
 
  Hi all,
  On behalf of Tomas - please check out the proposal for
  enhancing our
  SPICE integration to allow to return a custom IP/FQDN
  instead
  of the
  host IP address.
  http://wiki.ovirt.org/wiki/Features/Display_Address_Override
  All comments are welcome...
  My 2 cents,
 
  This works under the assumption that all the users are either
  outside of the organization or inside.
  But think of some of the following scenarios based on a
  topology
  where users in the main office are inside the corporate
  network
  while users on remote offices / WAN are on a detached
  different
  network on the other side of the NAT / public firewall :
 
  With current 'per host override' proposal:
  1. Admin from the main office won't be able to access the VM
  console
  2. No Mixed environment, meaning that you have to have
  designated
  clusters for remote offices users vs main office users -
  otherwise
  connectivity to the console is determined based on scheduler
  decision, or may break by live migration.
  3. Based on #2, If I'm a user travelling between offices I'll
  have
  to ask the admin to turn off my VM and move it to internal
  cluster
  before I can reconnect
 
   My suggestion is to convert this to 'alternative' IP/FQDN
  sending
  the
  spice client both internal fqdn/ip and the alternative. The
  spice
  client should detect which is available of the two and
  auto-connect.
 
  This requires enhancement of the spice client, but still
  solves
  all
  the issues raised above (actually it solves about 90% of the
  use
  cases I've heard about in the past).
 
  Another alternative is for the engine to 'guess' or 'elect'
  which to
  use, alternative or main, based on the IP of the client -
  meaning
  admin provides the client ranges for providing internal host
  address
   vs alternative - but this is more complicated compared to the
   previous suggestion
 
  Thoughts?
 
   Let's not re-invent the wheel. This problem has been pondered
  before and
  solved[1], for all scenarios:
  internal clients connecting to internal resources;
  internal clients connecting to external resources, without the
  need for
  any intermediate assistance
  external clients connecting to internal resources, with the
  need
  for
  intermediate assistance.
  VPN clients connecting to internal resources, with or without
  an
  internal IP.
 
  Any other solution you'll try to come up with will bring you
  back
  to
  this standard, well known (along with its faults) method.
 
  The browser client will use PAC to determine how to connect to
  the hosts
  and will deliver this to the client. It's also a good path
  towards real
  proxy support for Spice.
  (Regardless, we still need to deal with the Spice protocol's
  migration
  command of course).
 
 
  [1] http://en.wikipedia.org/wiki/Proxy_auto-config
 
  so instead of a spice proxy fqdn field, we should just allow
  user
  to
  specify a pac file which resides under something like
  /etc/ovirt/engine/pac...?
 
  I would actually encourage the customers to use their own
  corporate
  PAC
  and add the information to it.
 
  so you are suggesting that there is no need at all to deal with
  proxy
  definition/configuration at ovirt engine/user portal level?
 
  I expect the admin/user portal to send the result of the PAC
  processing to the Spice client.
  I don't think the Spice client should execute the PAC (it's a
  Javascript...).

And live migration? 
I don't completely understand how you can avoid executing the PAC file if the 
destination host is provided by Qemu (client_migrate_info), unless I'm confusing 
it with something else and it is the web client that delivers this info on 
migration.

P.S., 
If it is Qemu, then I don't see the current feature page accounting for that - 
i.e., the hosts should also be informed of this override IP. 
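
A minimal sketch of the 'elect by client address' alternative quoted above, with assumed admin-defined ranges (nothing here is from the feature page):

```python
# The engine picks the internal or the alternative display address
# based on which admin-provided ranges the client falls in.
import ipaddress

INTERNAL_RANGES = [ipaddress.ip_network("10.0.0.0/8"),
                   ipaddress.ip_network("192.168.0.0/16")]

def elect_display_address(client_ip, host_address, alternative_address):
    client = ipaddress.ip_address(client_ip)
    if any(client in net for net in INTERNAL_RANGES):
        return host_address          # internal client: connect to the host directly
    return alternative_address       # external client: NAT/firewall address

print(elect_display_address("10.1.2.3", "host1.internal", "console.example.com"))
print(elect_display_address("198.51.100.7", "host1.internal", "console.example.com"))
```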



 
  ok, so no engine, but just client side support for PAC?
 
 Exactly.
 And of course, Spice protocol changes, without which all this effort
 is
 nice, but incomplete.
 Y.
 
 
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel

Re: [Engine-devel] SPICE IP override

2012-11-15 Thread Simon Grinberg


- Original Message -
 From: Yaniv Kaul yk...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel@ovirt.org, Itamar Heim ih...@redhat.com
 Sent: Thursday, November 15, 2012 10:42:19 AM
 Subject: Re: [Engine-devel] SPICE IP override
 
 On 11/15/2012 10:33 AM, Simon Grinberg wrote:
 
  - Original Message -
  From: Yaniv Kaul yk...@redhat.com
  To: Itamar Heim ih...@redhat.com
  Cc: Simon Grinberg si...@redhat.com, engine-devel@ovirt.org
  Sent: Thursday, November 15, 2012 10:07:02 AM
  Subject: Re: [Engine-devel] SPICE IP override
 
  On 11/15/2012 09:35 AM, Itamar Heim wrote:
  On 11/15/2012 09:06 AM, Yaniv Kaul wrote:
  - Original Message -
  On 11/15/2012 08:33 AM, Yaniv Kaul wrote:
  On 11/15/2012 06:10 AM, Itamar Heim wrote:
  On 11/11/2012 11:45 AM, Yaniv Kaul wrote:
  On 11/07/2012 10:52 AM, Simon Grinberg wrote:
  - Original Message -
  From: Michal Skrivanekmichal.skriva...@redhat.com
  To:engine-devel@ovirt.org
  Sent: Tuesday, November 6, 2012 10:39:58 PM
  Subject: [Engine-devel] SPICE IP override
 
  Hi all,
  On behalf of Tomas - please check out the proposal for
  enhancing our
  SPICE integration to allow to return a custom IP/FQDN
  instead
  of the
  host IP address.
  http://wiki.ovirt.org/wiki/Features/Display_Address_Override
  All comments are welcome...
  My 2 cents,
 
  This works under the assumption that all the users are
  either
  outside of the organization or inside.
  But think of some of the following scenarios based on a
  topology
  where users in the main office are inside the corporate
  network
  while users on remote offices / WAN are on a detached
  different
  network on the other side of the NAT / public firewall :
 
  With current 'per host override' proposal:
  1. Admin from the main office won't be able to access the
  VM
  console
  2. No Mixed environment, meaning that you have to have
  designated
  clusters for remote offices users vs main office users -
  otherwise
  connectivity to the console is determined based on
  scheduler
  decision, or may break by live migration.
  3. Based on #2, If I'm a user travelling between offices
  I'll
  have
  to ask the admin to turn off my VM and move it to internal
  cluster
  before I can reconnect
 
   My suggestion is to convert this to 'alternative' IP/FQDN
  sending
  the
  spice client both internal fqdn/ip and the alternative. The
  spice
  client should detect which is available of the two and
  auto-connect.
 
  This requires enhancement of the spice client, but still
  solves
  all
  the issues raised above (actually it solves about 90% of
  the
  use
  cases I've heard about in the past).
 
  Another alternative is for the engine to 'guess' or 'elect'
  which to
  use, alternative or main, based on the IP of the client -
  meaning
  admin provides the client ranges for providing internal
  host
  address
   vs alternative - but this is more complicated compared to the
   previous suggestion
 
  Thoughts?
   Let's not re-invent the wheel. This problem has been pondered
  before and
  solved[1], for all scenarios:
  internal clients connecting to internal resources;
  internal clients connecting to external resources, without
  the
  need for
  any intermediate assistance
  external clients connecting to internal resources, with the
  need
  for
  intermediate assistance.
  VPN clients connecting to internal resources, with or
  without
  an
  internal IP.
 
  Any other solution you'll try to come up with will bring you
  back
  to
  this standard, well known (along with its faults) method.
 
  The browser client will use PAC to determine how to connect
  to
  the hosts
  and will deliver this to the client. It's also a good path
  towards real
  proxy support for Spice.
  (Regardless, we still need to deal with the Spice protocol's
  migration
  command of course).
 
 
  [1] http://en.wikipedia.org/wiki/Proxy_auto-config
  so instead of a spice proxy fqdn field, we should just allow
  user
  to
  specify a pac file which resides under something like
  /etc/ovirt/engine/pac...?
  I would actually encourage the customers to use their own
  corporate
  PAC
  and add the information to it.
  so you are suggesting that there is no need at all to deal with
  proxy
  definition/configuration at ovirt engine/user portal level?
  I expect the admin/user portal to send the result of the PAC
  processing to the Spice client.
  I don't think the Spice client should execute the PAC (it's a
  Javascript...).
  And live migration?
 
 Read my email: And of course, Spice protocol changes
 
   I don't completely understand how you can avoid executing the PAC
   file if the destination host is provided by Qemu
   (client_migrate_info), unless I'm confusing it with something else
   and it is the web client that delivers this info on migration.
 
 I'm not against executing the PAC. It just requires a javascript
 engine, which is a bit of an overkill for the Spice client to start working

Re: [Engine-devel] Network Wiring

2012-11-15 Thread Simon Grinberg


- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Yaniv Kaul yk...@redhat.com
 Cc: Simon Grinberg si...@redhat.com, engine-devel@ovirt.org
 Sent: Thursday, November 15, 2012 3:36:02 PM
 Subject: Re: [Engine-devel] Network Wiring
 
 On Thu, Nov 15, 2012 at 02:43:23PM +0200, Yaniv Kaul wrote:
  On 11/15/2012 02:33 PM, Itamar Heim wrote:
  On 11/15/2012 02:29 PM, Dan Kenigsberg wrote:
  On Wed, Nov 14, 2012 at 02:12:07PM -0500, Simon Grinberg wrote:
  
  The intention is to use the new API
  VDSM.libvirtVm.updateVmInteface
  for
  performing the network rewire in a single command.
  
  What does it do? I could not find updateVmInteface in vdsm git.
  Where is this defined?
  
  It's critical.
  
   - You can change the interface directly by moving the VM from
   one network to another
   - You can do that but toggle the link state so the VM will be
   aware.
   
   Which of these two?
   If you do only the first then it's not the common use case. In
   most cases you must toggle the link status to the VM.
   This will cause:
   1. Speed negotiation + ARP request that also informs the
   switches about the change
   2. In case it's DHCP (which is most likely the case for guests)
   it will trigger a new DHCP request.
  
  If you don't baaad things will happen :)
  
   I think that bad things are going to happen anyway. By bad
   things, I mean stuff that requires guest intervention.
  
   The guest may be moved from one subnet into another one, maybe on
   a different VLAN or physical LAN. We cannot expect that the
   applications running on it will be happy about these changes. A
   similar case appears if we rewire the network while the VM is down
   (or hibernated). When the VM is restarted, it is going to use
   mismatching IP addresses.
  
   You are right that it may make sense to request a new IP address
   after the rewiring; however, a little test I just did on my desktop
   showed that dhclient does not initiate a new request just because
   the carrier dropped for a few seconds. So we should try harder if
   we want to refresh DHCP after rewiring: I think that it would be
   cool to have a guest agent verb that does it.
  
  Blame your OS if it doesn't do media sensing at all (or correctly).
 
 Media is sensed:
 
 Nov 15 14:15:46 kernel: [3379655.213183] e1000e: eth0 NIC Link is
 Down
 Nov 15 14:15:52 kernel: [3379661.265946] e1000e: eth0 NIC Link is Up
 100 Mbps Full Duplex, Flow Control: None
 Nov 15 14:15:52 kernel: [3379661.265951] e1000e :00:19.0: eth0:
 10/100 speed: disabling TSO

If you go through a link state toggle then I think it should be enough.

You are right, lots of things can go wrong when you move a VM from network to 
network, but at least we did inform the OS, and the switches in the new network 
will now know there is a new MAC; the rest will have to wait until (and if) 
we support guest IP setting. 
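
A minimal sketch of what such a guest agent verb could do, assuming dhclient is the guest's DHCP client (the verb name and mechanism are assumptions):

```python
# After a rewire, drop the current DHCP lease and request a fresh one,
# since a short carrier drop alone does not make dhclient renew.
import subprocess

def refresh_dhcp_lease(iface):
    subprocess.call(["dhclient", "-r", iface])  # release the current lease
    subprocess.call(["dhclient", iface])        # request a new one

# e.g. refresh_dhcp_lease("eth0") after the vNIC link comes back up
```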


 
 but dhcp does not cancel its leases due to this. And I would not
 expect it to:
 my dhcp server could change without carrier loss (e.g. vlan change on
 my
 nearest switch, or dhcp reconfiguration)
 
  
  
  shouldn't this simulate a link disconnect/connect event to the OS?
  
  I sincerely hope it does.
 
 Itamar, what is this? Setting link state to down does just that.
 
 I was suggesting a guest agent verb that clears all pending dhcp
 leases after
 the guest is connected again.
 
 Dan.
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Network Wiring

2012-11-13 Thread Simon Grinberg
From the summary:
"...It supports the following actions without unplugging the Vnic, and it 
maintains the device address of the Vnic." 

But in the dialogue section:
If the Vnic is plugged there should be a message on top of the dialog: "Please 
notice, changing Type or MAC will cause unplugging and plugging the Vnic". 

Looking at the detailed design, any change indeed goes through plug/unplug.
Please correct me if I got the above wrong. 

To support a real live rewire (= move a card from one network to another), the 
sequence for a wired, plugged card should be: 
- Unwire
- Change network 
- Rewire 

I would argue that we should actually force the user to perform these steps, 
but we could also do it in one go.

Any other state may change network freely.

To change name - it's just DB, so any state goes 

To change type or MAC address (= property), you must go through unplug 
regardless of the wired state. 
So:
- Unplug 
- Change property 
- Plug 

Again, we should probably ask the user to do these 3 steps so he'll know what 
he is doing, but we can do it for him with a proper warning (a sketch of both 
flows follows).
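
A minimal sketch of both flows against a hypothetical API wrapper; 'api' and every method on it are assumptions, not a real client:

```python
def rewire(api, vm, nic, new_network):
    api.unwire(vm, nic)                    # link down / detach from network
    api.set_network(vm, nic, new_network)  # move to the new logical network
    api.wire(vm, nic)                      # link up on the new network

def change_property(api, vm, nic, **props):
    api.unplug(vm, nic)           # hot-unplug, regardless of wired state
    api.update(vm, nic, **props)  # e.g. type='virtio' or a new MAC
    api.plug(vm, nic)             # hot-plug the modified device
```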

I also wonder whether we have to drop the PCI address in the persisted table in 
this case - losing the PCI location is needless and will cause the NIC to move 
to another ethX number in the guest. On the other hand, changing the MAC may 
break network scripts anyhow - so I don't have a strong argument for keeping it. 


Another issue:
If the nic is there to be used by a hook, then you probably want to allow a 
'none' network.
This may also be useful when allowing a network to be purged while it is 
connected to VMs: unwire on all nics and connect to the none network.


Overall, looking great, and I like the wired vs. unplugged distinction that 
emulates real behavior. 

Regards, 
Simon


 







- Original Message -
 From: Alona Kaplan alkap...@redhat.com
 To: engine-devel@ovirt.org, Simon Grinberg sgrin...@redhat.com, 
 rhevm-qe-netw...@redhat.com
 Sent: Tuesday, November 13, 2012 4:46:52 PM
 Subject: Network Wiring
 
 Hi all,
 
 Please review the wiki and add your comments.
 
 http://wiki.ovirt.org/wiki/Feature/NetworkWiring
 
 
 Thanks,
 Alona.
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection algorithm for Power Management operations

2012-11-12 Thread Simon Grinberg


- Original Message -
 From: Simon Grinberg si...@redhat.com
 To: Itamar Heim ih...@redhat.com
 Cc: Eli Mesika emes...@redhat.com, engine-devel engine-devel@ovirt.org
 Sent: Sunday, November 11, 2012 11:22:29 PM
 Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection 
 algorithm for Power Management operations
 
 
 
 - Original Message -
  From: Itamar Heim ih...@redhat.com
  To: Simon Grinberg si...@redhat.com
  Cc: Eli Mesika emes...@redhat.com, engine-devel
  engine-devel@ovirt.org
  Sent: Sunday, November 11, 2012 10:52:53 PM
  Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy
  selection algorithm for Power Management operations
  
  On 11/11/2012 05:45 PM, Simon Grinberg wrote:
    3. The directly selected hosts come to accommodate a few use cases:
    -3.1- Switch failure - if the fence network for hosts in a
    DC/Cluster has to be split between two switches, then you will
    prefer to use hosts that are for sure on the other switch
    -3.2- Legacy clusters merged into larger clusters due to a move
    to oVirt; the infrastructure may still fit the legacy
    connectivity - lots of firewall rules or direct connections
    that limit access to fencing devices to specific hosts.
    -3.3- Clustered applications within the VMs; you only want
    your peers to be allowed to fence you. This is limited to VMs
    running on a specific host group (affinity management that we
    don't have yet, but we can lock VMs to specific hosts).
  
  that's VMs asking to fence (stop) other VMs, not hosts. why are you
  mixing it with host fencing?
 
 What happens if the host on which the peer VM runs is down?
 You need to fence the host. I was thinking about preventing a race
 where the VM asks to fence its peer while the engine fences the
 host. In this case the fence of the peer VM may be reported as
 failed (no option to send stop to the VM) while the host status is
 still unknown, or worse, may succeed after the host rebooted,
 killing the VM again after it restarted.
 
 To prevent that you request to fence the host instead of fencing the
 VM. But you are right that it does not matter which host will do
 the fencing; I was thinking of the old style infra.
 
 
  
  
    Note that the above was not meant to accommodate any random
    server, just hosts in the setup, hosts that already run VDSM.
    Meaning that maybe instead of the FQDN we can just use the
    hostname - so the UUID will be registered in the tables.
    I don't see why it's so complex: if a host provided is removed
    from the system, you either get a canDoAction to remove it from
    the configuration as well (or a warning that this will remove the
    host from the fencing configuration). Your only risk is if all of
    them are removed; then you need to set the exclamation mark
    again ('power management is not configured for this host')
  
   Because this was a text field, and I don't like code having to know
   to check some obscure field and parse it for dependencies.
   Relations between entities are supposed to be via DB referential
   integrity if possible (we had some locking issues with these).
   I prefer that implementation start with the simpler use case, not
   covering these complexities.
  
  
    - 5. Thinking about it more, though the chain is more generic and
    flexible, I would like to return to my original suggestion of
    having just a primary and a secondary proxy:
  Primary Proxy 1 = Drop down - Any cluster host / Any DC
  host / RHEV Manager / Named host out of the list of all the
  hosts
  Secondary Proxy 2 = Drop down - Any cluster host / Any DC
  host / RHEV Manager / Named host out of the list of all the
  hosts
  I think it is simpler as far as a user is concerned, and it's
  simpler for us to implement: two fields, a single value in
  each.
  And I don't believe we really need more; even in the simple
  case of cluster-only hosts, for clusters larger than 4 hosts,
  by the time you get to the secondary it may be too late.
  Secondary is more critical for the 'Named host' option or
  small clusters.
  
   This is a bit simpler. But as for specifying a specific host:
   - now you are asking to check two fields (proxy1, proxy2)
   - probably to also alert if all these hosts moved to maintenance,
   or when moving them to another cluster, etc.
   - it doesn't cover the use case of splitting between switches,
   sub-clusters, etc., as you are limited to two hosts, which may have
   been moved to maintenance/shutdown for power saving, etc. (since
   you are using a static host assignment, rather than an implied
   group of hosts (cluster, dc, engine))
 
  Are you offering to allow defining host groups? :) I'll be happy if
  you do; we really need that for some cases of the affinity feature.
  Especially

Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection algorithm for Power Management operations

2012-11-12 Thread Simon Grinberg


- Original Message -
 From: Dan Kenigsberg dan...@redhat.com
 To: Eli Mesika emes...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Monday, November 12, 2012 11:47:14 AM
 Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection 
 algorithm for Power Management operations
 
 On Sun, Nov 11, 2012 at 06:18:53AM -0500, Eli Mesika wrote:
  
  
  - Original Message -
   From: Eli Mesika emes...@redhat.com
   To: Itamar Heim ih...@redhat.com
   Cc: engine-devel engine-devel@ovirt.org
   Sent: Friday, November 9, 2012 12:06:05 PM
   Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy
   selection algorithm for Power Management operations
   
   
   
   - Original Message -
From: Itamar Heim ih...@redhat.com
To: Eli Mesika emes...@redhat.com
Cc: engine-devel engine-devel@ovirt.org, Michael
Pasternak
mpast...@redhat.com, Simon Grinberg
sgrin...@redhat.com, Dan Kenigsberg dan...@redhat.com
Sent: Friday, November 9, 2012 12:02:37 PM
Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving
proxy
selection algorithm for Power Management operations

On 11/09/2012 10:52 AM, Eli Mesika wrote:

 
FenceWrapper
 
  I understand danken suggested going this way, rather than
  another instance of vdsm.
  Is vdsm only calling these scripts today, with all logic in
  the engine, or does vdsm have any logic in wrapping these
  scripts? (Not a blocker to doing FenceWrapper, just worth
  extracting that logic from vdsm to such a script, then using
  it in both. I hope the answer is 'no logic'...)
 vdsm has some logic that maps between the call passed to it
 from
 engine and the actual parameters generated for the script.
 AFAIK, this logic only builds the correct arguments for the
 command according to the agent type


can we extract it to an external wrapper?
I'd hate to fix bugs/changes twice for this.
   
   I'll check it with danken on SUN
  
   Well, looked at it a bit: the VDSM code is in the fenceNode function
   in API.py.
   What I think is that we can extract the fenceNode implementation into
   a separate fence.py file and call it from API.py.
   Then we can use one of the following in Java to call the method
   from fence.py:
   1) Jython
   2) org.python.util.PythonInterpreter
  
  See
  http://stackoverflow.com/questions/8898765/calling-python-in-java
  
  danken, what do you think ?
 
  BTW, no one has promised that the fence script is implemented in
  Python:
 
 $ file `which fence_ipmilan `
 /usr/sbin/fence_ipmilan: ELF 64-bit LSB executable...

PS, if it's really that complex I don't see a big issue dropping engine fence. 
It is mostly useful when you have a small number of hosts, or a collection of 
small clusters where the admin limits the hosts that are allowed to fence to 
cluster hosts, with the 'engine' as a failsafe. 

*It does, however, solve at the same time the issue that we (still) can't 
'Approve a host has been rebooted' if it's the last host in the DC, since the 
path goes through the fencing logic. 


 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection algorithm for Power Management operations

2012-11-12 Thread Simon Grinberg


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Monday, November 12, 2012 12:21:57 PM
 Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection 
 algorithm for Power Management operations
 
 On 11/12/2012 12:01 PM, Simon Grinberg wrote:
 
 
  - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Eli Mesika emes...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Monday, November 12, 2012 11:47:14 AM
  Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy
  selection algorithm for Power Management operations
 
  On Sun, Nov 11, 2012 at 06:18:53AM -0500, Eli Mesika wrote:
 
 
  - Original Message -
  From: Eli Mesika emes...@redhat.com
  To: Itamar Heim ih...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Friday, November 9, 2012 12:06:05 PM
  Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy
  selection algorithm for Power Management operations
 
 
 
  - Original Message -
  From: Itamar Heim ih...@redhat.com
  To: Eli Mesika emes...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org, Michael
  Pasternak
  mpast...@redhat.com, Simon Grinberg
  sgrin...@redhat.com, Dan Kenigsberg dan...@redhat.com
  Sent: Friday, November 9, 2012 12:02:37 PM
  Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving
  proxy
  selection algorithm for Power Management operations
 
  On 11/09/2012 10:52 AM, Eli Mesika wrote:
 
 
 FenceWrapper
 
   I understand danken suggested going this way, rather than
   another instance of vdsm.
   Is vdsm only calling these scripts today, with all logic in
   the engine, or does vdsm have any logic in wrapping these
   scripts? (Not a blocker to doing FenceWrapper, just worth
   extracting that logic from vdsm to such a script, then using
   it in both. I hope the answer is 'no logic'...)
  vdsm has some logic that maps between the call passed to it
  from
  engine and the actual parameters generated for the script.
  AFAIK, this logic only builds the correct arguments for the
  command according to the agent type
 
 
  can we extract it to an external wrapper?
  I'd hate to fix bugs/changes twice for this.
 
  I'll check it with danken on SUN
 
   Well, looked at it a bit: the VDSM code is in the fenceNode function
   in API.py.
   What I think is that we can extract the fenceNode implementation
   into a separate fence.py file and call it from API.py.
   Then we can use one of the following in Java to call the method
   from fence.py:
   1) Jython
   2) org.python.util.PythonInterpreter
 
  See
  http://stackoverflow.com/questions/8898765/calling-python-in-java
 
  danken, what do you think ?
 
   BTW, no one has promised that the fence script is implemented in
   Python:
 
  $ file `which fence_ipmilan `
  /usr/sbin/fence_ipmilan: ELF 64-bit LSB executable...
 
   PS, if it's really that complex I don't see a big issue
   dropping engine fence.
   It is mostly useful when you have a small number of hosts, or a
   collection of small clusters where the admin limits the hosts that
   are allowed to fence to cluster hosts, with the 'engine' as a
   failsafe.
  
   *It does, however, solve at the same time the issue that we (still)
   can't 'Approve a host has been rebooted' if it's the last host in
   the DC, since the path goes through the fencing logic.
 
  Exactly, we need to allow engine fence to solve the single/last-host
  special case.

Indeed 

 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection algorithm for Power Management operations

2012-11-11 Thread Simon Grinberg
Trying to answer open questions + provide feedback

-1. We need to change the term Power Management; we only do fencing here, so why 
not call it by name. It may be confused with real power-mode management, which 
we'll probably do via VDSM and not via OOB management, especially as some of the 
devices external to the host can only do fencing anyhow. 

I'll change the requirement page to reflect that, + I'll split the proxy from 
dual-card support, as the design should. 


-2. The default value should be 'cluster, dc, engine', not the other way around. 
Actually, most users I've been talking to will just use 'cluster', since this 
matches the classic cluster definition where a host could only be fenced by 
another host in the cluster. 

I'll change the requirements to reflect that. 

-3. The directly selected hosts come to accommodate a few use cases:
   -3.1- Switch failure - if the fence network for hosts in a DC/Cluster has 
to be split between two switches, then you will prefer to use hosts that are 
for sure on the other switch
   -3.2- Legacy clusters merged into larger clusters due to a move to oVirt; 
the infrastructure may still fit the legacy connectivity - lots of firewall 
rules or direct connections that limit access to fencing devices to specific 
hosts. 
   -3.3- Clustered applications within the VMs; you only want your peers to be 
allowed to fence you. This is limited to VMs running on a specific host group 
(affinity management that we don't have yet, but we can lock VMs to specific 
hosts).

   Note that the above was not meant to accommodate any random server, just 
hosts in the setup, hosts that already run VDSM.
   Meaning that maybe instead of the FQDN we can just use the hostname - so the 
UUID will be registered in the tables.
   I don't see why it's so complex: if a host provided is removed from the 
system, you either get a canDoAction to remove it from the configuration as 
well (or a warning that this will remove the host from the fencing 
configuration). Your only risk is if all of them are removed; then you need to 
set the exclamation mark again ('power management is not configured for this 
host')

- 4. The assumption that every host will have all elements is wrong. In the 
requirement page I've given combinations where it isn't. 
   As said, there are use cases where you don't want to diverge from hosts in 
the same cluster. The reason is that if it is the last host in the cluster 
(assuming clustered VMs are running on this host), you may actually prefer 
that it won't be fenced. Similar to -3.3-.

- 5. Thinking about it more, though the chain is more generic and flexible, I 
would like to return to my original suggestion of having just a primary and a 
secondary proxy:
 Primary Proxy 1 = Drop down - Any cluster host / Any DC host / RHEV 
Manager / Named host out of the list of all the hosts 
 Secondary Proxy 2 = Drop down - Any cluster host / Any DC host / RHEV 
Manager / Named host out of the list of all the hosts
 I think it is simpler as far as a user is concerned, and it's simpler for 
us to implement: two fields, a single value in each. And I don't believe we 
really need more; even in the simple case of cluster-only hosts, for clusters 
larger than 4 hosts, by the time you get to the secondary it may be too late. 
Secondary is more critical for the 'Named host' option or small clusters. 

I'll look at it some more later today, but sending now to get as much feedback 
as possible.

Regards,
Simon
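
A minimal sketch of the chain-walking logic debated in this thread, with assumed host-dict shapes and statuses:

```python
# Walk the configured sources in order ('cluster', 'dc', 'engine', or a
# named host) until one yields an Up host other than the fencing target.
def select_fence_proxy(chain, target, hosts, engine_can_fence=True):
    for source in chain:
        if source == "engine":
            if engine_can_fence:
                return "engine"   # the engine itself runs the fence agent
            continue
        if source == "cluster":
            candidates = [h for h in hosts if h["cluster"] == target["cluster"]]
        elif source == "dc":
            candidates = [h for h in hosts if h["dc"] == target["dc"]]
        else:                     # a directly named host
            candidates = [h for h in hosts if h["name"] == source]
        for h in candidates:
            if h["name"] != target["name"] and h["status"] == "Up":
                return h["name"]
    return None  # no proxy available: fencing is not possible

hosts = [
    {"name": "host1", "cluster": "c1", "dc": "dc1", "status": "Up"},
    {"name": "host2", "cluster": "c1", "dc": "dc1", "status": "Maintenance"},
    {"name": "host3", "cluster": "c2", "dc": "dc1", "status": "Up"},
]
print(select_fence_proxy(["cluster", "dc", "engine"], hosts[1], hosts))  # host1
```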
 

- Original Message -
 From: Eli Mesika emes...@redhat.com
 To: Dan Kenigsberg dan...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Sunday, November 11, 2012 1:18:53 PM
 Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection 
 algorithm for Power Management operations
 
 
 
 - Original Message -
  From: Eli Mesika emes...@redhat.com
  To: Itamar Heim ih...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Friday, November 9, 2012 12:06:05 PM
  Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy
  selection algorithm for Power Management operations
  
  
  
  - Original Message -
   From: Itamar Heim ih...@redhat.com
   To: Eli Mesika emes...@redhat.com
   Cc: engine-devel engine-devel@ovirt.org, Michael Pasternak
   mpast...@redhat.com, Simon Grinberg
   sgrin...@redhat.com, Dan Kenigsberg dan...@redhat.com
   Sent: Friday, November 9, 2012 12:02:37 PM
   Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy
   selection algorithm for Power Management operations
   
   On 11/09/2012 10:52 AM, Eli Mesika wrote:
   

   FenceWrapper

 I understand danken suggested going this way, rather than
 another instance of vdsm.
 Is vdsm only calling these scripts today, with all logic in
 the engine, or does vdsm have any logic in wrapping these
 scripts? (Not a blocker to doing FenceWrapper, just worth
 extracting that logic from vdsm to such a
 script, then using it in both. I

Re: [Engine-devel] SPICE IP override

2012-11-11 Thread Simon Grinberg

- Original Message - 

 From: Yaniv Kaul yk...@redhat.com
 To: Simon Grinberg si...@redhat.com, Michal Skrivanek
 michal.skriva...@redhat.com
 Cc: engine-devel@ovirt.org
 Sent: Sunday, November 11, 2012 11:45:34 AM
 Subject: Re: [Engine-devel] SPICE IP override

 On 11/07/2012 10:52 AM, Simon Grinberg wrote:

  - Original Message -
 
   From: Michal Skrivanek michal.skriva...@redhat.com To:
   engine-devel@ovirt.org Sent: Tuesday, November 6, 2012 10:39:58
   PM
  
 
   Subject: [Engine-devel] SPICE IP override
  
 

   Hi all,
  
 
   On behalf of Tomas - please check out the proposal for enhancing
   our
  
 
   SPICE integration to allow to return a custom IP/FQDN instead of
   the
  
 
   host IP address.
   http://wiki.ovirt.org/wiki/Features/Display_Address_Override All
   comments are welcome...
  
 
  My 2 cents,
 

  This works under the assumption that all the users are either
  outside
  of the organization or inside.
 
  But think of some of the following scenarios based on a topology
  where users in the main office are inside the corporate network
  while users on remote offices / WAN are on a detached different
  network on the other side of the NAT / public firewall :
 

  With current 'per host override' proposal:
 
  1. Admin from the main office won't be able to access the VM
  console
 
  2. No Mixed environment, meaning that you have to have designated
  clusters for remote offices users vs main office users - otherwise
  connectivity to the console is determined based on scheduler
  decision, or may break by live migration.
 
  3. Based on #2, If I'm a user travelling between offices I'll have
  to
  ask the admin to turn off my VM and move it to internal cluster
  before I can reconnect
 

   My suggestion is to convert this to 'alternative' IP/FQDN sending
  the
  spice client both internal fqdn/ip and the alternative. The spice
  client should detect which is available of the two and
  auto-connect.
 

  This requires enhancement of the spice client, but still solves all
  the issues raised above (actually it solves about 90% of the use
  cases I've heard about in the past).
 

  Another alternative is for the engine to 'guess' or 'elect' which to
  use, alternative or main, based on the IP of the client - meaning the
  admin provides the client ranges for providing the internal host
  address vs the alternative - but this is more complicated compared to
  the previous suggestion
 

  Thoughts?
 
 Let's not re-invent the wheel. This problem has been pondered before
 and solved[1], for all scenarios:
 internal clients connecting to internal resources;
 internal clients connecting to external resources, without the need
 for any intermediate assistance;
 external clients connecting to internal resources, with the need for
 intermediate assistance;
 VPN clients connecting to internal resources, with or without an
 internal IP.

 Any other solution you'll try to come up with will bring you back to
 this standard, well known (along with its faults) method.

 The browser client will use PAC to determine how to connect to the
 hosts and will deliver this to the client. It's also a good path
 towards real proxy support for Spice.
 (Regardless, we still need to deal with the Spice protocol's
 migration command of course).

 [1] http://en.wikipedia.org/wiki/Proxy_auto-config

If I'm reading this correctly, the engine should prepare a URL with a PAC script 
for the Spice client, and the spice client should connect based on the info 
in the PAC file. It still means that we need to provide both internal and 
external connection options (just through the PAC file). 

Nice. 
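
For illustration, a minimal sketch of how an engine could generate such a PAC 
script. The function name, network range, and proxy address are assumptions 
made up for this example; PAC itself is JavaScript, produced here as text from 
Python.

    # Hypothetical sketch: emit a PAC script that lets internal clients
    # connect directly and sends everyone else through a public proxy.
    # All names and ranges below are made up for this example.
    def build_display_pac(internal_net="10.0.0.0",
                          internal_mask="255.255.0.0",
                          external_proxy="spice-proxy.example.com:3128"):
        # The engine would serve this text at a URL the client fetches
        # before connecting (the standard PAC delivery model).
        pac = (
            'function FindProxyForURL(url, host) {\n'
            '    // Clients inside the corporate network connect directly.\n'
            '    if (isInNet(myIpAddress(), "%s", "%s"))\n'
            '        return "DIRECT";\n'
            '    // Everyone else goes through the public-facing proxy.\n'
            '    return "PROXY %s";\n'
            '}\n'
        ) % (internal_net, internal_mask, external_proxy)
        return pac

    if __name__ == "__main__":
        print(build_display_pac())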





  Simon.
 
   Thanks,
  
 
   michal
  
 

   ___
  
 
   Engine-devel mailing list Engine-devel@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/engine-devel
  
 
  ___
 
  Engine-devel mailing list Engine-devel@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/engine-devel
 
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection algorithm for Power Management operations

2012-11-11 Thread Simon Grinberg


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: Eli Mesika emes...@redhat.com, engine-devel engine-devel@ovirt.org
 Sent: Sunday, November 11, 2012 10:52:53 PM
 Subject: Re: [Engine-devel] [Design for 3.2 RFE] Improving proxy selection 
 algorithm for Power Management operations
 
 On 11/11/2012 05:45 PM, Simon Grinberg wrote:
  3. The directly selected hosts comes to accommodate two use cases:
  -3.1- Switch failure - if the fence network for hosts in a
  DC/Cluster have to split between two switches. Then you will
  prefer to use hosts that are for sure on the other switch
  -3.2- Legacy clusters merged into larger clusters due to a move
  to oVirt - the infrastructure may still fit the legacy
  connectivity - lots of firewall rules or direct connections
  that limit access to fencing devices to specific hosts.
  -3.3- Clustered applications within the VMs, you only want your
  peers to be allowed to fence you. This is limited for VMs
  running on specific host group (affinity management that we
  don't have yet, but we can lock VMs to specific hosts).
 
 that's VMs asking to fence (stop) other VMs, not hosts. why are you
 mixing it with host fencing?

What happens if the host on which the peer VM runs is down?  
You need to fence the host. I was thinking about preventing a race where the VM 
asks to fence its peer while the engine fences the host. In this case the fence 
of the peer VM may be reported as failed (no option to send stop to the VM) 
while the host status is still unknown, or worse, it may succeed after the host 
rebooted, killing the VM again after it restarted.

To prevent that you request to fence the host instead of fencing the VM. But 
you are right that it does not matter which host will do the fencing; I was 
thinking of the old style infra.


 
 
  Note that the above was not meant to accommodate any random
  server, just hosts in the setup, hosts that already run VDSM.
  Meaning that maybe instead of the FQDN we can just use hostname
  - so the UUID will be registered in the tables
   I don't see why it's so complex, if a host provided is removed from
  the system you either get a canDoAction to remove it from the
  configuration as well (or a warning that this will remove the
  host from the fencing configuration). Your only risk if all of
  them are removed, then you need to set the exclamation mark
  again (power management is not configured for this host)
 
 because this was a text field, and i don't like code having to know
 to
 check some obscure field and parse it for dependencies.
 relations between entities are supposed to be via db referential
 integrity if possible (we had some locking issues with these).
  i prefer the implementation to start with the simpler use case, not
  covering these complexities.
 
 
  - 5. Thinking about it more, Though the chain is more generic and
  flexible, I would like to return to my original suggestion, of
  having just primary and secondary proxy:
Primary Proxy 1 = Drop down - Any cluster host / Any DC
host / RHEV Manager / Named host out of the list of all the
hosts
Secondary Proxy 2 = Drop down - Any cluster host / Any DC
host / RHEV Manager / Named host out of the list of all the
hosts
  I think it is simpler as far as a user is concerned, and it's
  simpler for us to implement: two fields, a single value in each.
  And I don't believe we really need more; even in the simple
  case of cluster-only hosts, for clusters larger than 4 hosts,
  by the time you get to the secondary it may be too late.
  Secondary is more critical for the 'Named host' option or
  small clusters.
 
 this is a bit simpler. but as for specifying a specific host:
 - now you are asking to check two fields (proxy1, proxy2)
 - probably to also alert if all these hosts moved to maint, or when
moving them to another cluster, etc.
 - it doesn't cover the use case of splitting between switches, sub
 clusters, etc. as you are limited to two hosts, which may have been
 moved to maint/shutdown for power saving, etc. (since you are using a
 static host assignment, rather than an implied group of hosts
 (cluster, dc, engine))

Are you proposing to allow defining host groups? :) I'll be happy if you do, 
we really need that for some cases of the affinity feature, especially those 
involving multi-site. 

Host group == a set of named hosts within the same cluster 
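
To make the two-field scheme concrete, here is a minimal sketch of resolving 
the primary/secondary proxy preference to an actual host. The Host model, the 
is-up check, and the preference tokens are assumptions for this example only, 
not the engine's data model.

    # Hypothetical sketch: walk an ordered proxy preference list
    # ("cluster", "dc", "engine", or a named host) and return the first
    # usable proxy for fencing the target host.
    class Host(object):
        def __init__(self, name, cluster, dc, up=True):
            self.name, self.cluster, self.dc, self.up = name, cluster, dc, up

    def select_fence_proxy(target, all_hosts, preferences):
        for pref in preferences:  # e.g. ["cluster", "dc"] or ["node7", "engine"]
            if pref == "cluster":
                candidates = [h for h in all_hosts
                              if h.cluster == target.cluster and h is not target]
            elif pref == "dc":
                candidates = [h for h in all_hosts
                              if h.dc == target.dc and h is not target]
            elif pref == "engine":
                return "engine"  # the engine itself issues the fence command
            else:  # a specifically named host
                candidates = [h for h in all_hosts if h.name == pref]
            for host in candidates:
                if host.up:
                    return host.name
        return None  # nothing available; fencing cannot proceed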


 
 
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] SPICE IP override

2012-11-07 Thread Simon Grinberg


- Original Message -
 From: Michal Skrivanek michal.skriva...@redhat.com
 To: engine-devel@ovirt.org
 Sent: Tuesday, November 6, 2012 10:39:58 PM
 Subject: [Engine-devel] SPICE IP override
 
 Hi all,
 On behalf of Tomas - please check out the proposal for enhancing our
 SPICE integration to allow to return a custom IP/FQDN instead of the
 host IP address.
 http://wiki.ovirt.org/wiki/Features/Display_Address_Override
 All comments are welcome...

My 2 cents, 

This works under the assumption that all the users are either outside of the 
organization or inside. 
But think of some of the following scenarios, based on a topology where users in 
the main office are inside the corporate network while users in remote offices 
/ on the WAN are on a detached, different network on the other side of the NAT 
/ public firewall:

With current 'per host override' proposal: 
1. Admin from the main office won't be able to access the VM console
2. No mixed environment, meaning that you have to have designated clusters for 
remote office users vs main office users - otherwise connectivity to the 
console is determined by a scheduler decision, or may break on live 
migration.
3. Based on #2, if I'm a user travelling between offices I'll have to ask the 
admin to turn off my VM and move it to the internal cluster before I can 
reconnect.

My suggestion is to convert this to an 'alternative' IP/FQDN, sending the spice 
client both the internal fqdn/ip and the alternative. The spice client should 
detect which of the two is available and auto-connect. 

This requires enhancement of the spice client, but still solves all the issues 
raised above (actually it solves about 90% of the use cases I've heard about in 
the past). 
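
A minimal sketch of the client-side detection this implies: probe the internal 
address first, then the alternative, and connect to whichever answers. The port 
and timeout are arbitrary assumptions, and a real SPICE client would do this 
natively rather than in a helper script.

    # Hypothetical sketch: pick the first reachable display address.
    import socket

    def pick_display_address(internal, alternative, port=5900, timeout=2.0):
        for address in (internal, alternative):
            try:
                # A plain TCP probe; the first address that accepts wins.
                with socket.create_connection((address, port), timeout=timeout):
                    return address
            except OSError:
                continue
        raise RuntimeError("neither display address is reachable")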

Another alternative is for the engine to 'guess' or 'elect' which to use, 
alternative or main, based on the IP of the client - meaning the admin provides 
the client ranges for providing the internal host address vs the alternative - 
but this is more complicated compared to the previous suggestion.  

Thoughts?
 

Simon.

 
 Thanks,
 michal
 
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] SPICE IP override

2012-11-07 Thread Simon Grinberg


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel@ovirt.org
 Sent: Wednesday, November 7, 2012 10:46:24 AM
 Subject: Re: [Engine-devel] SPICE IP override
 
 On 11/07/2012 09:52 AM, Simon Grinberg wrote:
 
 
  - Original Message -
  From: Michal Skrivanek michal.skriva...@redhat.com
  To: engine-devel@ovirt.org
  Sent: Tuesday, November 6, 2012 10:39:58 PM
  Subject: [Engine-devel] SPICE IP override
 
  Hi all,
  On behalf of Tomas - please check out the proposal for enhancing
  our
  SPICE integration to allow to return a custom IP/FQDN instead of
  the
  host IP address.
  http://wiki.ovirt.org/wiki/Features/Display_Address_Override
  All comments are welcome...
 
  My 2 cents,
 
  This works under the assumption that all the users are either
  outside of the organization or inside.
  But think of some of the following scenarios based on a topology
  where users in the main office are inside the corporate network
  while users on remote offices / WAN are on a detached different
  network on the other side of the NAT / public firewall :
 
  With current 'per host override' proposal:
  1. Admin from the main office won't be able to access the VM
  console
  2. No Mixed environment, meaning that you have to have designated
  clusters for remote offices users vs main office users - otherwise
  connectivity to the console is determined based on scheduler
  decision, or may break by live migration.
  3. Based on #2, If I'm a user travelling between offices I'll have
  to ask the admin to turn off my VM and move it to internal cluster
  before I can reconnect
 
  My suggestion is to convert this to an 'alternative' IP/FQDN, sending
  the spice client both the internal fqdn/ip and the alternative. The
  spice client should detect which of the two is available and
  auto-connect.
 
  This requires enhancement of the spice client, but still solves all
  the issues raised above (actually it solves about 90% of the use
  cases I've heard about in the past).
 
  Another alternative is for the engine to 'guess' or 'elect' which
  to use, alternative or main, based on the IP of the client -
  meaning the admin provides the client ranges for providing the
  internal host address vs the alternative - but this is more
  complicated compared to the previous suggestion
 
  Thoughts?
 
 i think this is over complicating things.
 I'd expect someone that wants to handle internal and external
 differently to use DNS, and resolve the DNS differently for external
 and
 internal clients.

That will not necessarily solve the issue - what about WAN users from home? The 
DNS is not under their control - they need redirection to the public-facing 
NAT servers. 

+ At least currently (and this must change, unless you accept the proposal I've 
raised) the engine sends the fqdn if the display network is on the engine 
management network, and the IP on any other selected Display-Network.  

No DNS will help you in this case, so you still need an alternate FQDN. 

 
 (note this is different from specifying the spice proxy address at
 cluster level, which is something you want user to choose if they
 want
 to enable or not per their location)
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] alias in disk instead of name

2012-10-24 Thread Simon Grinberg


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: Ewoud Kohl van Wijngaarden ew...@kohlvanwijngaarden.nl, 
 engine-devel engine-devel@ovirt.org
 Sent: Wednesday, October 24, 2012 6:01:53 AM
 Subject: Re: [Engine-devel] alias in disk instead of name
 
 On 10/23/2012 08:07 PM, Simon Grinberg wrote:
 
 
  - Original Message -
  From: Charlie medieval...@gmail.com
  To: Simon Grinberg si...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Tuesday, October 23, 2012 7:53:10 PM
  Subject: Re: [Engine-devel] alias in disk instead of name
 
  Why not something like this?
 
  (pseudocode, using dot for string concatenation):
 
  $name_prefix = "vmdrive"
  $name = get_last_used($name_prefix)
  $already_in_use = $TRUE
 
  while $already_in_use {
      prompt "Name of thing? [$name]", $name
      if name_used($name) {
          while name_used($name) {
              increment_number($name)
          }
      } else {
          $already_in_use = $FALSE
      }
  }
 
  do_whatever_you_do_with($name)
 
  store_last_used($name)
 
  end
 
 
  The increment_number() routine checks to see if the last character
  is
  numeric, and if it is, increments the leftmost contiguous numeric
  portion of the string.  Otherwise it appends the number zero.
 
  This does not allow everyone to get any name they want, but you
  can't
  ever satisfy that demand.  It supplies reasonable defaults very
  quickly and it allows people who want really descriptive names to
  try
  as many as they like.
 
  The code's built the funny way it is so that you can corrupt the
  db
  that holds the last_used numbers or interrupt the process halfway
  through and it still works, only slower, and it should tend to fix
  its
  own db on the fly when possible.
 
  There's no provision for simultaneous creation, but that wouldn't
  be
  horribly hard to add, just put a lock on the resource holding
  last_used numbers.
 
  You'd want to reimplement in the most efficient and readable way
  for
  your programming language of choice.
 
  Did that make any sense?  I did it off the top of my head, so it
  could
  be terribly lame when I look at it tomorrow ;).
 
  Please don't look at it as a pure programming item, nor as a single
  user in a small data center - in this respect you are right.
  Let's go to a huge organization or to the cloud.
 
  In a multi-tenant environment this lock means that every time a user
  tries to change a disk name - all the others are stuck.
  Don't forget we are discussing thousands of VMs - I'd hate to have
  this kind of lock just to allow for unique disk names. This is one
  of the reasons you use a UUID to really identify the object in the
  DB, since it's supposed to guarantee uniqueness without the need to
  lock everything.
 
  And again - please look at this as an end user: why do I care that
  other users had decided they are going to use the same name as me?
  This is my human readable name and I want to choose whatever I
  like without considering other users. What is this self service
  portal worth if I can't name my VMs and Disks as I'd like to,
  oblivious to others?
 
  At the end of the day, you want oVirt to be useful and scalable - and
  not just code-wise correct.
 
 
 how about KISS?
 vm name is unique.
 disk name is unique in vm.
 treat disk name for search as vm.name + - + disk.name

Now we are getting somewhere since this is similar to my original proposal of 
adding vm/domain/other to the disk search criteria

But let me take your proposal a bit farther.
I think it's safe to assume / force that tenants don't share quotas, meaning a 
tenant may have multiple quotas but a quota may belong to a single tenant (and 
I know the term tenant is not well defined, but let's assume that under any 
definition, for this discussion, it may be collapsed to a collection of users 
and groups)

The problem is now reduced to keeping to scope boundaries.

Quota name is unique in the scope of a data center
VM name is unique in the scope of a quota (note that I intentionally don't say 
cluster)
Disk name is unique in the scope of a VM or the floating scope

Now to search is easy
For VMs - dc.quota.vm
For disks - dc.quota.vm.disk
Or For floating - dc.quota.floating.disk 
Shared disk may be accessed from any of the attached VMs

when Quota is off - you get the simple equivalent 
For VMs - dc.vm
For disks - dc.vm.disk
Or For floating - dc.floating.disk 
Shared disk may be accessed from any of the attached VMs

This is KISS, scalable, and I believe easy to understand by any user of oVirt.

And in order not to bother users with providing a unique name in the scope, we 
should always offer a name for the user to just click OK or modify - a similar 
(maybe even simpler) algorithm to what Charlie suggested.
The above is for:
1. New disk
2. Detach disk from the last VM, meaning it becomes floating; if the name is 
not unique, then suggest a free name based
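
A minimal sketch of such a suggestion routine, along the lines of Charlie's 
pseudocode; the is_name_used callback is assumed to check names only within 
the relevant scope (quota/VM/floating), which is an assumption of this example.

    # Hypothetical sketch: suggest the next free name in a scope by
    # incrementing a trailing number, or appending one if none exists.
    import re

    def suggest_name(base, is_name_used):
        match = re.search(r"(\d+)$", base)
        if match:
            prefix, number = base[:match.start()], int(match.group(1)) + 1
        else:
            prefix, number = base, 0
        candidate = base
        while is_name_used(candidate):
            candidate = "%s%d" % (prefix, number)
            number += 1
        return candidate

    # e.g. with {"quorum", "quorum0"} taken, suggest_name("quorum", ...)
    # offers "quorum1"; the user can accept it or type something else.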

Re: [Engine-devel] alias in disk instead of name

2012-10-23 Thread Simon Grinberg


- Original Message -
 From: Charlie medieval...@gmail.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Tuesday, October 23, 2012 7:53:10 PM
 Subject: Re: [Engine-devel] alias in disk instead of name
 
 Why not something like this?
 
 (pseudocode, using dot for string concatenation):
 
 $name_prefix = "vmdrive"
 $name = get_last_used($name_prefix)
 $already_in_use = $TRUE
 
 while $already_in_use {
     prompt "Name of thing? [$name]", $name
     if name_used($name) {
         while name_used($name) {
             increment_number($name)
         }
     } else {
         $already_in_use = $FALSE
     }
 }
 
 do_whatever_you_do_with($name)
 
 store_last_used($name)
 
 end
 
 
 The increment_number() routine checks to see if the last character is
 numeric, and if it is, increments the leftmost contiguous numeric
 portion of the string.  Otherwise it appends the number zero.
 
 This does not allow everyone to get any name they want, but you can't
 ever satisfy that demand.  It supplies reasonable defaults very
 quickly and it allows people who want really descriptive names to try
 as many as they like.
 
 The code's built the funny way it is so that you can corrupt the db
 that holds the last_used numbers or interrupt the process halfway
 through and it still works, only slower, and it should tend to fix
 its
 own db on the fly when possible.
 
 There's no provision for simultaneous creation, but that wouldn't be
 horribly hard to add, just put a lock on the resource holding
 last_used numbers.
 
 You'd want to reimplement in the most efficient and readable way for
 your programming language of choice.
 
 Did that make any sense?  I did it off the top of my head, so it
 could
 be terribly lame when I look at it tomorrow ;).

Please don't look at it as a pure programming item, nor as a single user in a 
small data center - in this respect you are right.
Let's go to a huge organization or to the cloud.

In a multi-tenant environment this lock means that every time a user tries to 
change a disk name - all the others are stuck.
Don't forget we are discussing thousands of VMs - I'd hate to have this kind 
of lock just to allow for unique disk names. This is one of the reasons you use 
a UUID to really identify the object in the DB, since it's supposed to 
guarantee uniqueness without the need to lock everything.

And again - please look at this as an end user: why do I care that other users 
had decided they are going to use the same name as me? This is my human 
readable name and I want to choose whatever I like without considering other 
users. What is this self service portal worth if I can't name my VMs and Disks 
as I'd like to, oblivious to others?

At the end of the day, you want oVirt to be useful and scalable - and not just 
code-wise correct.  


 
 --Charlie
 
 On Tue, Oct 23, 2012 at 1:10 PM, Simon Grinberg si...@redhat.com
 wrote:
 
 
  - Original Message -
  From: Charlie medieval...@gmail.com
  To: Simon Grinberg si...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Tuesday, October 23, 2012 6:51:35 PM
  Subject: Re: [Engine-devel] alias in disk instead of name
 
  OK, only because you asked...
 
  Provide default unique names, so that users can just press enter
  if
  names don't matter to them.  That way you obviate the entire
  argument;
  people who need special naming can have it, and everybody else has
  a
  single extra keypress or mouseclick at naming time, and searching
  works well enough.
 
  You can name the first one vmdrive0 and increment the numeric part
  each time a new drive is created.  Iterating until an unused name
  is
  found isn't so computationally expensive that anyone should weep,
  especially if you store the last used number and do an
  incrementing
  sanity check against it at naming time.
 
  Let's say the above solved all conflicts when coming to create a
  new disk; it does seem so.
 
  Let's say that import/export name conflicts can be solved in a
  reasonable way - for example forcing (somehow, and without
  bothering the user too much) a rename of the disk (how would you
  know if the conflicting name is auto-generated, so it can be
  replaced, or user-provided? you'll have to resort to a
  not-that-human-look-alike name)
 
  How does it solve the multi-tenancy use case?
  I'm tenant A, setting up a quorum disk for my two VMs - so I call
  this disk simply quorum.
  Now comes tenant B, he is also setting up a quorum disk, so he
  tries to call his disk quorum
 
  But no,
  He'll get a popup that this name is already taken - bad luck buddy.
  Now he needs to guess the next available name? Would you build an
  algorithm to suggest alternatives?
  Why should tenant B care in the first place that tenant A also
  wanted to call his disk 'quorum'?
 
  Same with the VM name - but that is a given for now, though I hope to
  convince that it should change in the future.
 
  What I'm trying to say here - infrastructure

Re: [Engine-devel] alias in disk instead of name

2012-10-22 Thread Simon Grinberg


- Original Message -
 From: Michael Pasternak mpast...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Monday, October 22, 2012 8:58:25 AM
 Subject: Re: [Engine-devel] alias in disk instead of name
 
 On 10/21/2012 06:13 PM, Simon Grinberg wrote:
  
  
  - Original Message -
  From: Michael Pasternak mpast...@redhat.com
  To: Simon Grinberg si...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Sunday, October 21, 2012 4:56:33 PM
  Subject: Re: [Engine-devel] alias in disk instead of name
 
  On 10/21/2012 04:15 PM, Simon Grinberg wrote:
 
 
  - Original Message -
  From: Michael Pasternak mpast...@redhat.com
  To: Simon Grinberg si...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Sunday, October 21, 2012 3:48:46 PM
  Subject: Re: [Engine-devel] alias in disk instead of name
 
  On 10/21/2012 03:36 PM, Simon Grinberg wrote:
 
  - Original Message -
  From: Michael Pasternak mpast...@redhat.com
  To: engine-devel engine-devel@ovirt.org
  Sent: Sunday, October 21, 2012 12:26:46 PM
  Subject: [Engine-devel] alias in disk instead of name
 
 
  The problem we caused by using alias in disk instead of name is
  a break of the search-by-name paradigm in the engine.search
  dialect; not sure why we do not want to force the disk name to
  be unique [1], but the lack of name in disk search does not look
  good in my view.
 
  thoughts?
 
  [1] can be easily achieved via appropriate can-do-action
  verification.
  Names by definition are not unique IDs,
 
  they do, otherwise /search wasn't effective; remember users are not
  exposed to the entity id, all entities are fetched by-name, so names
  have to be unique.
 
  Yap that is what we do with many entities, and it causes
  problems.
  But with disks it is multiplied
 
 
  thus it should not be enforced.
  What would be the auto-naming convention to ensure uniqueness
  with plain text?
 
  not sure i follow, i'll assume you refer here to empty name, -
  you
  cannot have an
  entity with no name.
 
  Well, you create a new disk - do we want to force the user to
  provide a unique disk name/alias for every disk he creates?
  This will drive the user crazy. This is important for the user
  only for floating/shared disks. For any other disk the user does
  not care if it's disk1, hd1, whatever. For these kinds of disks,
  it's just a VM disk and the user does not care if in all VMs this
  is called disk 1 - so why bother him?
 
  from the same reason we have unique
  clusters/datacenters/networks/templates/etc...
  
  Networks, DataCenters, Clusters, templates - are an order of
  magnitude fewer than the number of disks.
  And you name once and use many.
  
  As for VMs - well, it's my take that we should not force uniqueness
  either (you can warn though)
 
 you cannot have two vms with same name in same domain ...

I didn't say that in a domain you are allowed to have two guests with the same 
hostname; I've said the engine should allow for having duplicate VM names.
You are assuming that the VM name is identical to the guest hostname. 

For many this is the case, for others it's just an alias/name given in oVirt. 
Actually for the cloud, this is mostly going to be the case and worse: you are 
blocking different tenants from having the same VM name just because you are 
assuming that VM name = guest hostname.  

 
  
  For disks, well, the number is >= the number of VMs.
  Name by definition is mostly interesting in many cases only within
  the VM, and we don't even have a way to correlate the disk alias to
  the internal name in the VM. In many cases, as said before, a user
  won't care about the name/alias if it is always attached to the
  same VM. A user will rather look up the VM and then list its disks.
  So actually I'll be better off with vm1.disk1 vm2.disk2 than a
  unique name per disk (PS AFAIK this should be the default
  suggested name by the UI, but then changing the VM name will break
  this - yes, I know it's not possible ATM, but many people I know
  requested that).
  
  So I as a user will prefer that all the disks created from a
  template will have the same name as the original template, and
  then to be able to search by (vm=name, disk=name) so I can easily
  access the same disk for the VMs.
  
  On the other hand for others, as you've mentioned (especially for
  floating and shared disk) the name/alias may be of importance,
  uniqueness may be very important.
 
 any disk can become shared.

Then when you make it shared then bother to give it a meaningful alias

 
  
  All that I'm saying is that we can't force it; it's not that
  uniqueness is never desired.
 
 simon, you missing the point, i was talking about /search,
 search is available only at /api/disks (i.e shared disks,
 vm/template.disks is
 irrelevant to this discussion)

Nope I do not, but I think that our perspectives differ.
You are looking at it strictly as a design issue. You have a collection of 
entities and you

Re: [Engine-devel] alias in disk instead of name

2012-10-21 Thread Simon Grinberg


- Original Message -
 From: Michael Pasternak mpast...@redhat.com
 To: engine-devel engine-devel@ovirt.org
 Sent: Sunday, October 21, 2012 12:26:46 PM
 Subject: [Engine-devel] alias in disk instead of name
 
 
 The problem we caused by using alias in disk instead of name is a break
 of the search-by-name paradigm in the engine.search dialect; not sure
 why we do not want to force the disk name to be unique [1],
 but the lack of name in disk search does not look good in my view.
 
 thoughts?
 
 [1] can be easily achieved via appropriate can-do-action
 verification.

Names by definition are not unique IDs, thus uniqueness should not be enforced.
What would be the auto-naming convention to ensure uniqueness with plain text?
Would you change these on import/export?
And so on... 

You should treat the name as a tag/alias that, if you bothered to update it, 
probably means something to you; if not, then you don't care and will not 
search by it anyhow. So it's up to the user what to assign.

   

 
 background:
 ==
 
 On 10/15/2012 02:09 PM, Einav Cohen wrote:
  we didn't exactly rename name to alias; name is an automatic
  identifier of the disk, which is: Disk n, n=internal drive
  mapping; alias is a *user-defined*
 identifier.
 
  IIRC, once we understood that we need a user-defined identifier for
  the disk business entity, we indeed had in mind re-using name,
  however, the name field in other
 business entities is unique across the system, and we didn't want the
 disk user-defined identifier to be unique, so we preferred to not
 (re)use the term name and came up
 with alias, to avoid confusion.
 
  - Original Message -
  From: Michael Pasternak mpast...@redhat.com
  To: Einav Cohen eco...@redhat.com
  Sent: Monday, October 15, 2012 1:58:26 PM
  Subject: alias in disk instead of name
 
  hi,
 
  can you remind me why we renamed name to alias in disk?
 
  --
 
  Michael Pasternak
  RedHat, ENG-Virtualization R&D
 
 
 --
 
 Michael Pasternak
 RedHat, ENG-Virtualization R&D
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] alias in disk instead of name

2012-10-21 Thread Simon Grinberg


- Original Message -
 From: Michael Pasternak mpast...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Sunday, October 21, 2012 3:48:46 PM
 Subject: Re: [Engine-devel] alias in disk instead of name
 
 On 10/21/2012 03:36 PM, Simon Grinberg wrote:
  
  - Original Message -
   From: Michael Pasternak mpast...@redhat.com
   To: engine-devel engine-devel@ovirt.org
   Sent: Sunday, October 21, 2012 12:26:46 PM
   Subject: [Engine-devel] alias in disk instead of name
   
   
   The problem we caused by using alias in disk instead of name is
   a break of the search-by-name paradigm in the engine.search
   dialect; not sure why we do not want to force the disk name to
   be unique [1], but the lack of name in disk search does not
   look good in my view.
   
   thoughts?
   
   [1] can be easily achieved via appropriate can-do-action
   verification.
  Names by definition are not unique IDs,
 
 they do, otherwise /search wasn't effective; remember users are not
 exposed to the entity id, all entities are fetched by-name, so names
 have to be unique.

Yap that is what we do with many entities, and it causes problems.
But with disks it is multiplied  

 
  thus it should not be enforced.
  What would be the auto-naming convention to ensure uniqueness with
  plain text?
 
 not sure i follow, i'll assume you refer here to empty name, - you
 cannot have an
 entity with no name.

Well, you create a new disk - do we want to force the user to provide a unique 
disk name/alias for every disk he creates?
This will drive the user crazy. This is important for the user only for 
floating/shared disks. For any other disk the user does not care if it's disk1, 
hd1, whatever. For these kinds of disks, it's just a VM disk and the user does 
not care if in all VMs this is called disk 1 - so why bother him?  
   
 
  Would you change these on import/export?
 
 would you mind elaborating on this?

Yes, 

You are already facing a problem when importing VMs that already have the same 
name; now you are increasing the problem for disks that have the same alias. For 
a same name we force a clone if you want to import. Why force a clone just 
because of a disk alias (this implies collapsing snapshots ATM), or even bother 
the user with renaming disks whose names he does not care about, so he just 
gave disk 1, 2, 3 and so on?


 
 
  And so on...
  
  You should treat the name as a tag/alias that if you bothered to
  update, probably means something to you, if not then you don't
  care anyhow and will not search by it anyhow. So it's up to the
  user what to assign.
 
 simon, we do not have any /name today in disk, you see it in api
 for backward compatibility, actually it's emulated over /alias,

Correct, we don't have name but alias, especially because we wanted to emphasize 
it's not unique.
But I thought that aliases are treated like names, thus you have all the 
issues, and you are suggesting to make them unique; I'm trying to explain why 
we should avoid having them unique. 

I agree, it's not fun when you have 5 floating disks sharing the exact same 
alias - but maybe it should be the user's responsibility to decide whether he 
is going to allow for it or not?

 
 and the problem is when want to search by-name, it's not included
 in backend search.
 
  
 
  
 
 
 --
 
 Michael Pasternak
 RedHat, ENG-Virtualization R&D
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] alias in disk instead of name

2012-10-21 Thread Simon Grinberg


- Original Message -
 From: Michael Pasternak mpast...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org
 Sent: Sunday, October 21, 2012 4:56:33 PM
 Subject: Re: [Engine-devel] alias in disk instead of name
 
 On 10/21/2012 04:15 PM, Simon Grinberg wrote:
  
  
  - Original Message -
  From: Michael Pasternak mpast...@redhat.com
  To: Simon Grinberg si...@redhat.com
  Cc: engine-devel engine-devel@ovirt.org
  Sent: Sunday, October 21, 2012 3:48:46 PM
  Subject: Re: [Engine-devel] alias in disk instead of name
 
  On 10/21/2012 03:36 PM, Simon Grinberg wrote:
 
  - Original Message -
  From: Michael Pasternak mpast...@redhat.com
  To: engine-devel engine-devel@ovirt.org
  Sent: Sunday, October 21, 2012 12:26:46 PM
  Subject: [Engine-devel] alias in disk instead of name
 
 
   The problem we caused by using alias in disk instead of name is
   a break of the search-by-name paradigm in the engine.search
   dialect; not sure why we do not want to force the disk name to
   be unique [1], but the lack of name in disk search does not
   look good in my view.
 
  thoughts?
 
  [1] can be easily achieved via appropriate can-do-action
  verification.
  Names by definition are not unique IDs,
 
   they do, otherwise /search wasn't effective; remember users are
   not exposed to the entity id, all entities are fetched by-name,
   so names have to be unique.
  
  Yap that is what we do with many entities, and it causes problems.
  But with disks it is multiplied
  
 
  thus it should not be enforced.
   What would be the auto-naming convention to ensure uniqueness
   with plain text?
 
  not sure i follow, i'll assume you refer here to empty name, - you
  cannot have an
  entity with no name.
  
   Well, you create a new disk - do we want to force the user to
   provide a unique disk name/alias for every disk he creates?
   This will drive the user crazy. This is important for the user
   only for floating/shared disks. For any other disk the user does
   not care if it's disk1, hd1, whatever. For these kinds of disks,
   it's just a VM disk and the user does not care if in all VMs this
   is called disk 1 - so why bother him?
 
 from the same reason we have unique
 clusters/datacenters/networks/templates/etc...

Networks, DataCenters, Clusters, templates - are an order of magnitude fewer 
than the number of disks. 
And you name once and use many.

As for VMs - well, it's my take that we should not force uniqueness either 
(you can warn though).

For disks, well, the number is >= the number of VMs. 
Name by definition is mostly interesting in many cases only within the VM, and 
we don't even have a way to correlate the disk alias to the internal name in 
the VM. In many cases, as said before, a user won't care about the name/alias 
if it is always attached to the same VM. A user will rather look up the VM and 
then list its disks. So actually I'll be better off with vm1.disk1 vm2.disk2 
than a unique name per disk (PS AFAIK this should be the default suggested name 
by the UI, but then changing the VM name will break this - yes, I know it's not 
possible ATM, but many people I know requested that).

So I as a user will prefer that all the disks created from a template will 
have the same name as the original template, and then to be able to search by 
(vm=name, disk=name) so I can easily access the same disk for the VMs.

On the other hand for others, as you've mentioned (especially for floating and 
shared disk) the name/alias may be of importance, uniqueness may be very 
important. 

All that I'm saying is that we can't force it; it's not that uniqueness is 
never desired. 

 
 
 
  Would you change these on import/export?
 
  would you mind elaborating on this?
  
  Yes,
  
  You are already facing a problem when importing VMs that already
  have the same name, now you increasing the problem for disks that
  have the same alias. for same name we force clone if you want to
  import. Why for clone just because of a disk alias (this implies
  collapse snapshots ATM) or even bother the user with renaming
  disks that he does not care about the name so he just gave disk 1,
  2, 3 and so on?
 
  i see your point, but then we leave no option for the user to locate
  the disk, simply because he doesn't have a unique identifier,
  
  just imagine user A creating a disk and calling it X,
  then user B creating a disk and calling it X, they are on different
  domains etc., and now both want to use disk X,
  
  how can they figure out which one to pick? by SD, by size? agree
  this doesn't look good..., even more than that - someone may call
  this bad design...

This is why the search should accept more than just the name. 
Example (vm=name, disk=name/alias)
Example (dc=name, disk=name/alias)
Example (sd=name, disk=name/alias)
For floating/shared on the same SD/DC/VM I would suggest a warning if there is 
a duplicate in the system - not enforcement. 
There is a difference between best practice and enforcement up to the level

Re: [Engine-devel] Network related hooks in vdsm

2012-10-10 Thread Simon Grinberg


- Original Message -
 From: Igor Lvovsky ilvov...@redhat.com
 To: vdsm-devel vdsm-de...@lists.fedorahosted.org, engine-devel 
 engine-devel@ovirt.org
 Cc: Dan Yasny dya...@redhat.com
 Sent: Wednesday, October 10, 2012 4:47:28 PM
 Subject: [Engine-devel] Network related hooks in vdsm
 
 Hi everyone,
 As you know vdsm has hooks mechanism and we already support dozen of
 hooks for different needs.
 Now it's a network's time.
 We would like to get your comments regarding our proposition for
 network related hooks.
 
 In general we are planning to prepare a framework for future support of
 a bunch of network-related hooks.
 Some of them were already proposed by Itzik Brown [1] and Dan Yasny [2].
 
 Below you can find the additional hooks list that we propose:
 
 Note: In the first stage we can implement these hooks without any
 parameters, just to provide an entry point
  for simple hooks.
 
 Networks manipulation:
 - before_add_network(conf={}, customProperty={})
 - after_add_network(conf={}, customProperty={})
 - before_del_network(conf={}, customProperty={})
 - after_del_network(conf={}, customProperty={})
 - before_edit_network(conf={}, customProperty={})
 - after_edit_network(conf={}, customProperty={})
 - TBD

+ VM networking related points in addition to before/after vm start/stop
before_hotplug_nic(conf={}, customProperty={})
*after_hotplug_nic(conf={}, customProperty={})
before_hotunplug_nic(conf={}, customProperty={})
*after_hotunplug_nic(conf={}, customProperty={})

* Not sure about the use case for those two 

The above will require VM IDs and networks? Sorry, I did not look into the 
actual implementation; the above naming is more guesswork, but I think the 
meaning is clear. There may be other VM-networking-related entry points 
I've missed. 
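
For the parameter-less first stage described above, a hook would presumably 
just be an executable dropped into the matching hook directory. A minimal 
sketch, with the path following the usual vdsm hook layout; the 
abort-on-nonzero-exit behaviour is an assumption here, not a confirmed 
contract.

    #!/usr/bin/env python
    # Hypothetical sketch of /usr/libexec/vdsm/hooks/before_add_network/50_log
    # In the proposed first stage the hook receives no parameters; it is
    # only an entry point that vdsm runs before adding a network.
    import sys
    import syslog

    def main():
        syslog.openlog("before_add_network-hook")
        syslog.syslog("about to add a network on this host")
        # Returning non-zero would presumably tell vdsm to abort the
        # operation; the exact contract is still TBD in this thread.
        return 0

    if __name__ == "__main__":
        sys.exit(main())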

 
 Bondings manipulations:
 - before_add_bond(conf={}, customProperty={})
 - after_add_bond(conf={}, customProperty={})
 - before_del_bond(conf={}, customProperty={})
 - after_del_bond(conf={}, customProperty={})
 - before_edit_bond(conf={}, customProperty={})
 - after_edit_bond(conf={}, customProperty={})
 - TBD
 
 General purpose:
 - before_persist_network
 - after_persist_network
 
 
 Now we just need to figure out the use cases.
 
 Your input more than welcome...
 
 [1] http://gerrit.ovirt.org/#/c/7224/   - Adding hooks support for
 NIC hotplug
 [2] http://gerrit.ovirt.org/#/c/7547/   - Hook: Cisco VM-FEX support
 
 
 Regards,
 Igor Lvovsky
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Network related hooks in vdsm

2012-10-10 Thread Simon Grinberg


- Original Message -
 From: Igor Lvovsky ilvov...@redhat.com
 To: Simon Grinberg si...@redhat.com
 Cc: Dan Yasny dya...@redhat.com, engine-devel engine-devel@ovirt.org, 
 vdsm-devel
 vdsm-de...@lists.fedorahosted.org
 Sent: Wednesday, October 10, 2012 5:27:13 PM
 Subject: Re: [Engine-devel] Network related hooks in vdsm
 
 
 
 - Original Message -
  From: Simon Grinberg si...@redhat.com
  To: Igor Lvovsky ilvov...@redhat.com
  Cc: Dan Yasny dya...@redhat.com, vdsm-devel
  vdsm-de...@lists.fedorahosted.org, engine-devel
  engine-devel@ovirt.org
  Sent: Wednesday, October 10, 2012 5:03:52 PM
  Subject: Re: [Engine-devel] Network related hooks in vdsm
  
  
  
  - Original Message -
   From: Igor Lvovsky ilvov...@redhat.com
   To: vdsm-devel vdsm-de...@lists.fedorahosted.org,
   engine-devel engine-devel@ovirt.org
   Cc: Dan Yasny dya...@redhat.com
   Sent: Wednesday, October 10, 2012 4:47:28 PM
   Subject: [Engine-devel] Network related hooks in vdsm
   
   Hi everyone,
    As you know vdsm has a hooks mechanism and we already support
    dozens of hooks for different needs.
    Now it's the network's turn.
   We would like to get your comments regarding our proposition for
   network related hooks.
   
    In general we are planning to prepare a framework for future
    support of a bunch of network-related hooks.
    Some of them were already proposed by Itzik Brown [1] and Dan
    Yasny [2].
   
   Below you can find the additional hooks list that we propose:
   
   Note: In the first stage we can implement these hooks without any
   parameters, just to provide an entry point
for simple hooks.
   
   Networks manipulation:
   - before_add_network(conf={}, customProperty={})
   - after_add_network(conf={}, customProperty={})
   - before_del_network(conf={}, customProperty={})
   - after_del_network(conf={}, customProperty={})
   - before_edit_network(conf={}, customProperty={})
   - after_edit_network(conf={}, customProperty={})
   - TBD
  
  + VM networking related points in addition to before/after vm
  start/stop
  before_hotplug_nic(conf={}, customProperty={})
  *after_hotplug_nic(conf={}, customProperty={})
  before_hotunplug_nic(conf={}, customProperty={})
  *after_hotunplug_nic(conf={}, customProperty={})
  
  * Not sure about use case for those two
  
 
 Yep, part of them already proposed by Itzik (look at [1])

This is what happens when you miss the end of the email :)

But if I'm reading the comment correctly, it indeed doesn't implement two of the 
4 above. My '*' was in the wrong location; indeed the two suggested in the 
patch are the two I can find a use case for, while the other two I'm not sure 
of, but I think they should be implemented for completeness.  


 
  The above will require VM IDs and networks? Sorry that I did not
  look
  into the actual implementation of the above naming is more of a
  guess work, but I think the meaning is clear. There may be other VM
  networking related entry points I've missed.
  
   
   Bondings manipulations:
   - before_add_bond(conf={}, customProperty={})
   - after_add_bond(conf={}, customProperty={})
   - before_del_bond(conf={}, customProperty={})
   - after_del_bond(conf={}, customProperty={})
   - before_edit_bond(conf={}, customProperty={})
   - after_edit_bond(conf={}, customProperty={})
   - TBD
   
   General purpose:
   - before_persist_network
   - after_persist_network
   
   
   Now we just need to figure out the use cases.
   
   Your input more than welcome...
   
   [1] http://gerrit.ovirt.org/#/c/7224/   - Adding hooks support
   for
   NIC hotplug
   [2] http://gerrit.ovirt.org/#/c/7547/   - Hook: Cisco VM-FEX
   support
   
   
   Regards,
   Igor Lvovsky
   ___
   Engine-devel mailing list
   Engine-devel@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/engine-devel
   
  
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] allowing user taking vm from vmpool in user-api

2012-10-03 Thread Simon Grinberg


- Original Message -
 From: Michael Pasternak mpast...@redhat.com
 To: engine-devel engine-devel@ovirt.org, Miki Kenneth 
 mkenn...@redhat.com
 Cc: Michal Skrivanek mskri...@redhat.com
 Sent: Wednesday, October 3, 2012 6:13:39 PM
 Subject: [Engine-devel] allowing user taking vm from vmpool in user-api
 
 
 Apparently we have a gap in vmpool for the user-api; we need to make
 users able to allocate vm/s to themselves,
 
 it can be done via new action on vmpool,
 
 URI: /api/vmpools/xxx/allocate|rel=allocate
 
 thoughts?

What is the response, a VM ref?  
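
If it lands as proposed, the call would presumably follow the API's usual 
action pattern. A hedged sketch using the Python requests library, where the 
engine URL, credentials, and the shape of the response are all assumptions.

    # Hypothetical sketch: invoke the proposed allocate action on a pool.
    # oVirt actions are POSTed with an <action/> body; whether the reply
    # carries a VM reference is exactly the open question above.
    import requests

    resp = requests.post(
        "https://engine.example.com/api/vmpools/xxx/allocate",
        data="<action/>",
        headers={"Content-Type": "application/xml"},
        auth=("user@domain", "password"),
        verify=False,  # lab setup only
    )
    print(resp.status_code)
    print(resp.text)  # hopefully an <action> wrapping a <vm href=.../>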

 
 
 --
 
 Michael Pasternak
 RedHat, ENG-Virtualization RD
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] RFE: Network Main Tab

2012-10-02 Thread Simon Grinberg
Trying to sum up some of the suggestions ('coincidentally' dropping those which 
I think are a bit too much for a first implementation :)) and adding some 
suggestions of my own 

1. On the hosts subtab:
   1.1 Have a radio button that will show either 
   1.1.1 All the hosts that this network is attached to
   1.1.2 All the hosts where this network is attached to the cluster but not 
to the host (very important for non-required networks, where the host status 
does not indicate something is missing)
   1.2 Have a remove button for 1.1.1, a ManageNetworks button for 1.1.2. A 
simple add will not do since you don't know where to add. 

2. On the vms subtab 
   2.1 Have a radio button that will show either 
   2.1.1 All the vms that this network is attached to
   2.1.2 All the vms that this network is not attached to
   2.2 Have a 'remove' button for 2.1.1 and an 'add' button for 2.1.2
   2.3 Allow multiple selection for both actions of 2.2
   2.4 Add a 'remove from all' button. 
   2.5 I would not bother to show all the nics attached to the VM from the same 
network, it's too rare. Just make sure there is no exception if this does 
exist. So the columns should have 'nic' as the first, and there should be only 
one VM per line. If there is more than one nic per VM then just indicate 
'multiple' 

3. Templates subtab - same as VM, drop the expansion of the NICs list.

4. Clusters subtab
   Allow assigning to multiple clusters - same as the edit in the data center 
tab (just use the same dialogue)

5. Main: You have enough space, so why not add the MTU column?

6. Queries for the sub tabs: Each needs the reverse query as well (probably 
more important when adding a new network)  


Oops, I think I mostly added and dropped (almost) nothing :)   

Regards,
Simon
 

- Original Message -
 From: Avi Tal a...@redhat.com
 To: Yaniv Kaul yk...@redhat.com
 Cc: engine-devel@ovirt.org
 Sent: Monday, September 24, 2012 10:13:52 AM
 Subject: Re: [Engine-devel] RFE: Network Main Tab
 
 
 
 - Original Message -
  From: Yaniv Kaul yk...@redhat.com
  To: Moti Asayag masa...@redhat.com
  Cc: engine-devel@ovirt.org
  Sent: Sunday, September 23, 2012 6:16:47 PM
  Subject: Re: [Engine-devel] RFE: Network Main Tab
  
  On 09/23/2012 06:12 PM, Moti Asayag wrote:
   On 09/23/2012 05:01 PM, Yaniv Kaul wrote:
   On 09/23/2012 04:55 PM, Alona Kaplan wrote:
   - Original Message -
   From: Avi Tal a...@redhat.com
   To: Alona Kaplan alkap...@redhat.com
   Cc: engine-devel@ovirt.org
   Sent: Sunday, September 23, 2012 4:17:26 PM
   Subject: Re: [Engine-devel] RFE: Network Main Tab
  
  
  
   - Original Message -
   From: Alona Kaplan alkap...@redhat.com
   To: Avi Tal a...@redhat.com
   Cc: engine-devel@ovirt.org
   Sent: Sunday, September 23, 2012 1:31:32 PM
   Subject: Re: [Engine-devel] RFE: Network Main Tab
  
   1. What actions do you suggest to add?
   Add, Edit and remove actions of each component.
  
   Host:
   Add- What will add action on Networks-Hosts sub tab do? Choose
   an
   existing host and attach the network to one of its nics?
   How will it work? after choosing the host you will open
   setupNetworks
   window and just enable dragging of the selected network (in the
   main
   tab) to a nic? I think it is too much.
  
   Edit- same as add.
  
   Remove- What is the meaning of Remove host in network's
   context?
   The
   network will be detached from the host? I think it is
   confusing.
  
   Vm/Template:
   Add: What will it do? You will choose a vm and then add a vnic
   to
   the
   vm? Where will you see the vnic details?
   Edit: Same as add.
   Remove: You will remove all the vnics that use the selected
   network
   from the vm? Or do you mean to add a remove per vnic?
   For all the above: yes.
   We can certainly work on the small details, but having a main
   tab
   with
   little to no action whatsoever is kinda disappointing.
   IMO adding 'assign' action in the main tab might be handy in
   order
   to
   assign a single network in a single dialog to several clusters.
  
   However, it is not clear to me what Add/Edit for a VM in a sub-tab
   should look like. The VM should have a list of interfaces
   (represented as a sub collection/tree).
  
  I was thinking more of right-click on the network, selecting 'Add
  to
  VM'
  and then selecting single/multiple VMs from a dialog with all the
  VMs
  (yes, filtered via search).
  Y.
  
 
 +1
 
   What will be the meaning of 'Add' in that context? Since the VM
   already has a vNIC attached to that network. If adding another
   one, it should be enabled on the record representing the VM
   itself, which will confuse the user.
   Same goes for 'Edit' of a VM interface under a context of a
   specific
   network: in the current 'Edit VM Interface' you can change the
   network.
   Should the same dialog be used here as well?
  
   The 'Remove' option is the clearer action on this context.
  
  
   (example of 'small details' - 

Re: [Engine-devel] Adding VNC support

2012-07-26 Thread Simon Grinberg


- Original Message -
 From: snmis...@linux.vnet.ibm.com
 To: engine-devel@ovirt.org
 Sent: Thursday, July 26, 2012 5:36:43 PM
 Subject: [Engine-devel] Adding VNC support
 
 
 Hi,
 
 I am looking at adding VNC support in ovirt. What does the
 community think? Ideas, suggestions, comments?

If you can, I think it will be welcomed.

The problem (as I recall, and I may be wrong) is that there is no VNC xpi 
available, thus connection is not possible directly through the portal. If you 
want to use VNC it is possible today: you'll have to set the ticket/password 
via the API and then connect with a VNC viewer.

With that said, my personal opinion is that it's not necessary except for those 
who really like VNC. SPICE has an available XPI, and when you don't use the 
Spice drivers the default mode is on par with VNC. So why bother?


 
 Thanks
 Sharad Mishra
 IBM
 
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Is ovirt-engine-cli expected for 3.1?

2012-07-24 Thread Simon Grinberg


- Original Message -
 From: Steve Gordon sgor...@redhat.com
 To: engine-devel@ovirt.org
 Sent: Monday, July 23, 2012 6:30:56 PM
 Subject: [Engine-devel] Is ovirt-engine-cli expected for 3.1?
 
 Hi guys,
 
 On the release notes draft [1] one of the items is New Python CLI,
 packaged as ??? (CLI). The CLI page on the wiki [2] says the package
 is ovirt-engine-cli, but this package doesn't exist in the beta
 repo. Is the CLI actually being delivered?

Hopefully. 
You guys are already writing the book :)

 
 Thanks,
 
 Steve
 
 [1] http://wiki.ovirt.org/wiki/Release_Notes_Draft
 [2] http://wiki.ovirt.org/wiki/CLI
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Fwd: Problem in REST API handling/displaying of logical networks

2012-07-05 Thread Simon Grinberg


- Original Message -
 From: Michael Pasternak mpast...@redhat.com
 To: Livnat Peer lp...@redhat.com
 Cc: engine-devel engine-devel@ovirt.org, Simon Grinberg 
 si...@redhat.com
 Sent: Thursday, July 5, 2012 11:31:41 AM
 Subject: Re: [Engine-devel] Fwd: Problem in REST API handling/displaying of 
 logical networks
 
 On 07/05/2012 10:51 AM, Livnat Peer wrote:
  Actually the API has the same concept as you suggest for
  storage
   domains.
   At the top level you don't have a status field, but under
   data
   center level, where it's valid then you get the status
   property.
  
   Same should go for networks.
   The status property should be added only where it's valid,
   in
   this
   case the cluster level sub-collection
  
   so sounds like we want to declare these properties
   deprecated to be
   able
   to remove them in a future version?
  
   I guess so,
   The question is, are there other location where the status
   property
   (or any other property) exists at an irrelevant level. Unless
   we
   want to go into the effort of mapping them all now we probably
   need
   to define a concept and anything not complying to that is a
   bug that
   is allowed to be fixed without considering it as breaking the
   API.
  
   Thoughts?
  
   +1
   I agree that this is a bug and I DO suggest we  go into the
   effort of reviewing the other objects as well.
   This is too major to just fix this one, and wait until we bump
   into another one...
  Mike i see there a general consensus that this is a bug and the top
  level entity should be a DC network.
 
 i disagree that status should be completely removed, instead as bug
 fix it
 should contain different members: ATTACHED|UNATTACHED (same concept
 we using in
 /api/storagedomains/xxx)

With storage domains, attached/unattached is generally 1:1, so it may make 
sense in a way.
* Not sure it's going to stay that way in the future with shared read-only 
export domains.
* It's probably wrong even today with an ISO domain, in case the setup 
contains more than one DC.

With networks, it may be attached to a partial collection of clusters; de 
facto that will only say it is in use by at least one cluster.

So in both cases this is wrong. 

If you insist on maintaining this property, the only valid values that I can 
see ATM are INUSE vs UNUSED - this should be true both for storage domains and 
logical networks. 


 
  Can you please open a bug / update the existing bug to reflect
  that.
  
  Thanks, Livnat
  
  
  
 
 
 --
 
 Michael Pasternak
 RedHat, ENG-Virtualization R&D
 ___
 Engine-devel mailing list
 Engine-devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/engine-devel
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] 'deactivate' disk - what's the use case?

2012-06-17 Thread Simon Grinberg


- Original Message -
 From: Yaniv Kaul yk...@redhat.com
 To: Livnat Peer lp...@redhat.com
 Cc: engine-devel@ovirt.org
 Sent: Tuesday, June 12, 2012 2:14:00 PM
 Subject: Re: [Engine-devel] 'deactivate' disk - what's the use case?
 
 On 06/12/2012 12:47 PM, Livnat Peer wrote:
  On 12/06/12 12:40, Yaniv Kaul wrote:
  On 06/12/2012 12:34 PM, Itamar Heim wrote:
  On 06/12/2012 12:25 PM, Yaniv Kaul wrote:
  I'm wondering what's the usefulness of having dual action of
  attach +
  activate to get a disk properly attached and working in a VM
  (and the
  deactivate and detach counterparts).
 
  The only reason I can think of is that we've annoyed the user by
  this
  useless dual action when working with storage domains in a data
  center
  for ages, and we wish to remain consistent and annoy the user in
  the
  disks scenario as well, but there may be a reason I'm not aware
  of.
  deactivated is like having a disk offline, or hot-unplugging it when
  you still want to retain it in the context of the VM configuration
  I understand that, I just argue it's quite useless (offline can be
  done
  from within the guest OS),
  You can deactivate the disk if for some reason it blocks the guest
  from starting. I think that if the disk is not accessible the VM
  won't start, and then you can deactivate the disk and start the VM.
 
 You can also detach it to get the same effect with one less click of
 a
 button or an API call.

And then, if you did it manually for 20 VMs as a temporary measure, you'll 
have to re-attach it to the VMs. Will you remember to which VM you should 
attach each of your floating disks? You may now have to consult the event log.
(And in any case you'll need better search capabilities on disks than what 
you have now.)

There are use cases for having off-line disks; the issue is how not to annoy 
the user.
I think I've suggested in the past that by default:
Attach = Attach + set_Online
Detach = set_Offline + Detach
unless explicitly stated otherwise by the user.

Thus you do not annoy the user but still maintain the functionality.
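
For illustration, a hedged sketch of the proposed default in REST terms. The 
URL layout follows the existing API style; the 'active' element used as an 
opt-out flag is an assumption on my side:

POST /api/vms/{vm:id}/disks
<disk id="{disk:id}"/>
(attach; implies set_Online by default)

POST /api/vms/{vm:id}/disks
<disk id="{disk:id}">
  <active>false</active>  <!-- hypothetical opt-out: attach but keep offline -->
</disk>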


 Y.
 
 
 
  does not work that way in physical hardware
  (offline is a logical action within the OS), has very little value to
  the RHEV admin (unless he's paranoid and afraid that the disk will
  become floating and someone else would 'steal' it from his VM), and it
  is annoying to require multiple actions.
  Y.
 
  TIA,
  Y.
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] engine complained that it couldn't find the ovirtmgmt interface in my FC16 ovirt node

2012-06-05 Thread Simon Grinberg
If you go to the node and run

vdsClient -s 0 getVdsCapabilities

what is the response?
You are supposed to get the network topology as reported by vdsm.
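
For reference, the relevant part of the reply should look roughly like the 
following (key names as vdsm reports them, to the best of my memory; the 
values are illustrative only):

networks = {'ovirtmgmt': {'bridged': True, ...}}
bridges  = {'ovirtmgmt': {'ports': ['eth0'], ...}}
nics     = {'eth0': {...}, 'eth1': {...}}

If 'ovirtmgmt' is missing from the networks/bridges maps, the engine will 
flag the host as missing that network even if brctl shows the bridge.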


- Original Message -
 From: Shu Ming shum...@linux.vnet.ibm.com
 To: engine-devel@ovirt.org
 Sent: Tuesday, June 5, 2012 7:32:51 PM
 Subject: [Engine-devel] engine complained that it couldn't find the ovirtmgmt 
 interface in my FC16 ovirt node
 
 Hi,
 
 I found the following logs in my engine, and the engine set the node to
 a non-operational state. It is strange, because I can see the ovirtmgmt
 network interface was there on the oVirt node. Below are the logs and
 the steps I did on the oVirt node. It seems that the engine didn't get
 the right network interface information from the node. Can anyone show
 me some steps to debug this problem?
 
 
 In my engine log:
 
 2012-06-06 00:45:00,185 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
 (QuartzScheduler_Worker-68) [897fad0] START,
 SetVdsStatusVDSCommand(vdsId = 848bcff0-ae2a-11e1-bb49-5254001498c4,
 status=NonOperational, nonOperationalReason=NETWORK_UNREACHABLE), log
 id: 53d27ec8
 2012-06-06 00:45:00,188 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
 (QuartzScheduler_Worker-68) [897fad0] FINISH, SetVdsStatusVDSCommand,
 log id: 53d27ec8
 2012-06-06 00:45:00,190 INFO
 [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
 (QuartzScheduler_Worker-68) [897fad0] Host ovirt-node1 is set to
 Non-Operational, it is missing the following networks: ovirtmgmt,
 2012-06-06 00:45:00,198 INFO
 [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
 (QuartzScheduler_Worker-68) [60291e0d] Running command:
 HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities
 affected :  ID: 848bcff0-ae2a-11e1-bb49-5254001498c4 Type: VDS
 
 However, in my ovirt node, the ovirtmgmt interface was there.
 
 [root@ovirt-node1 ~]# brctl show
 bridge name bridge id   STP enabled interfaces
 ovirtmgmt   8000.5cf3fce432a0   no  eth0
 [root@ovirt-node1 ~]#
 
 [root@ovirt-node1 ~]# ifconfig -a|more
 bond0 Link encap:Ethernet  HWaddr 00:00:00:00:00:00
BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
 bond1 Link encap:Ethernet  HWaddr 00:00:00:00:00:00
BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
 bond2 Link encap:Ethernet  HWaddr 00:00:00:00:00:00
BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
 bond3 Link encap:Ethernet  HWaddr 00:00:00:00:00:00
BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
 bond4 Link encap:Ethernet  HWaddr 00:00:00:00:00:00
BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
 eth0  Link encap:Ethernet  HWaddr 5C:F3:FC:E4:32:A0
inet6 addr: fe80::5ef3:fcff:fee4:32a0/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
RX packets:315 errors:0 dropped:0 overruns:0 frame:0
TX packets:143 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:30785 (30.0 KiB)  TX bytes:38501 (37.5 KiB)
Interrupt:28 Memory:9200-92012800
 
 eth1  Link encap:Ethernet  HWaddr 5C:F3:FC:E4:32:A2
BROADCAST MULTICAST  MTU:1500  Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
Interrupt:40 Memory:9400-94012800
 
 loLink encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:13156 errors:0 dropped:0 overruns:0 frame:0
TX packets:13156 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0

Re: [Engine-devel] custom properties sheet feature page

2012-05-23 Thread Simon Grinberg


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Einav Cohen eco...@redhat.com
 Cc: Andrew Cathrow acath...@redhat.com, Eldan Hildesheim 
 i...@eldanet.com, engine-devel@ovirt.org, Simon
 Grinberg sgrin...@redhat.com, Eldan Hildesheim ehild...@redhat.com
 Sent: Friday, May 18, 2012 1:50:17 AM
 Subject: Re: [Engine-devel] custom properties sheet feature page
 
 On 05/17/2012 04:08 PM, Einav Cohen wrote:
 ...
  Hi,
 
  Please review/comment on the Custom Properties Sheet feature
  page:
  http://www.ovirt.org/wiki/Features/CustomPropertiesSheet
 
 
  It looks great.
  Are all the keys going to be exposed in the dropdown, or will we
  have
  private keys that the user has to know about?
 
  All keys will be exposed; not sure what you mean by private, but
  all keys are treated the same today.
  If we want some kind of differentiation between the keys, it is
  another feature...
  [I could, of course, be missing something, please clarify if I did]
 
 
 in the future, we may want to give permissions to which users are
 allowed to use which custom properties.
 not relevant for now.

Will this cool-looking design also be available from the user portal?
If so, how do you prevent any user that just has permission to edit VMs from 
doing damage? With custom properties you can do almost anything.

Consider the case where there is a hook that allows one to directly attach a 
LUN, add a tap device, or other, more destructive options. It is intended for 
use by the sysadmins, but any user can use it.

You could say, correctly, that this was always the case. But with the old 
textbox interface the user would need to know that the option exists. Now we 
actually tell him what he can use.

Cool for the webadmin / kind'a dangerous from the user portal, until you get 
a per-user permissions feature for it.

The minimum that is needed is an option to disable this properties tab in the 
user portal.
Better to have MLA for using properties at all, if per-property control can't 
be accommodated.

___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


[Engine-devel] CPU topology in the API

2012-04-30 Thread Simon Grinberg
Hi list,

The current status is that, though they look the same, the CPU topology for 
hosts and the one for VMs differ.

In both you have
<topology cores="N" sockets="Y"/>

for hosts: N = total cores on the host, Y = number of sockets
for VMs: N = cores per socket, Y = number of sockets

This means that for a host that has 4 sockets with 8 cores per socket the 
topology will be presented as:
<topology cores="32" sockets="4"/>
while a VM with the same requested topology will show:
<topology cores="8" sockets="4"/>
(i.e. for the host, cores is already the total; for the VM, the total is 
cores x sockets = 8 x 4 = 32).

I think they should not be different, to avoid confusion, but:

* The information we display for the host can't count on the fact that cores 
are distributed evenly across sockets, because theoretically a host could 
contain several different CPUs, so it can't be displayed as a multiplication.
* On the other hand, changing the VM topology will break the API, though it 
would make it aligned both with hosts and with the 'New VM' dialogue in the 
UI.

For oVirt 3.x it may be that nothing can be changed; however, a future major 
version is in theory allowed to break the API (a bit :)), so the options as 
I see them are:

1. Don't touch it; leave the confusion.
2. Make the host align to VMs, with the risk that on some hosts this may be 
a bit misleading - should be rare.
3. Make the host topology look like the VM's, but allow multiple CPU 
topologies in the CPUs sub-collection of the host.
   (This also requires a change in the VDSM API)
4. Align VMs to hosts.

I would go for 4 or 2.
The current CPU topology for hosts is a new commit, thus it may be acceptable 
to change it now since no one is using it yet. This works in favour of 2. In 
any case, only option 3 discloses all the information in all possible cases.
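
For illustration, a hedged sketch of what option 3 could look like for a 
host mixing CPU models (the element layout below is purely an assumption, 
not the current schema):

<host>
  <cpu>
    <topology sockets="2"/>
    <cpus>
      <cpu socket="0"><topology cores="8"/></cpu>
      <cpu socket="1"><topology cores="6"/></cpu>
    </cpus>
  </cpu>
</host>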


Thoughts?


Thanks, 
Simon.
 
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel


Re: [Engine-devel] Extend host_nics action with UpdateNetworks

2012-03-27 Thread Simon Grinberg

- Original Message - 

 From: avi tal a...@redhat.com
 To: engine-devel@ovirt.org
 Cc: Oded Ramraz oram...@redhat.com, Roy Golan
 rgo...@redhat.com, michael pasternak mpast...@redhat.com,
 Simon Grinberg si...@redhat.com
 Sent: Tuesday, March 27, 2012 2:11:27 PM
 Subject: Extend host_nics action with UpdateNetworks

 Hi,
 I am checking the SetupNetwork feature in 3.1 (si01) and I think we
 are missing a very important implementation.
 SetupNetworks in the vdsm layer is able to receive a nics collection
 which describes which nics will be updated, and it handles only those
 nics.
 This implementation is missing from engine-backend, because the backend
 automatically treats missing nics as nics we would like to remove.
 A very buggy scenario would be a missing mgmt network.

 The idea is to send (via the REST API) a collection which contains only
 the interfaces (nics) we would like to update.

 This is actually an UPDATE-collection procedure. It could be added as
 a different action:
 http://<engine ip:port>/hosts/<id>/nics/updatenetworks

 I believe it is a part of the SetupNetwork feature that needs to be
 in 3.1.

+1

While for the web UI it makes sense to do it in a read/modify/write fashion, 
meaning that the setup-network dialogue shows the complete picture, lets the 
user modify it, and then writes it back, it does not make sense to force the 
same on an API user.

Consider a user that just wants to remove/add two logical networks on bond X. 
Does he really have to read it all, then add the networks, then send it all 
back? It makes sense to explicitly say: update, add only those two on bond X, 
and do not touch the rest.

This means, though, that the updateNetworks command has to explicitly say 
remove/add/update per network (see the sketch below).
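
For illustration, a hedged sketch of what such a partial update could look 
like on the wire. The action name follows Avi's suggestion; the per-network 
'operation' attribute is purely an assumption on my side:

POST /api/hosts/{host:id}/nics/updatenetworks
<action>
  <host_nic id="{bond:id}">
    <network name="blue" operation="add"/>      <!-- hypothetical -->
    <network name="red" operation="remove"/>    <!-- hypothetical -->
  </host_nic>
</action>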

 Thanks
___
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel