Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-26 Thread Moti Asayag
Regarding the UI mockup, I'd suggest having a checkbox next to the mac ranges:
when the data center has no range of its own (meaning the global pool is in use), the
checkbox is unchecked and the text box shows the global ranges, disabled.

In order to specify a specific range, the user will have to check that checkbox
and modify the range (same behaviour as in the edit VM interface dialog).

I'd also recommend a tooltip with an example for the user (maybe shown when
hovering over the question mark icon).

- Original Message -
> From: "Martin Mucha" 
> To: "Sven Kieske" 
> Cc: de...@ovirt.org, users@ovirt.org
> Sent: Tuesday, April 22, 2014 11:04:31 AM
> Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC
> 
> Hi,
> 
> I like answering questions. The presence of questions in a "motivated environment"
> means that there is a flaw in the documentation/study material, which needs to be
> fixed :)
> 
> To answer your question:
> You get the pool you want to use -- either the global one (explicitly, using the method
> org.ovirt.engine.core.bll.network.macPoolManager.ScopedMacPoolManager#defaultScope())
> or one related to some scope, which you identify somehow -- like in the previous
> mail: "give me the pool for this data center". When you have this pool, you can
> allocate *some* new mac (the system decides which one it will be) or you can
> allocate an *explicit* one, using a MAC address you've specified. I think the
> latter is what you meant by "assigning by hand". There is just a
> performance difference between these two allocations. Once the pool that
> has to be used is identified, everything which comes after happens on
> *this* pool.
> 
> Example (I'm using naming from the code here; storagePool is the db table for a data
> center):
> ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac("00:1a:4a:15:c0:fe");
> 
> Let's discuss the parts of this command:
> 
> ScopedMacPoolManager.scopeFor()
>     // means "I want a scope ..."
> ScopedMacPoolManager.scopeFor().storagePool(storagePoolId)
>     // ... which is related to a storagePool and identified by storagePoolId
> ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool()
>     // ... and I want the existing pool for this scope
> ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac("00:1a:4a:15:c0:fe")
>     // ... and I want to add this MAC address to it.
> 
> So in short, whatever you do with a pool you've obtained happens on this pool
> only. You do not have code-level control over which pool you get: if the system is
> configured to use a single pool only, then a request for a datacenter-related pool
> still returns that sole one. But once you have that pool, everything happens
> on this pool, and, unless the datacenter configuration is altered, the same request
> for the pool in the future should return the same pool.
> 
> Now a small spoiler (it's not merged to the production branch yet) -- the performance
> difference between allocating a user-provided MAC and a MAC from the mac pool range:
> You should avoid allocating a MAC which is outside the ranges of the
> configured mac pool (either the global or a scoped one). It's perfectly OK to
> allocate a specific MAC address from inside these ranges; it's actually a little
> bit more efficient than letting the system pick one for you. But if you use one
> from outside those ranges, your allocated MAC ends up in less memory-efficient
> storage (approx. 100 times less efficient). So if you want to use
> user-specified MACs, you can, but tell the system which range those MACs
> will come from (via the mac pool configuration).
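> 
> To make the difference concrete, here is a minimal sketch (the MacPool type name
> and the allocateNewMac() call are just my shorthand for "the system picks a MAC";
> only scopeFor()/storagePool()/getPool()/addMac() are the names shown above):
> 
> MacPool pool = ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool();
> String mac = pool.allocateNewMac();   // system picks a free MAC from the configured ranges
> pool.addMac("00:1a:4a:15:c0:fe");     // explicit MAC inside the ranges -- fine, even slightly cheaper
> pool.addMac("02:00:00:00:00:01");     // explicit MAC outside the ranges -- allowed, but stored
>                                       // roughly 100x less memory-efficiently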
> 
> M.
> 
> - Original Message -
> From: "Sven Kieske" 
> To: "Martin Mucha" , "Itamar Heim" 
> Cc: users@ovirt.org, de...@ovirt.org
> Sent: Tuesday, April 22, 2014 8:31:31 AM
> Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC
> 
> Hi,
> 
> thanks for the very detailed answers.
> 
> So here is another question:
> 
> How are MACs handled which got assigned "by hand"?
> Do they also get registered with the global or with
> the datacenter pool?
> Are they tracked at all?
> I'm currently assigning macs via the API directly
> to the vms and do not let ovirt decide itself
> which mac goes where.
> 
> Am 18.04.2014 12:17, schrieb Martin Mucha:
> > Hi,
> > 
> > I'll try to describe it a little bit more. Let's say that we've got one data
> > center. It's not configured yet to have its own mac pool, so in the system there is
> > only one, global pool. We create a few VMs and their NICs will obtain their MACs
> > from this global pool, marking them as used. Next we alter the data center
> > definition, so now it uses its own mac pool. From this point on two mac pools
> > exist in the system, one global and one related to this data center, but
> > those allocated MACs are still allocated in the global pool, since new data
> > center creation does not (yet) contain logic to get all assigned MACs
> > related to this data center and reassign them in the new pool. However, after
> > an app restart all VmNics are read from the db and placed into the appropriate pools.
> > Lets assume, that we've perfo

Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-26 Thread Moti Asayag


- Original Message -
> From: "Martin Mucha" 
> To: "Itamar Heim" 
> Cc: users@ovirt.org, de...@ovirt.org
> Sent: Thursday, April 24, 2014 12:58:37 PM
> Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC
> 
> >no. you don't change mac addresses on the fly.
> ok, I was just asking if that's an option. No reallocating.
> 
> >i don't see why you need to keep it in memory at all?
> What I did is not a rewrite, but an alteration of existing code -- I just added
> one layer above the existing pool implementation. I'm not sure about that; the
> code existed before I started working on it. One explanation could be that if
> duplicates are not allowed in the config, we want to check user input and detect
> when the user tries to add the same mac address twice. Yes, *this* can be done using a
> simple db query. I'll check that out; I'm not sufficiently aware of the context
> to be able to say with confidence "can be removed"/"must stay".

As Itamar stated, if a custom mac address was allocated out of range, once that
mac address is released (by removing the vm, deleting its vnic or by changing it
to another mac address), we don't need to preserve it anywhere in the system.
Therefore it does not require any memory/management consideration.

While in the previous implementation (before this feature) we could reach that
situation only by providing a custom mac address, with the new feature such a
situation may also occur by modifying an existing range at the data-center level.

For example, a user defines a data-center mac range of 00:00-00:20 and allocates
a mac address of 00:15 (from the range) to a vm.
Next the user reduces the range to 00:00-00:10 and then removes that vm.
MAC 00:15 is no longer in use, and it has no meaning any more from the
data-center mac scope point of view.
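
In code terms, the sequence above is roughly (a sketch only, reusing the method
names from Martin's earlier mail; the MacPool type name is illustrative and the
MAC strings are abbreviated just like in the example):

MacPool dcPool = ScopedMacPoolManager.scopeFor().storagePool(dataCenterId).getPool();
dcPool.addMac("00:15");    // allocated while the range was still 00:00-00:20
// ... the range is then reduced to 00:00-00:10 in the data-center dialog ...
dcPool.freeMac("00:15");   // the vm is removed; the now out-of-range MAC is simply
                           // released and nothing about it needs to be preserved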

> 
>   
> currently it works like this: you identify the pool you want and get one (based
> on the system config). You release (free) a mac from this pool without any care about
> what type of mac it is. The method returns 'true' if it was released (== the count
> of its usages reached zero or it was not used at all). I think it does what
> you want, maybe with a little less client code involvement. If client code
> provided a wrong pool identification or released an unused mac, then it's a
> coding error and all we can do is log it.
> 
> >remember, you have to check the released mac address for the specific
> >associated mac_pool, since we do (read: should[1]) allow overlapping mac
> >addresses (hence ranges) in different mac_pool.
> 
> there's no "free user-specified mac address" method. There's only the "freeMac"
> method. So the flow is like this: you identify the pool somehow -- by the nic for
> which you're releasing the mac, by datacenter id, you name it. Then you release
> the mac using the freeMac method. If it was used, it'll be released; if it was used
> multiple times, the usage count is decreased. I do not see how overlapping
> with other pools is related to that. You identified a pool, freed a mac from it,
> and the other pools remain intact.
> 

When the global pool is the only one in use, there was no option to add the
same mac address twice (blocked by AddVmInterface.canDoAction()).
This doesn't look to be the case with the new implementation, where each
data-center scope has its own mac storage. So this changes the previous behavior.
Suppose a couple of data-centers share the same physical network - it may lead to
issues where a couple of vms on the same network have the same mac.

> ---
> about the cases you mentioned:
> I'll check whether those mac addresses which were custom ones and which, after the
> ranges alteration, lie within the ranges of the mac pool, get marked as used
> in that pool. It should be true, but I'd rather write a test for it.
> 
> M.
> 
> - Original Message -
> From: "Itamar Heim" 
> To: "Martin Mucha" 
> Cc: users@ovirt.org, de...@ovirt.org
> Sent: Wednesday, April 23, 2014 10:32:33 PM
> Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC
> 
> On 04/23/2014 11:12 AM, Martin Mucha wrote:
> > Hi,
> >
> > I was describing the current state, the first iteration. The need for a restart is
> > something which should not exist; I've removed that necessity in the meantime.
> > Altered flow: you allocate a mac address for a nic in a data center without its own
> > pool, and it gets registered in the global pool. Then you modify the settings of that
> > data center so that a new pool is created for it. All NICs for that data
> > center are queried from the DB, their macs released from the global pool and added
> > to the data center scoped pool. And the other way around: when you delete this
> > scoped pool, all its content will be moved to the global pool. The feature page is
> > updated.
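> >
> > Roughly, the re-registration when a data center gets its own pool looks like this
> > (a sketch only -- getPool()/addMac()/freeMac() are from the earlier mails; the
> > defaultScope() chaining, the VmNic query and getMacAddress() are stand-ins for the
> > real engine/DAO calls):
> >
> > MacPool globalPool = ScopedMacPoolManager.defaultScope().getPool();
> > MacPool dcPool = ScopedMacPoolManager.scopeFor().storagePool(dataCenterId).getPool();
> > for (VmNic nic : vmNicDao.getAllForDataCenter(dataCenterId)) {   // stand-in for the real query
> >     globalPool.freeMac(nic.getMacAddress());
> >     dcPool.addMac(nic.getMacAddress());
> > }
> > // deleting the scoped pool does the same walk the other way, back into the global pool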
> >
> > Note: *previously* a MAC was placed in the wrong pool only after a
> > modification of an existing data center caused an entirely new pool to be
> > created (there wasn't a pool for this scope; after the modification there is).
> > All other operations were fine. Now all manipulation with scoped pools
> > should be ok.
> >
> > Note2: all that scoped pool handling i

Re: [ovirt-users] Network Security / Seperation

2014-04-26 Thread Moti Asayag


- Original Message -
> From: "squadra" 
> To: users@ovirt.org
> Sent: Thursday, April 24, 2014 10:08:55 AM
> Subject: [ovirt-users] Network Security / Seperation
> 
> Hi Folks,
> 
> I am currently looking for a way to isolate each vm's network traffic
> so that none can sniff another's network traffic. Currently I am playing
> around with the neutron integration, which gives me more question
> marks than answers for now (even the documentation seems to be incomplete
> / outdated).
> 
> Is there any other solution, which does not require creating a new
> vlan for each vm, to make sure that no one can sniff others' traffic?
> 

Could you explain why the basic functionality provided by ovirt and vdsm
doesn't meet your needs? You can define vlans within ovirt, regardless of the
ovirt-neutron integration.

> Cheers,
> 
> Juergen
> 
> --
> Sent from the Delta quadrant using Borg technology!
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using Virtio-SCSI passthough on SCSI - Devices

2014-04-26 Thread Itamar Heim

On 04/25/2014 08:01 PM, Daniel Helgenberger wrote:


On Fr, 2014-04-25 at 17:19 +0200, Jiri Belka wrote:

On Fri, 25 Apr 2014 13:40:09 +
Daniel Helgenberger  wrote:


Hello,

does anyone have an idea on how to accomplish this? In my particular
setup, I need an FC tape drive passed through to the vm.
Note, passing through FC LUNs works flawlessly.

If I understood Virtio-SCSI correctly, this should be possible on
libvirt's part.


I may be wrong, but my understanding is that dm-mpio works on the block
layer, thus it does not support multipath for tapes/cd-devices.

But I could be wrong; I got this info from an OpenBSD paper comparing
SCSI multipath implementations.

j.

No, I think so too - it does not support tape drives as such (but I could
set up a tape library as a LUN to use it with LTFS, for instance).

The point is a different one: VirtIO-SCSI should be able to pass through
any scsi device, like tape drives and enclosures. I know this works in
Proxmox: http://pve.proxmox.com/wiki/Tape_Drives

Does it work in oVirt, too? In the GUI I only see block LUNs from DM.


I suggest you try making it work via libvirt first; then we can compare it
to the xml passed to the guest, work around with a custom hook, etc.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users