Re: [ovirt-users] MAC Address is already in use; occurs several times before NIC/MAC is accepted

2017-04-06 Thread Martin Mucha
Hi,

you're the second person (if I'm counting correctly) able to reproduce
this. So far we do not know what is causing it, and none of the tests we have
made so far can reproduce it. The previous unlucky user's problem was that he
had fewer available MACs in the MAC pool than actual NICs in the system. That
does not answer the question of how he reached that situation, but he was
able to greatly reduce the issue by making the MAC pool bigger.

You said that you added an extra pool. Which pool are you referring to? A MAC
pool? A VM pool? Please describe your environment and, ideally, provide us with
logs. How many VMs do you have in each VM pool? How many vm_interfaces do
you have in your system in total? How many MAC pools do you have in your
system, and what is their configuration?

thanks,
Martin.

On Wed, Apr 5, 2017 at 10:34 PM, Matt .  wrote:

> Hi Guys,
>
> Since the upgrade to 4.1 I have the issue that for every NIC I add I get,
> several times:
>
> MAC Address xx:xx:xx:xx:xx is already in use.
>
> I have added an extra pool and this issue still exists.
>
> Anyone have a clue why this happens?
>
> Thanks,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] Ovirt + OpenVSwitch

2016-04-06 Thread Martin Mucha
Hi,

I think OpenVSwitch should be supported in 4.0.

M.

- Original Message -
> Has anybody succeeded in installing Ovirt 3.6 with hosted engine on a
> server which uses OpenVSwitch for the network config?
> 
> I believe my issue is that oVirt wants to control the network to create
> a bridge for its management, and I want it to just use whatever network
> is available on the host without trying to be clever about it. I was
> able to tweak it to get to the final stage, where it fails waiting for
> the engine to start.
> 
> Best regards
> Sverker
> 


Re: [ovirt-users] OVirt 3.6.2 -> NPE while clicking on "Setup Host Networks"

2016-01-28 Thread Martin Mucha
I think this issue is real. It seems to be caused by unboxing of a null-valued
Integer obtained from
org.ovirt.engine.core.common.businessentities.HostDevice#getTotalVirtualFunctions
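In isolation, that unboxing failure looks like this. The class below is a sketch for illustration only; its getter merely stands in for HostDevice#getTotalVirtualFunctions, which may legitimately return null:

```java
public class UnboxingNpe {
    // Stand-in for HostDevice#getTotalVirtualFunctions, which may
    // legitimately return null (e.g. a device without SR-IOV data).
    static Integer getTotalVirtualFunctions() {
        return null;
    }

    // Buggy pattern: assigning the Integer to a primitive int
    // auto-unboxes, throwing NullPointerException when it is null.
    static int maxVfsUnsafe() {
        int total = getTotalVirtualFunctions(); // NPE here if null
        return total;
    }

    // Defensive pattern: decide explicitly what null means (here: 0 VFs).
    static int maxVfsSafe() {
        Integer total = getTotalVirtualFunctions();
        return total == null ? 0 : total;
    }

    public static void main(String[] args) {
        System.out.println(maxVfsSafe()); // prints 0
        try {
            maxVfsUnsafe();
        } catch (NullPointerException e) {
            System.out.println("unboxing null threw NPE");
        }
    }
}
```

The fix hinted at in the thread is exactly the second pattern: agree on what a null value means and check for it before unboxing.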

M.

- Original Message -
> Hi List,
> 
> I updated my oVirt 3.6.1 installation to 3.6.2 this morning.
> After the reboot of the server the interface numbering wasn't the same,
> so I had to change some cables to have them connected in the right
> configuration.
> 
> Everything looks OK but I receive a null pointer exception while
> clicking on "Setup Host Networks" of my engine/host server.
> 
> Any help is more than welcome!!
> 
> Best regards
> Christoph
> 
> 2016-01-27 19:04:47,738 ERROR
> [org.ovirt.engine.core.bll.network.host.GetAllVfsConfigByHostIdQuery]
> (default task-482) [] Exception: java.lang.NullPointerException
>  at
> org.ovirt.engine.core.bll.network.host.NetworkDeviceHelperImpl.getMaxNumOfVfs(NetworkDeviceHelperImpl.java:172)
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.network.host.NetworkDeviceHelperImpl.updateVfsConfigWithNumOfVfsData(NetworkDeviceHelperImpl.java:136)
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.network.host.NetworkDeviceHelperImpl.getHostNicVfsConfigsWithNumVfsDataByHostId(NetworkDeviceHelperImpl.java:121)
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.network.host.GetAllVfsConfigByHostIdQuery.executeQueryCommand(GetAllVfsConfigByHostIdQuery.java:19)
> [bll.jar:]
>  at
> org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:82)
> [bll.jar:]
>  at
> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
> [dal.jar:]
>  at
> org.ovirt.engine.core.bll.Backend.runQueryImpl(Backend.java:537) [bll.jar:]
>  at org.ovirt.engine.core.bll.Backend.runQuery(Backend.java:511)
> [bll.jar:]
>  at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source)
> [:1.8.0_71]
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_71]
>  at java.lang.reflect.Method.invoke(Method.java:497)
> [rt.jar:1.8.0_71]
>  at
> org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
>  at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
>  at
> org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
>  at
> org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
>  at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
>  at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
>  at
> org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70)
> [wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
>  at
> org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80)
> [wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
>  at
> org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
> [wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
>  at
> org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
>  at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
>  at
> org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
>  at
> org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13)
> [bll.jar:]
>  at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source)
> [:1.8.0_71]
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [rt.jar:1.8.0_71]
>  at java.lang.reflect.Method.invoke(Method.java:497)
> [rt.jar:1.8.0_71]
>  at
> org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89)
>  at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
>  at
> org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
>  at
> org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
>  at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
>  at
> org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
> [wildfly-ejb3-8.2.1.Final.jar:8.2.1.Final]
>  at
> org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
> 

Re: [ovirt-users] OVirt 3.6.2 -> NPE while clicking on "Setup Host Networks"

2016-01-28 Thread Martin Mucha
ok, thanks. Just to inform you — so far it seems there is an inconsistency in
the assumptions about what can be nullable and what cannot. We just need to
decide which approach is correct and fix the rest.
M.

- Original Message -
> Hi Martin,
> 
> please let me know I can help.
> 
> Best regards
> Christoph
> 
> Am 28.01.2016 um 09:04 schrieb Martin Mucha:
> > I think this issue is real. It seems to be caused by unboxing of
> > a null-valued Integer obtained from
> > org.ovirt.engine.core.common.businessentities.HostDevice#getTotalVirtualFunctions
> >
> > M.
> >
> > - Original Message -
> >> Hi List,
> >>
> >> I updated my oVirt 3.6.1 installation to 3.6.2 this morning.
> >> After the reboot of the server the interface numbering wasn't the same,
> >> so I had to change some cables to have them connected in the right
> >> configuration.
> >>
> >> Everything looks OK but I receive a null pointer exception while
> >> clicking on "Setup Host Networks" of my engine/host server.
> >>
> >> Any help is more than welcome!!
> >>
> >> Best regards
> >> Christoph
> >>
> >> 2016-01-27 19:04:47,738 ERROR
> >> [org.ovirt.engine.core.bll.network.host.GetAllVfsConfigByHostIdQuery]
> >> (default task-482) [] Exception: java.lang.NullPointerException

Re: [ovirt-users] MAC address recycling

2015-10-01 Thread Martin Mucha


- Original Message -
> 
> 
> On 27.09.2015 12:25, Martin Mucha wrote:
> > Hi,
> > 
> > danken, I do not remember seeing such a bug.
> > 
> > In 3.5 and 3.6 there were some changes in the MAC pool implementation and
> > its usage in the system, but the order in which MACs are assigned remained
> > unchanged. Yes, if you request a MAC from the pool, return it, and request
> > it again, you will always end up with the same MAC.
> > 
> > When looking for an available MAC in the pool, we iterate through the
> > available ranges, selecting the first one with an available MAC:
> > org.ovirt.engine.core.bll.network.macpoolmanager.MacsStorage#getRangeWithAvailableMac
> > 
> > and select the 'leftmost' available MAC from it:
> > org.ovirt.engine.core.bll.network.macpoolmanager.Range#findUnusedMac
> 
> Thanks clearing up this behaviour!
> 
> Should I open an RFE?

Yes, please. I'm currently on PTO, and this has to be planned anyway (and
that's not done by me). Please also specify there whether the solution I
described in my last mail would be fine for you, or whether you actually need
some delaying. I believe new methods for replacing would be the ideal solution,
but I don't know all your constraints, and I did not look sufficiently
thoroughly into the code to be sure how easy or problematic it is.

> 
> > 
> > I understand your problem. We can either a) impose some delay before
> > returning MACs to the pool, or b) randomize acquiring MACs, but we should
> > also specify how the system should behave when there are not enough MACs
> > in the system. When there is a small number of MACs left, a) will block
> > other requests for a MAC while there's no need to do so, and b) will
> > return the same MAC anyway if there's just one left. And even worse, with
> > a low number of available MACs (for example 5) and randomized selection
> > it may work/fail unpredictably.
> > 
> > Maybe more proper would be creating a new method on the MAC pool for 'MAC
> > renew/replace' — that's the actual use case you need; not
> > delaying/randomizing. You need a different MAC. A method like: "I want
> > some MAC address(es), but not this one(s)." Returned MAC addresses would
> > be immediately available for others, and the search for another MAC can
> > sensibly fail ("this MAC address cannot be replaced; there isn't another one")
> > 
> > M.
> > 
> > - Original Message -
> >> On Thu, Sep 24, 2015 at 01:39:42PM +, Daniel Helgenberger wrote:
> >>> Hello,
> >>>
> >>> I recently experienced an issue with mac address uses in ovirt with
> >>> foreman[1].
> >>>
> >>> Bottom line, a mac address was recycled causing an issue where I could
> >>> not
> >>> rebuild a host because of a stale DHCP reservation record.
> >>>
> >>> What is the current behavior regarding the reuse of MAC addresses for new
> >>> VMs?
> >>> Can I somehow delay the recycling of a MAC?
> >>
> >> Martin, I recall we had a bug asking not to immediately re-use a
> >> recently-released MAC address. Was this possible?
> >>
> > 
> 
> --
> Daniel Helgenberger
> m box bewegtbild GmbH
> 
> P: +49/30/2408781-22
> F: +49/30/2408781-10
> 
> ACKERSTR. 19
> D-10115 BERLIN
> 
> 
> www.m-box.de  www.monkeymen.tv
> 
> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
> Handeslregister: Amtsgericht Charlottenburg / HRB 112767
>


Re: [ovirt-users] MAC address recycling

2015-09-27 Thread Martin Mucha
Hi,

danken, I do not remember seeing such a bug.

In 3.5 and 3.6 there were some changes in the MAC pool implementation and its
usage in the system, but the order in which MACs are assigned remained
unchanged. Yes, if you request a MAC from the pool, return it, and request it
again, you will always end up with the same MAC.

When looking for an available MAC in the pool, we iterate through the
available ranges, selecting the first one with an available MAC:
org.ovirt.engine.core.bll.network.macpoolmanager.MacsStorage#getRangeWithAvailableMac

and select the 'leftmost' available MAC from it:
org.ovirt.engine.core.bll.network.macpoolmanager.Range#findUnusedMac
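That "leftmost first" behaviour can be modelled with a bitmap per range. The class below is an illustrative sketch, not the engine implementation; its names and the long-based MAC representation are invented for the example:

```java
import java.util.BitSet;

public class LeftmostMacPool {
    private final long rangeStart;   // first MAC of the range, as a long
    private final int size;          // number of MACs in the range
    private final BitSet used = new BitSet();

    public LeftmostMacPool(long rangeStart, int size) {
        this.rangeStart = rangeStart;
        this.size = size;
    }

    // Mirrors the described Range#findUnusedMac behaviour:
    // always pick the leftmost free MAC in the range.
    public long allocate() {
        int idx = used.nextClearBit(0);
        if (idx >= size) {
            throw new IllegalStateException("Insufficient amount of free MACs.");
        }
        used.set(idx);
        return rangeStart + idx;
    }

    public void release(long mac) {
        used.clear((int) (mac - rangeStart));
    }

    public static void main(String[] args) {
        LeftmostMacPool pool = new LeftmostMacPool(0x001A4A000000L, 4);
        long first = pool.allocate();
        pool.release(first);
        // Request, return, request again: the same MAC comes back.
        System.out.println(first == pool.allocate()); // prints true
    }
}
```

This makes the behaviour discussed in the thread concrete: releasing a MAC clears its bit, and the next allocation finds that same leftmost bit again, so a released MAC is immediately reused.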

I understand your problem. We can either a) impose some delay before returning
MACs to the pool, or b) randomize acquiring MACs, but we should also specify
how the system should behave when there are not enough MACs in the system.
When there is a small number of MACs left, a) will block other requests for a
MAC while there's no need to do so, and b) will return the same MAC anyway if
there's just one left. And even worse, with a low number of available MACs
(for example 5) and randomized selection it may work/fail unpredictably.

Maybe more proper would be creating a new method on the MAC pool for 'MAC
renew/replace' — that's the actual use case you need; not delaying/randomizing.
You need a different MAC. A method like: "I want some MAC address(es), but not
this one(s)." Returned MAC addresses would be immediately available for others,
and the search for another MAC can sensibly fail ("this MAC address cannot be
replaced; there isn't another one")
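Such a replace operation might look roughly like the sketch below. The class, method name, and semantics are a hypothetical rendering of the proposal, not an existing oVirt API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;

public class ReplacingMacPool {
    private final Deque<String> free = new ArrayDeque<>();

    public ReplacingMacPool(String... freeMacs) {
        for (String mac : freeMacs) {
            free.add(mac);
        }
    }

    // Hypothetical "renew/replace": hand back a MAC different from the
    // unwanted one(s); the unwanted MAC becomes available again at once.
    public String replaceMac(String unwanted, Set<String> alsoExcluded) {
        for (String candidate : free) {
            if (!candidate.equals(unwanted) && !alsoExcluded.contains(candidate)) {
                free.remove(candidate);
                free.add(unwanted); // returned MAC is free for others immediately
                return candidate;
            }
        }
        // The sensible failure described above: nothing left to swap to.
        throw new IllegalStateException(
                "this MAC address cannot be replaced; there isn't another one");
    }

    public static void main(String[] args) {
        ReplacingMacPool pool = new ReplacingMacPool("00:1a:4a:00:00:02");
        // "00:1a:4a:00:00:01" was allocated earlier; swap it for another MAC.
        System.out.println(pool.replaceMac("00:1a:4a:00:00:01", Set.of()));
    }
}
```

The point of the design, as the mail argues, is that "replace" is atomic from the caller's perspective: there is no window in which the pool is blocked, and depletion surfaces as an explicit error rather than silently handing back the same MAC.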

M.

- Original Message -
> On Thu, Sep 24, 2015 at 01:39:42PM +, Daniel Helgenberger wrote:
> > Hello,
> > 
> > I recently experienced an issue with mac address uses in ovirt with
> > foreman[1].
> > 
> > Bottom line, a mac address was recycled causing an issue where I could not
> > rebuild a host because of a stale DHCP reservation record.
> > 
> > What is the current behavior regarding the reuse of MAC addresses for new
> > VMs?
> > Can I somehow delay the recycling of a MAC?
> 
> Martin, I recall we had a bug asking not to immediately re-use a
> recently-released MAC address. Was this possible?
> 


Re: [ovirt-users] Caused by: java.lang.IllegalStateException: Insufficient amount of free MACs.

2015-02-18 Thread Martin Mucha
Hi, 

so the syntax of the MAC address ranges is OK; the log confirmed that I was
looking at the right place. I tried some tests with the pool, repeatedly
depleting all MAC addresses in random order and then putting all of them back,
and every time I got back to an empty pool, so it seems that the pool itself
does not leak. I'll try to find the potential problem, but first I'll ask one
more question that Martin (the other one) suggested: did you restart your
engine prior to these operations? Isn't it possible that you have it configured
with some much smaller range and forgot to restart before using the new range?
Also, can you give me the version of your system? Newer versions set ranges
differently than via engine-config (and without the need to restart).
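The deplete-and-return test described above can be sketched like this; the pool model and names are illustrative, not the engine's MacPoolManager:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class MacPoolLeakCheck {
    private final BitSet used;
    private final int size;

    public MacPoolLeakCheck(int size) {
        this.size = size;
        this.used = new BitSet(size);
    }

    int allocate() {
        int idx = used.nextClearBit(0);
        if (idx >= size) throw new IllegalStateException("Insufficient amount of free MACs.");
        used.set(idx);
        return idx;
    }

    void release(int idx) { used.clear(idx); }

    int freeCount() { return size - used.cardinality(); }

    public static void main(String[] args) {
        MacPoolLeakCheck pool = new MacPoolLeakCheck(100);
        Random rnd = new Random(42);
        for (int round = 0; round < 10; round++) {
            List<Integer> held = new ArrayList<>();
            // Deplete the pool completely...
            while (pool.freeCount() > 0) {
                held.add(pool.allocate());
            }
            // ...then return everything in random order.
            Collections.shuffle(held, rnd);
            for (int mac : held) {
                pool.release(mac);
            }
            // After every round the pool must be completely free again.
            if (pool.freeCount() != 100) throw new AssertionError("pool leaked");
        }
        System.out.println("no leak detected");
    }
}
```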

M.

- Original Message -
 Hi Martin,
 
 I am using the below MAC address pool ranges: http://ur1.ca/jqudg
 
 Engine log: http://ur1.ca/jquep
 
 I have created almost 8 OS templates and every template has 2 NICs... and for
 them I have added the MAC address pool... the total MAC address count should
 be 4072, and I have used about 1580 MAC addresses in my system... but now
 when I create a new VM it fails with the error Insufficient amount of free
 MACs
 
 Thanks,
 Punit
 
 
 
 
 On Tue, Feb 17, 2015 at 7:29 PM, Martin Mucha mmu...@redhat.com wrote:
 
  Hi,
 
  I was able to track down the responsible code using the provided error
  message. I'm not sure what deploy the VM with Template is, but I suspect
  you've imported a VM from a template. When that happens, a new MAC is
  obtained for each interface without a MAC address, or for each of them when
  importing as new. I did not see (so far) anything bad in the code. Can you
  provide me with some details I can verify or work with? Details about that
  VM (the number of its NICs or anything else you think may be important),
  the defined MAC address range, who else is using this MAC address range, etc.
 
  Mar.
 
  - Original Message -
   Hi,
  
   I am facing this strange issue when I deploy a VM with a Template...
  
   Caused by: java.lang.IllegalStateException: Insufficient amount of free
  MACs.
  
   Actually I have almost 2000 MAC addresses free in our environment, but VM
   creation failed with this error:
  
   Thanks,
   Punit
  
  
 
 


Re: [ovirt-users] Caused by: java.lang.IllegalStateException: Insufficient amount of free MACs.

2015-02-17 Thread Martin Mucha
Hi,

I was able to track down the responsible code using the provided error message.
I'm not sure what deploy the VM with Template is, but I suspect you've imported
a VM from a template. When that happens, a new MAC is obtained for each
interface without a MAC address, or for each of them when importing as new. I
did not see (so far) anything bad in the code. Can you provide me with some
details I can verify or work with? Details about that VM (the number of its
NICs or anything else you think may be important), the defined MAC address
range, who else is using this MAC address range, etc.

Mar.

- Original Message -
 Hi,
 
 I am facing this strange issue when I deploy a VM with a Template...
 
 Caused by: java.lang.IllegalStateException: Insufficient amount of free MACs.
 
 Actually I have almost 2000 MAC addresses free in our environment, but VM
 creation failed with this error:
 
 Thanks,
 Punit
 


[ovirt-users] reporting and removing unmanaged networks after deprecating org.ovirt.engine.core.common.action.VdcActionType#SetupNetworks

2015-02-02 Thread Martin Mucha
Hi,

I'd like to discuss how to properly report unmanaged networks, and how to ask
for their removal, after
org.ovirt.engine.core.common.action.VdcActionType#SetupNetworks
is removed.

We thought about several possibilities, and so far the best one is the
following.

Reporting unmanaged networks on a specific NIC:
———

We'd like to return a new collection under:
GET http://localhost:8080/api/hosts/{id}/nics/{id}/unmanagednetworks

returning (reporting) unmanaged networks like this:

<unmanaged_networks>
    <unmanaged_network>
        <nic_name>...</nic_name>
        <unmanaged_network_name>...</unmanaged_network_name>
        <vlan_id>...</vlan_id>
    </unmanaged_network>

    <unmanaged_network>
        ...
    </unmanaged_network>
</unmanaged_networks>


Removing unmanaged networks:
—

DELETE 
http://localhost:8080/api/hosts/{id}/nics/{id}/unmanagednetworks/{unmanaged_network_name}


===

Any ideas, hints, complaints, recommendations, or confirmations are welcome.
Martin.



Re: [ovirt-users] New Feature: engine NIC health check

2014-10-13 Thread Martin Mucha
ad a)
ad b) good, I will make it configurable via engine-config.

ad c) OK, we should probably think about what should be in that table. I
proposed that only failures be recorded, since there's currently no use for
other values/NIC states, and NICs will probably be functional most of the
time. This minimizes the need to write anything to the DB, hence the name
EngineNicFailures. I also wanted to store information in the format: This NIC
was not up during the check at 'yyyy.MM.dd HH:mm:ss'. That gives more
information than what you're talking about, but there's no need for this data,
so yes, currently it'd be possible to store the NIC status along with the last
failure timestamp. So the table could look like:

CREATE TABLE nics_health (
    id UUID PRIMARY KEY,
    name VARCHAR(255),
    is_healthy BOOL,
    last_failure TIMESTAMP
);

This way we have to update the record during each health check, while in the
approach I previously proposed only when there's a failure, and the 'purge' of
obsolete data can occur only on a fence request and/or on some time period.
What you're proposing provides less information (which is fine, since we don't
need it), and it's simpler, but it generates some unnecessary load on the DB.
It's not a big deal, but it's not optimal/necessary, since the state where a
NIC is not up is not probable (if I understand it correctly).

Sorry, have no idea what DWH stands for.

ad d) I have no problem extending this feature to provide the data over REST,
but others should probably agree with that. I don't think any client (external
system) needs this information, and I also don't think the engine NIC health
is any client's business in the first place.

——
I will update feature page asap.
M.


- Original Message -
From: Yair Zaslavsky yzasl...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: engine-de...@ovirt.org, users@ovirt.org
Sent: Friday, October 10, 2014 3:14:37 PM
Subject: Re: [ovirt-users] New Feature: engine NIC health check



- Original Message -
 From: Martin Mucha mmu...@redhat.com
 To: engine-de...@ovirt.org, users@ovirt.org
 Sent: Wednesday, October 8, 2014 2:33:06 PM
 Subject: [ovirt-users] New Feature: engine NIC health check
 
 Hi,
 
 here's a link for a new feature, related to monitoring the engine's NIC,
 trying to detect a failure on the engine itself and, in that case, block fencing.
 http://www.ovirt.org/Features/engine_NIC_health_check
 
 thanks for every input, especially input addressing some of the open issues.
 
 M.

I was curious about how you perform the health check, so I read the feature
page - good to learn more Java :)
Regarding open issues -
a. Yes, IMHO the scanning interval should be configured via engine-config - do
you see a reason not to do that? Maybe we should set a minimal interval value
and enforce it?
b. Same for the no failures since... interval.
c. I don't like the name of the table you're suggesting. Please consider an
alternative. Also, you may want to consider having a view that returns the
static information of the NIC + the stats part (dynamic part? maybe just
nic_state?). Why would you like to purge old data and not just hold a record
per NIC and update it each interval? In that case, no purging is required.
Maybe for DWH you will want some info on the history of the status of the
nics... but I'm not sure if this is relevant for now.
d. If you go with my view suggestion, you might consider displaying the
state in the REST API.

Yair



[ovirt-users] New Feature: engine NIC health check

2014-10-08 Thread Martin Mucha
Hi,

here's a link for a new feature, related to monitoring the engine's NIC,
trying to detect a failure on the engine itself and, in that case, block fencing.
http://www.ovirt.org/Features/engine_NIC_health_check

thanks for every input, especially input addressing some of the open issues.

M.


Re: [ovirt-users] Modify the MacPoolRange : Former MACs?

2014-09-10 Thread Martin Mucha
just to add one more comment: nothing changes for MACs allocated using the
previous MacPoolRange. After the MacPoolRange change, they will be treated
like user-specified MAC addresses, which also don't have to be part of any range.

m.

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Nicolas Ecarnot nico...@ecarnot.net, users@ovirt.org, Lior Vernia 
lver...@redhat.com
Sent: Wednesday, September 10, 2014 12:43:12 PM
Subject: Re: [ovirt-users] Modify the MacPoolRange : Former MACs?

On 09/10/2014 09:47 AM, Nicolas Ecarnot wrote:
 Hi,

 I plan to extend the MacPoolRange.

 If I also completely change the pool, in a way that does not imply the
 previous setting, will the manager ban all the previous MACs, now
 outside the defined pool? (and stop the VMs?)

 Regards,


no, it will just affect future MAC allocations.


[ovirt-users] oVirt 3.5 test day: retrying failed fencing

2014-07-30 Thread Martin Mucha
Hi, 

I've tested Bug 1090511, [RFE] Improve fencing robustness by retrying failed
attempts.
Spoiler alert: the tested feature worked, but fencing was not successful due
to bug https://bugzilla.redhat.com/1124141

---

How to set up the environment for testing:
- 3 hosts are required, at least two of them with PM (power management) enabled.
- 2 hosts (A, B) with PM enabled should be in one cluster, and the remaining
one (C) in another cluster. The reason is that the search for a fencing proxy
is first done in the same cluster; only if no host is available there are
hosts outside of this cluster considered. This separation is needed to make
sure that the right (not working) fencing proxy is selected first.

notation:
host A ~ defective host to be fenced
host B ~ first selected fencing proxy, which will fail to fence host A.
host C ~ second selected fencing proxy, which should succeed in fencing host A.
A and B are in the same cluster.

process:
1. On host B we alter iptables so it cannot contact host A and fence it. SSH
was blocked to disallow soft fencing, and IPMI was blocked to disallow 'hard'
fencing:

iptables -A OUTPUT -p udp -d 10.34.63.198 --dport 623 -j DROP
iptables -A OUTPUT -p tcp -d 10.34.63.178 --dport 22 -j DROP

2. On host A the iptables rule allowing connections to VDSM was removed, and
VDSM was restarted, so all connections need to be reopened. That makes the
engine think that the host is down/overloaded.
dropped rule:
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:54321

followed by:
systemctl restart vdsmd


Result: After the restart of vdsmd, the engine recognised host A as
unresponsive and tried to fence it. The first attempt to fence host A was
performed by host B and failed as expected; the second attempt was performed
by host C and, from the code perspective, succeeded. The error message [1]
was correctly displayed. However, the fence was not successful due to bug
https://bugzilla.redhat.com/1124141, which causes a java.lang.StackOverflowError.
The code related to this feature should be OK, but will work only after the
mentioned bug is fixed.

M.

[1]. Fencing operation failed with proxy host ID, trying another proxy...


Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-28 Thread Martin Mucha
Hi, 
you're right, I do know about these problems. THIS IS DEFINITELY NOT THE FINAL
CODE.

Why I did it this way: I come from an agile environment.
This is supposed to be the FIRST increment, not the last. I hate the waterfall
style of work -- an almighty solution in one swing. I'd like to make sure that
the main part, the core principle, is valid and approved. Making the GUI look
nice is marginal. So this is the data structure for the first increment. We
can definitely think of thousands of improvements, BUT this RFC already
includes more than 10 patch sets and NO core reviews. How can I know that
others will approve this and that I'm not completely wrong?

about UX: it's wrong, but just fine for a first increment. It can be used
somehow, and that's sufficient. Note: even with a table to enter each from-to
range there can be validation problems to handle. The GUI can be changed to a
better one once we know that this feature works. But in the meantime others
can test this feature's functionality via this ugly, but very fast to write, GUI!

about DB: I'm aware of DB normalization and of all the implications my design
has. Yes, storing it in one varchar column is a (very heavily used) DB
antipattern, but it is just fine for a first increment and very easy to fix.

If it's up to me, I'd like to wait for approval of the 'core' part of this
change (let's call it a spike), and finish the remaining 'marginalities' after
it. (Just to make myself clear: a proper DB design ISN'T marginal on an
absolute scale, but it IS very marginal relative to a situation where most of
the code hasn't been approved/reviewed yet.)

m.

- Original Message -
From: Yevgeny Zaspitsky yzasp...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: de...@ovirt.org, users@ovirt.org
Sent: Sunday, April 27, 2014 2:22:04 PM
Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC

Now for users@ovirt.org indeed.

- Original Message -
From: Yevgeny Zaspitsky yzasp...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: us...@ovrit.org, de...@ovirt.org
Sent: Sunday, April 27, 2014 2:29:46 PM
Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC

Martin,

I'd like to propose a different approach on how the ranges to be defined and 
stored.

Discussing this feature with Moti raised an alternative UX design:
Defining ranges could be added as a left tab on the create-DC dialog and a
sub-tab on an existing DC. It would be a table of start and end address
fields, and we can add a calculated # of MACs in the range and/or a summary
for the DC.
Also, that will make string parsing unneeded, prevent possible user mistakes
in the string format, and make validating every field of the range on the UI
side easier.
As you can see on the screenshot you've attached, even a single range doesn't
fit into the text box. In case of multiple ranges, managing them in a
single-line textbox would be very uncomfortable.

A range is an object with at least 2 members (start and end). And we have few 
of these for each data center.
Storing a collection of the objects in a single field in a relational DB seems 
a bit awkward to me. 
That has few disadvantages:
1. is not normalized
2. make data validation nearly impossible
3. make querying the data very difficult
4. is restraining our ability to extend the object (e.g. a user might like to 
give a description to a range)
So IMHO a satellite table with the FK to storage_pool would be a more robust 
design.
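A minimal sketch of what one row of such a satellite table could look like on the engine side. The table, column, and class names below are hypothetical illustrations of the proposal, not the actual schema:

```java
import java.util.UUID;

// One row of the proposed satellite table, engine-side. The DDL below is an
// illustrative assumption, not the actual schema:
//
//   CREATE TABLE mac_pool_ranges (
//       id              UUID PRIMARY KEY,
//       storage_pool_id UUID NOT NULL REFERENCES storage_pool(id),
//       range_start     MACADDR NOT NULL,
//       range_end       MACADDR NOT NULL,
//       description     VARCHAR(4000)  -- extension point (4) above
//   );
public class MacPoolRange {
    private final UUID storagePoolId;
    private final long rangeStart;  // 48-bit MAC packed into a long
    private final long rangeEnd;
    private final String description;

    public MacPoolRange(UUID storagePoolId, String start, String end, String description) {
        this.storagePoolId = storagePoolId;
        this.rangeStart = macToLong(start);
        this.rangeEnd = macToLong(end);
        this.description = description;
        // Per-field validation (point 2) becomes trivial once each bound is its own column.
        if (rangeStart > rangeEnd) {
            throw new IllegalArgumentException("start must not exceed end");
        }
    }

    static long macToLong(String mac) {
        long v = 0;
        for (String octet : mac.split(":")) {
            v = (v << 8) | Integer.parseInt(octet, 16);
        }
        return v;
    }

    // The calculated "# of MACs" value proposed for the UI.
    public long macCount() {
        return rangeEnd - rangeStart + 1;
    }

    public static void main(String[] args) {
        MacPoolRange r = new MacPoolRange(UUID.randomUUID(),
                "00:1a:4a:00:00:00", "00:1a:4a:00:00:63", "rack 1");
        System.out.println(r.macCount());  // prints: 100
    }
}
```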

Best regards, 
 
Yevgeny Zaspitsky 
Senior Software Engineer 
Red Hat Israel 


- Original Message -
From: Martin Mucha mmu...@redhat.com
To: users@ovirt.org, de...@ovirt.org
Sent: Thursday, April 10, 2014 9:59:44 AM
Subject: [ovirt-devel] new feature

Hi,

I'd like to notify you about new feature, which allows to specify distinct MAC 
pools, currently one per data center.
http://www.ovirt.org/Scoped_MacPoolManager

any comments/proposals for improvement are very welcomed.
Martin.
___
Devel mailing list
de...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-28 Thread Martin Mucha
Thanks for bringing up these datatypes; I was not aware of them.

Are we allowed/supposed to use vendor-specific types where appropriate?
Note: using this type will enforce validity, right, but that does not mean 
much (from the perspective of other code), since one is still obliged to do all 
the checking on all other app layers, avoiding calls from one layer to another with 
invalid data (calls to the backend are expensive; calls to the DB are even more 
expensive considering many users working simultaneously).

"...and will allow functionality such as comparison (is required)."
Maybe I do not understand this. Which MAC range comparison is currently 
required and not possible? Either I don't get it, or I'm not aware of that use 
case.

m.

- Original Message -
From: Moti Asayag masa...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: Yevgeny Zaspitsky yzasp...@redhat.com, users@ovirt.org, de...@ovirt.org
Sent: Monday, April 28, 2014 8:21:50 AM
Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC



- Original Message -
 From: Martin Mucha mmu...@redhat.com
 To: Yevgeny Zaspitsky yzasp...@redhat.com
 Cc: users@ovirt.org, de...@ovirt.org
 Sent: Monday, April 28, 2014 9:14:38 AM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Hi,
 you're right, I do know about these problems. THIS IS DEFINITELY NOT A FINAL
 CODE.
 
 Why I did it this way: I come from agile environment.
 This supposed to be FIRST increment. Not last. I hate waterfall style of work
 -- almighty solution in one swing. I'd like to make sure, that main part,
 that core principle is valid and approved. Making gui look nice is marginal.
 So it is data structure for first increment. We can definitely think of
 thousands of improvements, BUT this RFC already include more than 10 patch
 sets and NO core reviews. How can I know, that others will approve this and
 I'm not completely wrong?
 
 about UX: it's wrong, but just fine for first increment. It can be used
 somehow and that just sufficient. Note: even with table to enter each
 from-to range there can be validation problem needed to be handled. Gui can
 changed to better one, when we know, that this feature works. But meantime
 others can test this feature functionality via this ugly, but very fast to
 write, gui!
 
 about DB: I'm aware of DB normalization, and about all implications my
 design has. Yes, storing it in one varchar column is DB (very heavily
 used) antipattern, just fine for first increment and very easy to fix.
 

There is another motivation for using a normalized data, specifically for
mac addresses - using the MAC addresses type [1] will enforce validity of
the input and will allow functionality such as comparison (is required).

[1] http://www.postgresql.org/docs/8.4/static/datatype-net-types.html

 If it's up to me, I'd like to wait for approval of 'core' part of this change
 (lets call it spike), and finish remaining 'marginalities' after it. (just
 to make myself clear proper db design ISN'T marginal measuring it using
 absolute scale, but it IS very marginal related to situation where most of
 code wasn't approved/reviewed yet).
 
 m.
 
 - Original Message -
 From: Yevgeny Zaspitsky yzasp...@redhat.com
 To: Martin Mucha mmu...@redhat.com
 Cc: de...@ovirt.org, users@ovirt.org
 Sent: Sunday, April 27, 2014 2:22:04 PM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Now for users@ovirt.org indeed.
 
 - Original Message -
 From: Yevgeny Zaspitsky yzasp...@redhat.com
 To: Martin Mucha mmu...@redhat.com
 Cc: us...@ovrit.org, de...@ovirt.org
 Sent: Sunday, April 27, 2014 2:29:46 PM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Martin,
 
 I'd like to propose a different approach on how the ranges to be defined and
 stored.
 
 Discussing this feature with Moti raised the alternative UX design:
 Defining ranges could be added as a left-tab on create DC dialog and a
 sub-tab on an existing DC. It would be a table of start and end address
 fields and we can add a calculated # of MACs in the range and/or summary for
 the DC.
 Also that will make string parsing unneeded, prevent possible user mistakes
 in the string format and make possible validating every field of the range
 on the UI side easier.
 As you can see on the screenshot you've attached even a single range doesn't
 fit to the text box. In case of multiple ranges managing them in a single
 line textbox would be very uncomfortable.
 
 A range is an object with at least 2 members (start and end). And we have few
 of these for each data center.
 Storing a collection of the objects in a single field in a relational DB
 seems a bit awkward to me.
 That has few disadvantages:
 1. is not normalized
 2. make data validation nearly impossible
 3. make querying the data very difficult
 4. is restraining our ability to extend the object (e.g. a user might like to
 give a description to a range)
 So IMHO a satellite table with the FK to storage_pool would be a more robust
 design

Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-28 Thread Martin Mucha
Hi,

Thanks for your input. I'll try to satisfy your request to be able to set the range 
'width' either by 'end boundary' or by 'MAC count' in the GUI design.

Prior to that, there are more fundamental decisions to be made -- like whether 
the pool definition is mandatory or optional, and how this influences the app 
for upgrading users. I'm pushing the idea of an optional definition with one 
global pool as a fallback. And like I said in previous emails, from this point 
of view the GUI design is marginal, since we do not know what exact things should 
be displayed there (the GUI will be a little different for an optional pool 
definition). This is to be decided this week; after that we can discuss the final 
design of the GUI.

m.

- Original Message -
From: Genadi Chereshnya gcher...@redhat.com
To: Moti Asayag masa...@redhat.com
Cc: de...@ovirt.org, users@ovirt.org, Martin Mucha mmu...@redhat.com, 
Martin Pavlik mpav...@redhat.com
Sent: Monday, April 28, 2014 8:47:11 AM
Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

Hi, 
We would like to propose a slightly better solution from the user-experience side.

We should have 3 fields for each range:
1) Start range
2) End range
3) Number of MACs
Start range is mandatory; you then fill in either End range or Number of MACs, 
and the 3rd field will be filled in automatically according to the others.
For example:
1) If Start range is 00:00:00:00:00:01 and Number of MACs is 5, then End 
range should be filled in with 00:00:00:00:00:05.
2) If Start range is 00:00:00:00:00:01 and End range is 00:00:00:00:00:05, 
then Number of MACs should be filled in with 5. 

For updates: End range and Number of MACs should be kept in sync automatically, 
so if you update End range then Number of MACs should be updated, and 
vice versa.

For adding several MAC pool ranges for DC we can use the + or - sign as we 
do for adding VNIC profile or Labels field.
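The mutual derivation of the three fields is simple MAC arithmetic over the 48-bit address. A sketch of the two directions (class and method names are hypothetical):

```java
// Deriving the third field from the other two, as proposed above.
public class MacRangeFields {

    static long macToLong(String mac) {
        long v = 0;
        for (String octet : mac.split(":")) {
            v = (v << 8) | Integer.parseInt(octet, 16);
        }
        return v;
    }

    static String longToMac(long v) {
        StringBuilder sb = new StringBuilder();
        for (int shift = 40; shift >= 0; shift -= 8) {
            if (sb.length() > 0) sb.append(':');
            sb.append(String.format("%02x", (v >> shift) & 0xff));
        }
        return sb.toString();
    }

    // End range from Start range + Number of MACs (example 1 above).
    static String endFromCount(String start, long count) {
        return longToMac(macToLong(start) + count - 1);
    }

    // Number of MACs from Start range + End range (example 2 above).
    static long countFromEnd(String start, String end) {
        return macToLong(end) - macToLong(start) + 1;
    }

    public static void main(String[] args) {
        System.out.println(endFromCount("00:00:00:00:00:01", 5));                    // prints: 00:00:00:00:00:05
        System.out.println(countFromEnd("00:00:00:00:00:01", "00:00:00:00:00:05")); // prints: 5
    }
}
```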

Regards,
   Genadi

- Original Message -
From: Moti Asayag masa...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: de...@ovirt.org, users@ovirt.org
Sent: Monday, April 28, 2014 9:21:50 AM
Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC



- Original Message -
 From: Martin Mucha mmu...@redhat.com
 To: Yevgeny Zaspitsky yzasp...@redhat.com
 Cc: users@ovirt.org, de...@ovirt.org
 Sent: Monday, April 28, 2014 9:14:38 AM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Hi,
 you're right, I do know about these problems. THIS IS DEFINITELY NOT A FINAL
 CODE.
 
 Why I did it this way: I come from agile environment.
 This supposed to be FIRST increment. Not last. I hate waterfall style of work
 -- almighty solution in one swing. I'd like to make sure, that main part,
 that core principle is valid and approved. Making gui look nice is marginal.
 So it is data structure for first increment. We can definitely think of
 thousands of improvements, BUT this RFC already include more than 10 patch
 sets and NO core reviews. How can I know, that others will approve this and
 I'm not completely wrong?
 
 about UX: it's wrong, but just fine for first increment. It can be used
 somehow and that just sufficient. Note: even with table to enter each
 from-to range there can be validation problem needed to be handled. Gui can
 changed to better one, when we know, that this feature works. But meantime
 others can test this feature functionality via this ugly, but very fast to
 write, gui!
 
 about DB: I'm aware of DB normalization, and about all implications my
 design has. Yes, storing it in one varchar column is DB (very heavily
 used) antipattern, just fine for first increment and very easy to fix.
 

There is another motivation for using a normalized data, specifically for
mac addresses - using the MAC addresses type [1] will enforce validity of
the input and will allow functionality such as comparison (is required).

[1] http://www.postgresql.org/docs/8.4/static/datatype-net-types.html

 If it's up to me, I'd like to wait for approval of 'core' part of this change
 (lets call it spike), and finish remaining 'marginalities' after it. (just
 to make myself clear proper db design ISN'T marginal measuring it using
 absolute scale, but it IS very marginal related to situation where most of
 code wasn't approved/reviewed yet).
 
 m.
 
 - Original Message -
 From: Yevgeny Zaspitsky yzasp...@redhat.com
 To: Martin Mucha mmu...@redhat.com
 Cc: de...@ovirt.org, users@ovirt.org
 Sent: Sunday, April 27, 2014 2:22:04 PM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Now for users@ovirt.org indeed.
 
 - Original Message -
 From: Yevgeny Zaspitsky yzasp...@redhat.com
 To: Martin Mucha mmu...@redhat.com
 Cc: us...@ovrit.org, de...@ovirt.org
 Sent: Sunday, April 27, 2014 2:29:46 PM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Martin,
 
 I'd like to propose a different approach on how the ranges to be defined and
 stored.
 
 Discussing this feature with Moti

Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-28 Thread Martin Mucha
ad 1) My thinking was the same. If it's optional, then the upgrade process is: 
'you do not have to do anything', which seemed best to me.
ad 2) Yes, this has to be reflected in the GUI. Currently in the business layer there 
are checks which do not let you use a multicast address (an exception is thrown on such 
an attempt -- this is appropriate from the MAC pool perspective). When the 
user specifies MAC ranges containing a multicast address, that MAC address is 
present in the pool (due to an implementation restriction), but it is flagged as used, 
so the system never assigns it. And if the user tries to assign it by hand, it will fail, 
like I said.
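For reference, the multicast/unicast distinction is the I/G bit: the least-significant bit of the first octet of the MAC address. A minimal sketch of such a check (the engine's actual validation code may differ):

```java
// Multicast detection via the I/G bit of the first octet.
public class MacMulticast {

    public static boolean isMulticast(String mac) {
        int firstOctet = Integer.parseInt(mac.split(":")[0], 16);
        return (firstOctet & 0x01) != 0;  // odd first octet => group (multicast) address
    }

    public static void main(String[] args) {
        System.out.println(isMulticast("01:00:5e:00:00:01")); // prints: true
        System.out.println(isMulticast("00:1a:4a:15:c0:fe")); // prints: false
    }
}
```

When counting "Number of MACs" in a range, each address for which this check returns true would be subtracted from the usable total, as proposed above.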

m.

- Original Message -
From: Genadi Chereshnya gcher...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: Moti Asayag masa...@redhat.com, de...@ovirt.org, users@ovirt.org, 
Martin Pavlik mpav...@redhat.com
Sent: Monday, April 28, 2014 10:12:06 AM
Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

1) In our opinion the pool definition should be optional.
We should preserve the existing behavior. It will be useful especially for the 
upgrade scenarios.
 
2) As well for the Number of MACs we proposed earlier you should take into 
account the multicast addresses (if they are in the range) and to reduce them 
from the count of Number of MACs

Genadi

- Original Message -
From: Martin Mucha mmu...@redhat.com
To: Genadi Chereshnya gcher...@redhat.com
Cc: Moti Asayag masa...@redhat.com, de...@ovirt.org, users@ovirt.org, 
Martin Pavlik mpav...@redhat.com
Sent: Monday, April 28, 2014 9:59:06 AM
Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

Hi,

thanks for your input, I'll try to satisfy your request to be able to set range 
'width' either by 'end boundary' or 'mac count' in gui design.

Prior to that there are more fundamental decisions to be made -- like whether 
the pool definition is mandatory or optional, and how this influence the app 
for upgrading users. I'm pushing the idea of optional definition with one 
global pool as a fallback. And like I said in previous emails, from this point 
of view is gui design marginal, since we do not know what exact things should 
be displayed there(gui will be little bit different for optional pool 
definition). This is to be decided this week, after that we can discuss final 
design of gui.

m.

- Original Message -
From: Genadi Chereshnya gcher...@redhat.com
To: Moti Asayag masa...@redhat.com
Cc: de...@ovirt.org, users@ovirt.org, Martin Mucha mmu...@redhat.com, 
Martin Pavlik mpav...@redhat.com
Sent: Monday, April 28, 2014 8:47:11 AM
Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

Hi, 
We would like to propose a little bit better solution from user experience side.

We should have 3 fields for each range:
1) Start range
2) End range
3) Number of MACs
When you have to fill in End range or Number of MACs (when start range is 
mandatory).
And the 3rd field will be filled in automatically according to others.
For example:
1) If Start range is 00:00:00:00:00:01 and Number of MACs is 5 then End 
range should be filled in with 00:00:00:00:00:05.
2) If Start range is 00:00:00:00:00:01 and End range is 00:00:00:00:00:05, 
then Number of MACs should be filled in with 5. 

For update: End range and Number of MACs should be updated automatically as 
well, so if you update End range the Number of MACs should be updated and 
vice versa.

For adding several MAC pool ranges for DC we can use the + or - sign as we 
do for adding VNIC profile or Labels field.

Regards,
   Genadi

- Original Message -
From: Moti Asayag masa...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: de...@ovirt.org, users@ovirt.org
Sent: Monday, April 28, 2014 9:21:50 AM
Subject: Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC



- Original Message -
 From: Martin Mucha mmu...@redhat.com
 To: Yevgeny Zaspitsky yzasp...@redhat.com
 Cc: users@ovirt.org, de...@ovirt.org
 Sent: Monday, April 28, 2014 9:14:38 AM
 Subject: Re: [ovirt-devel] Feature Page: Mac Pool per DC
 
 Hi,
 you're right, I do know about these problems. THIS IS DEFINITELY NOT A FINAL
 CODE.
 
 Why I did it this way: I come from agile environment.
 This supposed to be FIRST increment. Not last. I hate waterfall style of work
 -- almighty solution in one swing. I'd like to make sure, that main part,
 that core principle is valid and approved. Making gui look nice is marginal.
 So it is data structure for first increment. We can definitely think of
 thousands of improvements, BUT this RFC already include more than 10 patch
 sets and NO core reviews. How can I know, that others will approve this and
 I'm not completely wrong?
 
 about UX: it's wrong, but just fine for first increment. It can be used
 somehow and that just sufficient. Note: even with table to enter each
 from-to range there can be validation problem needed to be handled. Gui can
 changed to better one, when we know, that this feature works

Re: [ovirt-users] Feature Page: Mac Pool per DC

2014-04-24 Thread Martin Mucha
no. you don't change mac addresses on the fly.
OK, I was just asking if that's an option. No reallocating.

i don't see why you need to keep it in memory at all?
What I did is not a rewrite but an alteration of existing code -- I just added one 
layer above the existing pool implementation. I'm not sure about that check; the code 
existed before I started working on it. One explanation could be that if 
duplicates are not allowed in the config, we want to check user input and detect 
when the user tries to add the same MAC address twice. Yes, *this* can be done using a 
simple DB query. I'll check that out; I'm not sufficiently aware of the context to 
be able to say confidently whether it can be removed or must stay.

iiuc, you keep in memory the unused-ranges of the various mac_pools.
when a mac address is released, you need to check if it is in the range 
of the relevant mac_pool for the VM (default, dc, cluster, vm_pool).
if it is, you need to return it to that mac_pool. otherwise, the 
mac_pool is not relevant for this out-of-range mac address, and you just 
stop using it.

Currently it works like this: you identify the pool you want and get one (based on 
system config). You release (free) a MAC from this pool without any care about what 
type of MAC it is. The method returns 'true' if it was released (== the count of its 
usages reached zero, or it was not used at all). I think it does what you want, maybe 
with a little less client-code involvement. If the client code provided the wrong 
pool identification, or released an unused MAC, then it's a coding error and all 
we can do is log it.

remember, you have to check the released mac address for the specific 
associated mac_pool, since we do (read: should[1]) allow overlapping mac 
addresses (hence ranges) in different mac_pool.

There's no "free user-specified MAC address" method; there's only a freeMac 
method. So the flow is like this: you identify the pool somehow -- by the NIC for 
which you're releasing the MAC, by data center id, you name it. Then you release the MAC 
using the freeMac method. If it was used, it'll be released; if it was used multiple 
times, the usage count is decreased. I do not see how overlapping with other 
pools is related to that: you identified a pool and freed a MAC from it; other pools 
remain intact.
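The usage-counted release semantics described above can be sketched as follows. This is a simplified stand-in for illustration, not the actual MacPoolManager code:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of usage-counted MAC release (not the engine's code).
public class CountedMacPool {
    private final Map<String, Integer> usageCount = new HashMap<>();

    public void addMac(String mac) {
        usageCount.merge(mac, 1, Integer::sum);  // duplicates increase the usage count
    }

    // Returns true once the usage count reaches zero, or if the MAC was never used.
    public boolean freeMac(String mac) {
        Integer count = usageCount.get(mac);
        if (count == null) {
            return true;            // releasing an unused MAC: coding error, nothing to undo
        }
        if (count == 1) {
            usageCount.remove(mac); // last usage released
            return true;
        }
        usageCount.put(mac, count - 1);
        return false;
    }

    public static void main(String[] args) {
        CountedMacPool pool = new CountedMacPool();
        pool.addMac("00:1a:4a:15:c0:fe");
        pool.addMac("00:1a:4a:15:c0:fe");                      // used twice
        System.out.println(pool.freeMac("00:1a:4a:15:c0:fe")); // prints: false (one usage left)
        System.out.println(pool.freeMac("00:1a:4a:15:c0:fe")); // prints: true (fully released)
    }
}
```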

---
about the cases you mentioned: 
I'll check whether those MAC addresses which were custom ones and which, after the 
ranges alteration, lie within the ranges of the MAC pool get marked as used in that 
pool. It should be true, but I'd rather write a test for it.

M.

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: users@ovirt.org, de...@ovirt.org
Sent: Wednesday, April 23, 2014 10:32:33 PM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC

On 04/23/2014 11:12 AM, Martin Mucha wrote:
 Hi,

 I was describing current state, first iteration. Need of restart is something 
 which should not exist, I've removed that necessity meantime.
 Altered flow: You allocate mac address for nic in data center without own 
 pool, it gets registered in global pool. Then you modify settings of that 
 data center so that new pool is created for it. All NICs for that data center 
 is queries from DB, it's macs released from global pool and added to data 
 center scope pool. And other way around. When you delete this scoped pool, 
 all its content will be moved to global pool. Feature page is updated.

 Note: *previously* there was MAC placed in wrong pool only after modification 
 of existing data center, which caused entirely new pool to be created (there 
 wasn't pool for this scope, after modification there is). All other 
 operations were fine. Now all manipulation with scoped pools should be ok.

 Note2: all that scoped pool handling is implemented as strategy. If we are 
 unsatisfied with this implementation we could create another one and switch 
 to it without modifying 'calling' code. Also many implementation may coexist 
 and we can switch between them (on app start up) upon config.

 Question: When allocating MAC, not one specified by user, system picks 
 available mac from given mac pool. Imagine, that after some time then mac 
 pool ranges changes, and lets say that whole new interval of macs is used, 
 not overlapping with former one. Then all previously allocated macs will be 
 present in altered pool as a user specified ones -- since they are outside of 
 defined ranges. With large number of this mac address this have detrimental 
 effect on memory usage. So if this is a real scenario, it would be 
 acceptable(or welcomed) for you to reassign all mac address which were 
 selected by system? For example on engine start / vm start.

no. you don't change mac addresses on the fly.
also, if the mac address isn't in the range of the scope, i don't see 
why you need to keep it in memory at all?

iiuc, you keep in memory the unused-ranges of the various mac_pools.
when a mac address is released, you need to check if it is in the range 
of the relevant mac_pool for the VM (default, dc, cluster, vm_pool

Re: [ovirt-users] Feature Page: Mac Pool per DC

2014-04-24 Thread Martin Mucha
All you said is valid, and the code satisfies it, except for that free-MAC method. I 
cannot tell you the reasons behind this, but MacPoolManager was designed to work 
with MACs while differentiating among them only internally. Client code can say 
neither "release this MAC as a user-specified MAC" nor "add this MAC 
as a user-specified one" -- only "free this MAC" and "add this MAC". The responsibility 
for deciding is upon the pool implementation. I do not think this is a bad way, though.

M.

- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: users@ovirt.org
Sent: Thursday, April 24, 2014 1:05:27 PM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC



Am 24.04.2014 11:58, schrieb Martin Mucha:
 there's no free user specified mac address method. There's only freeMac 
 method.

There has to be something somewhere in the code.

What you can currently do in the released ovirt versions is:
assign a custom mac to a vm nic, which is not part of any configured
mac pool.

you can do this by hand through the gui (webadmin) or via REST /api call.

This must be possible in the future, for backwards compatibility.
And what's also a must: being able to have the same MAC pool ranges in different
DCs.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature Page: Mac Pool per DC

2014-04-23 Thread Martin Mucha
Hi, 

I was describing the current state, the first iteration. The need for a restart is 
something which should not exist; I've removed that necessity in the meantime.
The altered flow: you allocate a MAC address for a NIC in a data center without its own 
pool, and it gets registered in the global pool. Then you modify the settings of that 
data center so that a new pool is created for it. All NICs for that data center are 
queried from the DB, their MACs released from the global pool and added to the 
data-center-scoped pool. And the other way around: when you delete this scoped pool, 
all its content will be moved to the global pool. The feature page is updated.

Note: *previously*, a MAC was placed in the wrong pool only after a modification 
of an existing data center which caused an entirely new pool to be created (there 
wasn't a pool for this scope; after the modification there is). All other operations 
were fine. Now all manipulation with scoped pools should be OK.

Note2: all the scoped-pool handling is implemented as a strategy. If we are 
unsatisfied with this implementation, we can create another one and switch to 
it without modifying the 'calling' code. Many implementations may also coexist, 
and we can switch between them (on app start-up) via config.

Question: when allocating a MAC -- not one specified by the user -- the system picks an 
available MAC from the given MAC pool. Imagine that after some time the MAC pool 
ranges change, and let's say a whole new interval of MACs is used, not 
overlapping with the former one. Then all previously allocated MACs will be present 
in the altered pool as user-specified ones, since they are outside the defined 
ranges. With a large number of such MAC addresses this has a detrimental effect on 
memory usage. So if this is a real scenario, would it be acceptable (or 
welcome) for you to have the system reassign all MAC addresses which were selected 
by the system? For example on engine start / VM start.

M.

- Original Message -
From: Itamar Heim ih...@redhat.com
To: Martin Mucha mmu...@redhat.com
Cc: users@ovirt.org, de...@ovirt.org
Sent: Tuesday, April 22, 2014 5:15:35 PM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC

On 04/18/2014 01:17 PM, Martin Mucha wrote:
 Hi,

 I'll try to describe it little bit more. Lets say, that we've got one data 
 center. It's not configured yet to have its own mac pool. So in system is 
 only one, global pool. We create few VMs and it's NICs will obtain its MAC 
 from this global pool, marking them as used. Next we alter data center 
 definition, so now it uses it's own mac pool. In system from this point on 
 exists two mac pools, one global and one related to this data center, but 
 those allocated MACs are still allocated in global pool, since new data 
 center creation does not (yet) contain logic to get all assigned MACs related 
 to this data center and reassign them in new pool. However, after app restart 
 all VmNics are read from db and placed to appropriate pools. Lets assume, 
 that we've performed such restart. Now we realized, that we actually don't 
 want that data center have own mac pool, so we alter it's definition removing 
 mac pool ranges. Pool related to this data center will be removed and it's 
 content will be moved to a scope above this data center -- into global scope pool. We know, that 
everything what's allocated in pool to be removed is still used, but we need to 
track it elsewhere and currently there's just one option, global pool. So to 
answer your last question. When I remove scope, it's pool is gone and its 
content moved elsewhere. Next, when MAC is returned to the pool, the request 
goes like: give me pool for this virtual machine, and whatever pool it is, I'm 
returning this MAC to it. Clients of ScopedMacPoolManager do not know which 
pool they're talking to. Decision, which pool is right for them, is done behind 
the scenes upon their identification (I want pool for this logical network).

 Notice, that there is one problem in deciding which scope/pool to use. 
 There are places in code, which requires pool related to given data center, 
 identified by guid. For that request, only data center scope or something 
 broader like global scope can be returned. So even if one want to use one 
 pool per logical network, requests identified by data center id still can 
 return only data center scope or broader, and there are no chance returning 
 pool related to logical network (except for situation, where there is sole 
 logical network in that data center).

 Thanks for suggestion for another scopes. One question: if we're implementing 
 them, would you like just to pick a *sole* non-global scope you want to use 
 in your system (like data center related pools ONLY plus one global, or 
 logical network related pools ONLY plus one global) or would it be (more) 
 beneficial to you to have implemented some sort of cascading and overriding? 
 Like: this data center uses *this* pool, BUT except for *this* logical 
 network, which should use *this* one instead.

 I'll update feature page to contain

Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-22 Thread Martin Mucha
Hi,

I like to answer questions. The presence of questions in a motivated environment 
means there is a flaw in the documentation/study material which needs to be 
fixed :)

To answer your question: 
you get the pool you want to use -- either the global one (explicitly, using the method 
org.ovirt.engine.core.bll.network.macPoolManager.ScopedMacPoolManager#defaultScope())
 or one related to some scope, which you identify somehow -- like in the previous mail: 
"give me the pool for this data center". When you have this pool, you can allocate 
*some* new MAC (the system decides which one it will be) or you can allocate an 
*explicit* one, i.e. use the MAC address you've specified. I think the latter is 
what you meant by "assigning by hand". There is just a performance difference 
between these two allocations. Once the pool which has to be used is 
identified, everything that comes after happens on *this* pool.

Example (I'm using naming from the code here; storagePool follows the DB table for a data 
center):
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac(00:1a:4a:15:c0:fe);

Let's discuss the parts of this command:

ScopedMacPoolManager.scopeFor() // means I want scope ...
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId)   //... which is 
related to storagePool and identified by storagePoolID
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool()//... 
and I want existing pool for this scope
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac(00:1a:4a:15:c0:fe)
   //... and I want to add this mac address to it.

So in short, whatever you do with a pool you obtained happens on this pool only. You do 
not have code control over which pool you get -- for example, if the system is 
configured to use a single pool only, then a request for a datacenter-related pool 
still returns that sole one -- but once you have that pool, everything happens on 
this pool, and, unless the data center configuration is altered, the same request in 
the future should return the same pool.
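The fallback behaviour described here can be sketched as a scope lookup that returns the data-center pool when one is configured and the global pool otherwise. Names are hypothetical; the real ScopedMacPoolManager API differs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of scope resolution with a global fallback (hypothetical names).
public class ScopedPools {
    private final String globalPool = "global";
    private final Map<UUID, String> perDataCenter = new HashMap<>();

    public void defineDataCenterPool(UUID dcId, String pool) {
        perDataCenter.put(dcId, pool);
    }

    // Callers always ask by scope; the fallback decision stays hidden in here.
    public String poolForDataCenter(UUID dcId) {
        return perDataCenter.getOrDefault(dcId, globalPool);
    }

    public static void main(String[] args) {
        ScopedPools pools = new ScopedPools();
        UUID dc = UUID.randomUUID();
        System.out.println(pools.poolForDataCenter(dc)); // prints: global (no scoped pool yet)
        pools.defineDataCenterPool(dc, "dc-pool");
        System.out.println(pools.poolForDataCenter(dc)); // prints: dc-pool
    }
}
```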

Now a small spoiler (it's not merged to the production branch yet) -- the performance 
difference between allocating a user-provided MAC and a MAC from the MAC pool range: 
you should try to avoid allocating a MAC which is outside the ranges of the 
configured MAC pool (either the global or a scoped one). It's perfectly OK to 
allocate a specific MAC address from inside these ranges; that's actually a little 
more efficient than letting the system pick one for you. But if you use one from 
outside those ranges, your allocated MAC ends up in far less memory-efficient 
storage (approx. 100 times less efficient). So if you want to use user-specified 
MACs, you can, but tell the system which range those MACs will come from (via the MAC 
pool configuration).
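One plausible reading of the efficiency gap, sketched below under that assumption (not the engine's actual data structures): in-range MACs can be tracked as single bits in a BitSet spanning the configured range, while each out-of-range MAC needs a full boxed map entry.

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: one bit per in-range MAC vs. one map entry per
// out-of-range MAC. Hypothetical structure, not the real implementation.
public class RangedMacStorage {
    private final long rangeStart;
    private final long rangeEnd;
    private final BitSet inRangeUsed;                                   // 1 bit per MAC
    private final Map<Long, Boolean> outOfRangeUsed = new HashMap<>();  // full entry per MAC

    public RangedMacStorage(long rangeStart, long rangeEnd) {
        this.rangeStart = rangeStart;
        this.rangeEnd = rangeEnd;
        this.inRangeUsed = new BitSet((int) (rangeEnd - rangeStart + 1));
    }

    public void use(long mac) {
        if (mac >= rangeStart && mac <= rangeEnd) {
            inRangeUsed.set((int) (mac - rangeStart));  // cheap: flip one bit
        } else {
            outOfRangeUsed.put(mac, Boolean.TRUE);      // costly: whole map entry
        }
    }

    public boolean isUsed(long mac) {
        if (mac >= rangeStart && mac <= rangeEnd) {
            return inRangeUsed.get((int) (mac - rangeStart));
        }
        return outOfRangeUsed.containsKey(mac);
    }

    public static void main(String[] args) {
        RangedMacStorage s = new RangedMacStorage(0x001a4a000000L, 0x001a4a0000ffL);
        s.use(0x001a4a000010L);   // inside the range
        s.use(0x00ffffffffffL);   // outside the range
        System.out.println(s.isUsed(0x001a4a000010L) + " " + s.isUsed(0x00ffffffffffL)); // prints: true true
    }
}
```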

M.

- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: Martin Mucha mmu...@redhat.com, Itamar Heim ih...@redhat.com
Cc: users@ovirt.org, de...@ovirt.org
Sent: Tuesday, April 22, 2014 8:31:31 AM
Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC

Hi,

thanks for the very detailed answers.

So here is another question:

How are MACs handled which were assigned by hand?
Do they also get registered with the global or with
the datacenter pool?
Are they tracked at all?
I'm currently assigning MACs via the API directly
to the VMs and do not let oVirt decide itself
which MAC goes where.

Am 18.04.2014 12:17, schrieb Martin Mucha:
 Hi, 
 
 I'll try to describe it a little bit more. Let's say that we've got one data 
 center. It's not yet configured to have its own mac pool, so there is only 
 one, global pool in the system. We create a few VMs, and their NICs will 
 obtain their MACs from this global pool, marking them as used. Next we alter 
 the data center definition so that it now uses its own mac pool. From this 
 point on, two mac pools exist in the system, one global and one related to 
 this data center, but those allocated MACs are still allocated in the global 
 pool, since data center creation does not (yet) contain logic to collect all 
 assigned MACs related to this data center and reassign them in the new pool. 
 However, after an app restart all VmNics are read from the db and placed into 
 the appropriate pools. Let's assume that we've performed such a restart. Now 
 we realize that we actually don't want that data center to have its own mac 
 pool, so we alter its definition, removing the mac pool ranges. The pool 
 related to this data center will be removed and its content will be moved to 
 the scope above this data center -- the global scope pool. We know that 
 everything allocated in the pool to be removed is still in use, but we need 
 to track it elsewhere, and currently there's just one option, the global 
 pool. So, to answer your last question: when I remove a scope, its pool is 
 gone and its content moved elsewhere. Next, when a MAC is returned to the 
 pool, the request goes like: give me the pool for this virtual machine, and 
 whatever pool that is, I'm returning this MAC to it. Clients of 
 ScopedMacPoolManager do not know which pool they're talking to. The decision 
 of which pool is right for them is made behind the scenes based on their 
 identification (I want a pool for this logical network).

Re: [ovirt-users] [ovirt-devel] new feature

2014-04-18 Thread Martin Mucha
Hi,

sorry for the late answer.

Currently, yes. You can. 

Formerly there was a sole pool for the whole app, with the possibility to 
allow/disallow duplicates (the config option is named ALLOW_DUPLICATES; one MAC 
being used multiple times or not). So there was a way to reach a situation where 
the same MACs are used in multiple data centers at the same time. This 
implementation of the mac pool remained the same; only now there are potentially 
many of them, one per scope -- i.e. per data center. So if you configure your 
data centers / the global pool such that there is an overlap in MAC intervals, 
one MAC can be allocated multiple times even if you've specified in the 
configuration that you disallow duplicates.

But I've already written some code detecting overlaps and fixing them in the 
context of one mac pool. It can easily be refactored and used for the whole 
ScopedMacPoolManager, so that, if configured so, trying to add a new 
scope/datacenter with specified mac pool ranges will fail if those ranges 
overlap with any other existing pool definition. Would that be beneficial to 
you? And if the answer is yes, can you describe your expectations/requests a 
little? For instance, whether you want the possibility to change this behavior 
in the app configuration, whether it is sufficient to reuse the aforementioned 
ALLOW_DUPLICATES for this, or whether you'd like another option, ...
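The overlap check itself is simple interval arithmetic. Here is a hedged sketch 
of what such a validation could look like -- hypothetical Python, not the 
engine's actual code; all names are invented. A MAC range is a (from, to) pair 
of 48-bit integers, and two ranges overlap exactly when neither ends before the 
other starts.

```python
# Hypothetical sketch of cross-pool range overlap detection -- not the
# actual ScopedMacPoolManager code.

def mac_to_int(mac: str) -> int:
    """Parse a colon-separated MAC address into a 48-bit integer."""
    return int(mac.replace(":", ""), 16)

def ranges_overlap(a, b) -> bool:
    # closed intervals overlap when neither ends before the other starts
    (a_from, a_to), (b_from, b_to) = a, b
    return a_from <= b_to and b_from <= a_to

def find_conflicts(existing_pools, new_ranges):
    """Return (pool_name, existing_range, new_range) triples that would
    collide if a new scope with new_ranges were added."""
    conflicts = []
    for name, pool_ranges in existing_pools.items():
        for r1 in pool_ranges:
            for r2 in new_ranges:
                if ranges_overlap(r1, r2):
                    conflicts.append((name, r1, r2))
    return conflicts

def parse(frm, to):
    return (mac_to_int(frm), mac_to_int(to))

pools = {
    "global": [parse("00:1a:4a:00:00:00", "00:1a:4a:00:0f:ff")],
    "dc1":    [parse("00:1a:4a:01:00:00", "00:1a:4a:01:0f:ff")],
}
# a new data-center scope whose range collides with dc1's
new_scope = [parse("00:1a:4a:01:0f:00", "00:1a:4a:01:ff:ff")]
conflicts = find_conflicts(pools, new_scope)
```

With a check like this, adding the new scope above would be rejected because its 
range intersects dc1's, while the global pool's range is untouched.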


M.


- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: Martin Mucha mmu...@redhat.com, users@ovirt.org, de...@ovirt.org
Sent: Thursday, April 10, 2014 11:51:55 AM
Subject: Re: [ovirt-devel] new feature

I got a question regarding general
MAC handling:

can you use the same MACs in different
datacenters?

Am 10.04.2014 08:59, schrieb Martin Mucha:
 Hi,
 
 I'd like to notify you about a new feature which allows specifying distinct 
 MAC pools, currently one per data center.
 http://www.ovirt.org/Scoped_MacPoolManager
 
 any comments/proposals for improvement are very welcomed.
 Martin.


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)

2014-04-18 Thread Martin Mucha
Hi, 

I'll try to describe it a little bit more. Let's say that we've got one data 
center. It's not yet configured to have its own mac pool, so there is only one, 
global pool in the system. We create a few VMs, and their NICs will obtain their 
MACs from this global pool, marking them as used. Next we alter the data center 
definition so that it now uses its own mac pool. From this point on, two mac 
pools exist in the system, one global and one related to this data center, but 
those allocated MACs are still allocated in the global pool, since data center 
creation does not (yet) contain logic to collect all assigned MACs related to 
this data center and reassign them in the new pool. However, after an app 
restart all VmNics are read from the db and placed into the appropriate pools. 
Let's assume that we've performed such a restart. Now we realize that we 
actually don't want that data center to have its own mac pool, so we alter its 
definition, removing the mac pool ranges. The pool related to this data center 
will be removed and its content will be moved to the scope above this data 
center -- the global scope pool. We know that everything allocated in the pool 
to be removed is still in use, but we need to track it elsewhere, and currently 
there's just one option, the global pool. So, to answer your last question: when 
I remove a scope, its pool is gone and its content moved elsewhere. Next, when a 
MAC is returned to the pool, the request goes like: give me the pool for this 
virtual machine, and whatever pool that is, I'm returning this MAC to it. 
Clients of ScopedMacPoolManager do not know which pool they're talking to. The 
decision of which pool is right for them is made behind the scenes based on 
their identification (I want a pool for this logical network).
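The "give me the pool for this virtual machine" resolution could be sketched 
like this. This is hypothetical Python mirroring the behavior described above, 
not the real (Java) ScopedMacPoolManager; all class and attribute names are 
invented. The point is that allocation and release both go through the same 
lookup, so a MAC is always returned to whichever pool currently owns the scope.

```python
# Hypothetical sketch of scoped pool resolution -- not the real oVirt
# ScopedMacPoolManager. Callers never pick a pool directly; they hand
# over an identifier (here a data-center id) and get back whichever pool
# is currently responsible for that scope.

class MacPool:
    def __init__(self):
        self.used = set()

    def add_mac(self, mac):
        self.used.add(mac)

    def free_mac(self, mac):
        self.used.discard(mac)

class ScopedMacPoolManager:
    def __init__(self):
        self.global_pool = MacPool()
        self.dc_pools = {}   # data-center id -> scoped MacPool

    def pool_for_data_center(self, dc_id):
        # scoped pool if the data center defines one, else the global pool
        return self.dc_pools.get(dc_id, self.global_pool)

manager = ScopedMacPoolManager()
manager.dc_pools["dc1"] = MacPool()   # dc1 has its own scoped pool

# dc1's request resolves to its scoped pool; dc2 has none, so it falls
# back to the global pool
manager.pool_for_data_center("dc1").add_mac("00:1a:4a:15:c0:fe")
manager.pool_for_data_center("dc2").add_mac("00:1a:4a:15:c0:01")
```

Freeing a MAC on NIC delete uses the same `pool_for_data_center` call, which is 
why the caller never needs to know which concrete pool the MAC lives in.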

Notice that there is one problem in deciding which scope/pool to use. There are 
places in the code which require the pool related to a given data center, 
identified by a guid. For that request, only the data center scope or something 
broader, like the global scope, can be returned. So even if one wants to use one 
pool per logical network, requests identified by a data center id can still only 
return the data center scope or broader; there is no chance of returning a pool 
related to a logical network (except in the situation where there is a sole 
logical network in that data center).

Thanks for the suggestion of other scopes. One question: if we implement them, 
would you like to just pick a *sole* non-global scope to use in your system 
(like data-center-related pools ONLY plus one global, or logical-network-related 
pools ONLY plus one global), or would it be (more) beneficial to you to have 
some sort of cascading and overriding implemented? Like: this data center uses 
*this* pool, BUT except for *this* logical network, which should use *this* one 
instead.
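One possible shape for such cascading and overriding, purely as a sketch of the 
proposal (none of this exists, or existed, in the engine; all names are 
invented): resolution walks from the most specific scope to the broadest one, so 
a logical-network pool overrides its data center's pool, which overrides the 
global one.

```python
# Purely illustrative sketch of the cascading/overriding idea discussed
# above -- not actual oVirt behavior. Pools are plain strings here just
# to show which scope wins.

class CascadingPoolResolver:
    def __init__(self, global_pool):
        self.global_pool = global_pool
        self.dc_pools = {}        # data-center id -> pool
        self.network_pools = {}   # logical-network id -> pool

    def resolve(self, network_id=None, dc_id=None):
        # most specific scope wins: logical network, then data center,
        # then the global pool
        if network_id in self.network_pools:
            return self.network_pools[network_id]
        if dc_id in self.dc_pools:
            return self.dc_pools[dc_id]
        return self.global_pool

resolver = CascadingPoolResolver(global_pool="global")
resolver.dc_pools["dc1"] = "dc1-pool"
resolver.network_pools["net-a"] = "net-a-pool"
```

Under this scheme, a NIC on network "net-a" in dc1 would draw from "net-a-pool", 
any other network in dc1 from "dc1-pool", and everything else from the global 
pool.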

I'll update feature page to contain these paragraphs.

M.


- Original Message -
From: Itamar Heim ih...@redhat.com
To: Martin Mucha mmu...@redhat.com, users@ovirt.org, de...@ovirt.org
Sent: Thursday, April 10, 2014 9:04:37 AM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)

On 04/10/2014 09:59 AM, Martin Mucha wrote:
 Hi,

 I'd like to notify you about a new feature which allows specifying distinct 
 MAC pools, currently one per data center.
 http://www.ovirt.org/Scoped_MacPoolManager

 any comments/proposals for improvement are very welcomed.
 Martin.



(changed title to reflect content)

 When mac ranges are specified for a given scope where there wasn't any 
 definition previously, MACs already allocated from the default pool will not 
 be moved to the scoped one until the next engine restart. The other way 
 around, when removing a scoped mac pool definition, all MACs from this pool 
 will be moved to the default one.

can you please elaborate on this one?

as for potential other scopes - I can think of cluster, VM pool and 
logical network as potential ones.

one more question - how do you know to return the MAC address to the 
correct pool on delete?


[ovirt-users] new feature

2014-04-10 Thread Martin Mucha
Hi,

I'd like to notify you about a new feature which allows specifying distinct MAC 
pools, currently one per data center.
http://www.ovirt.org/Scoped_MacPoolManager

any comments/proposals for improvement are very welcomed.
Martin.