Re: [ovirt-devel] python-paramiko availability

2016-07-09 Thread Yedidyah Bar David
On Fri, Jul 8, 2016 at 12:22 AM, Simone Tiraboschi wrote:
> On Thu, Jul 7, 2016 at 10:59 PM, Paul Dyer wrote:
>> Hi,
>>
>> I am trying to install the ovirt-4.0 engine on a fresh install of RHEL 7.2.
>> There is a dependency on python-paramiko, but it appears that RPM was
>> removed from RHEL between versions 6 and 7...
>>
>> The dependency appears when I install ovirt-engine:
>>
>> Error: Package: ovirt-engine-setup-base-4.0.0.6-1.el7.centos.noarch
>> (ovirt-4.0)
>>Requires: python-paramiko
>>
>> This doc notes the package removal:
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Migration_Planning_Guide/sect-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Removed_Packages.html
>>
>> Does anyone know where to source this package?
>
> You can get it from EPEL7.
> http://koji.fedoraproject.org/koji/buildinfo?buildID=765436

The repos configured by ovirt-release40 should already include it, btw.
How did you install the rest of the packages?

Best,

>
>> Thanks,
>> Paul
>>
>>
>> --
>> Paul Dyer,
>> Mercury Consulting Group, RHCE
>> 504-302-8750
>>



-- 
Didi


Re: [ovirt-devel] Caching of data from the database done properly

2016-07-09 Thread Roman Mohr
Hi Martin,

Great feedback. Thanks for the clarifications.

On Thu, Jul 7, 2016 at 3:25 PM, Martin Mucha wrote:
> Hi,
>
> some of the information in the mail is not exactly true. Namely, MacPoolPerCluster
> *does not do caching*; it does not even have DB-layer structures it could
> cache. How it works is: the pool has a configuration upon which it initializes
> itself. After that, it looks into the db for all used MACs, which currently
> happens to be querying all MAC addresses of all VmNics. So it is
> initialized from data in the DB, but it does not cache that data. Clients [of the pool]
> ask the pool for a MAC address, which is then used somewhere without pool
> supervision. I don't want to question this design, and I'm not saying that it
> wouldn't be possible to move its logic to the db layer, but once, a long, long
> time ago, someone decided this should be done in the bll, and so it is on the bll layer.
>

I had another look at the MacPoolPerCluster source. You are right. It is
caching some calculations and not database data. I agree that this
should not be in the dao layer or the database. Sorry for the wrong
accusations regarding the MacPoolPerCluster class.
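
Just to spell out how I now read that flow, here is a minimal sketch (the
class and method names are made up for illustration; this is not the actual
engine code):

import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

// Illustration of the described flow, not the real MacPoolPerCluster: the pool is
// seeded once from the DB (all MACs currently used by VmNics) and afterwards only
// tracks allocations in memory; it never caches the DB rows themselves.
public class MacPoolSketch {

    private final Set<String> usedMacs = new HashSet<>();
    private final Deque<String> freeMacs;

    // 'configuredMacs' would come from the pool configuration (its MAC ranges),
    // 'macsUsedByNics' from a query over all VmNics.
    public MacPoolSketch(Collection<String> configuredMacs, Collection<String> macsUsedByNics) {
        usedMacs.addAll(macsUsedByNics);
        freeMacs = configuredMacs.stream()
                .filter(mac -> !usedMacs.contains(mac))
                .collect(Collectors.toCollection(ArrayDeque::new));
    }

    // Clients ask the pool for a MAC; what they do with it afterwards is not
    // supervised by the pool.
    public synchronized String allocateNextAvailableMac() {
        String mac = freeMacs.poll();
        if (mac == null) {
            throw new IllegalStateException("no free MAC addresses left in the pool");
        }
        usedMacs.add(mac);
        return mac;
    }
}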

> I understand that these might come up as a problem in Arquillian testing, but
> that's to be resolved, since not all singletons are for caching. And even if
> they are, the testing framework should be able to cope with such common beans; we
> shouldn't limit ourselves to not using singletons. Therefore, I wouldn't invest
> in changing these 'caches', but in allowing more complex setups in our
> testing. If that's not possible, then a 'reset' method is the second-best solution:
> we have to use a write lock as suggested in the CR, and then it should be fine.
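
(For readers following the thread: the kind of reset Martin mentions would
look roughly like the sketch below. ReentrantReadWriteLock and the names are
only illustrative; this is not the code from the CR.)

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: a stateful singleton with an explicit reset() guarded by a
// write lock, so readers never observe a half-reinitialized state.
public class ResettablePoolSketch {

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Set<String> usedMacs = new HashSet<>();

    public boolean isUsed(String mac) {
        lock.readLock().lock();
        try {
            return usedMacs.contains(mac);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Called by tests (or re-initialization code) to drop all in-memory state.
    public void reset() {
        lock.writeLock().lock();
        try {
            usedMacs.clear();
        } finally {
            lock.writeLock().unlock();
        }
    }
}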

For the context: [1]
Having singletons is fine from my perspective too. My concern is just about
caching data from the database. Spring offers @DirtiesContext (a
little bit nicer than with Arquillian, where, as far as I have seen, you
would create different test cases and do new @Deployments). But I
prefer to reset these rare singletons explicitly in a base class for
every test. Otherwise it is always very hard to track down possible
side effects in the class because you did not set up a new context.
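
Roughly what I mean by that base class (a sketch; the interface below is a
stand-in, MacPoolPerCluster does not necessarily expose such a hook today):

import org.junit.Before;

// Sketch only: 'ResettableSingleton' and the abstract hook are made-up names,
// not engine classes.
public abstract class BaseEngineTest {

    interface ResettableSingleton {
        void reinitialize();
    }

    // In the real code this would be the MacPoolPerCluster bean (or any other
    // rare stateful singleton); here it is just a placeholder so the sketch compiles.
    protected abstract ResettableSingleton statefulSingleton();

    @Before
    public void resetSingletons() {
        // Explicit reset before every test, instead of relying on a fresh
        // Spring context (@DirtiesContext) or a new Arquillian @Deployment.
        statefulSingleton().reinitialize();
    }
}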

For me tests are first-class citizens of an application, so having a
way to reinitialize singletons directly is what I prefer. When it is
about caching data from the database it is normally not needed, since you
just disable the database cache during the tests.
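
For example, if the cache in question were the JPA/Hibernate second-level
cache, the test setup would simply turn it off (a sketch; "engine-test" is a
hypothetical persistence unit name):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public final class TestPersistence {

    private TestPersistence() {
    }

    // Build the EntityManagerFactory for tests with DB-level caching disabled,
    // so every test reads the real database state.
    public static EntityManagerFactory createTestEmf() {
        Map<String, Object> props = new HashMap<>();
        props.put("hibernate.cache.use_second_level_cache", "false");
        props.put("hibernate.cache.use_query_cache", "false");
        return Persistence.createEntityManagerFactory("engine-test", props);
    }
}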

>
> About drawbacks:
> ad 1) yes, this comes as an extra problem; one has to deal with transactions on
> one's own, that's true. This wasn't part of the original solution, but it should be
> fixed already.

As long as it is really just the last resort I am fine with it.

> ad 2) No. Caching done correctly is done closest to the consumer. I assume
> you can similarly ruin the Hibernate L2 cache by accessing data through a
> different channel. But that's common to all caches: if you bypass
> them, you'll corrupt them. So do not bypass them, or in this case, use them
> as they were designed. As has been said, you ask the pool for a MAC, or inform
> it that you're going to use your own, and then use it. That means it is
> designed so that all writes actually bypass it. Therefore, if someone writes
> code that uses a MAC without notifying the pool of that beforehand, it
> would be a problem. To avoid this problem there would have to be a bigger refactor:
> the pool would have to persist MAC addresses somehow instead of vmNicDao, or, if
> moved entirely to the db layer, there would have to be a trigger on the vmnic table or
> something like that...
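
In code, the two supported ways of using the pool would look roughly like
this (the interface and method names are illustrative stand-ins for the real
pool API):

// Sketch of the contract Martin describes; the interface and method names are
// illustrative only.
public class MacUsageExamples {

    interface MacPool {
        String allocateNextAvailableMac();
        void registerCustomMac(String mac);   // "inform the pool that I'll use my own MAC"
    }

    // Pattern 1: let the pool pick a free MAC, then use it.
    static String useAllocatedMac(MacPool pool) {
        String mac = pool.allocateNextAvailableMac();
        // ... persist the nic with this MAC via the usual DAO path ...
        return mac;
    }

    // Pattern 2: bring your own MAC, but tell the pool about it first.
    // Using a custom MAC without this call is exactly the bypass that corrupts the pool.
    static void useCustomMac(MacPool pool, String customMac) {
        pool.registerCustomMac(customMac);
        // ... persist the nic with customMac ...
    }
}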

I missed the calculation part in the MacPoolPerCluster, so this is ok;
most of my comments do not apply there now. Of course you can cache
wherever you have to in order to meet the requirements. Still, the best thing
you can have is that loaded entities, when cached in higher layers,
also get evicted when you change something in the lower layers (e.g. the
DAO). This is the normal expectation and what the Hibernate level-2
cache is all about. Since we don't use any of these fancy caches, which do all
the heavy lifting for us, it would be even more complex for us to get
caching on higher layers right.
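
To illustrate that expectation (this assumes a JPA/Hibernate setup with the
second-level cache enabled, which is not what our DAO layer does today; the
entity below is invented for the example):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;

public class CacheEvictionExample {

    // An entity marked as cacheable ends up in the second-level cache when loaded.
    @Entity
    @Cacheable
    public static class CachedNic {
        @Id
        private Long id;
        private String macAddress;
    }

    // Updates that go through the EntityManager keep the L2 cache consistent
    // automatically. If something writes to the table behind Hibernate's back
    // (a plain JDBC update, a trigger, ...), the cached entries must be evicted
    // by hand, otherwise higher layers keep seeing stale data.
    public static void evictAfterOutOfBandWrite(EntityManagerFactory emf) {
        emf.getCache().evict(CachedNic.class);
    }
}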

> ad 3) It was requested that the pool hold at least tens of millions of MACs.
> Forget about the loop; initializing this structure for a given clusterId is not
> acceptable even once per request. Loading that structure is quite cheap
> (now), but not cheap enough to allow what you ask for. Moving the whole thing to the db
> layer would probably have been beneficial (when it was originally implemented), but
> it's not worth doing now.
>

It is really just about caching from the database. Moving logic to the
dao layer or the database is definitely what I had in mind.

> About suggestions: neither of them applies to MacPoolPerCluster; point (3)
> for example: since the pool is not a simple cache of a db structure, and does not
> have corresponding data in the db layer, it cannot cache writes and it