Re: [ovirt-devel] changing engine domain name

2016-07-12 Thread David Jaša
On Sun, 2016-07-10 at 10:27 +0300, Yedidyah Bar David wrote:
> On Sat, Jul 9, 2016 at 2:35 AM, Paul Dyer  wrote:
> > Hi,
> >
> > back in 2015, with the first install of ovirt, I used a domain of
> > xxxportal.com. Since the client has an xxxcentral.com wildcard
> > certificate, I changed the hostname and domain name, and added the
> > cert/cacert to the Apache web page.
> >
> > The PKI on both ovirt and vdsm (the host) still has the original
> > xxxportal.com domain. I am looking for a way to wipe away the old domain.
> >
> > Do I need to remove the host (not hosted engine), drop the
> > datacenter/cluster, and build from a clean db?
> 
> Basically yes. See also:
> 
> https://www.ovirt.org/documentation/how-to/networking/changing-engine-hostname/
> 
> If you have lots of data in your engine (hosts, VMs, etc.), you might manage
> to keep most of it with something like this (I didn't try it):
> 
> 1. Shutdown all VMs and move all hosts to maintenance
> 2. Stop ovirt-engine service
> 3. mv /etc/pki/ovirt-engine /etc/pki/ovirt-engine-backup-before-recreation
> 4. yum reinstall ovirt-engine-backend, or copy back from the above backup
> only these (for directories, only the directories themselves, without the
> files they hold), keeping owner/permissions:
> cacert.template.in  certs  cert.template.in  keys  openssl.conf
> private  requests
> 5. engine-setup
> It will notice the PKI is missing and recreate it for you.
> You might need to change the admin password, because it's encrypted with the
> engine's key.
> 6. Connect to the web admin, and for each host:
> 6.1. Right-click -> Enroll Certificate
> 6.2. You might need Right-click -> Reinstall
> 6.3. Activate
> 
> This should be enough, more or less. Just in case, you might want to connect
> to all hosts before step 6 and remove stuff under /etc/pki, but I didn't
> check what exactly.
> 
> Best,

I'm wondering if all of this is necessary. I didn't do exactly this;
instead, I added a second mod_ssl instance to Apache on a different
port (with different certificates), and 3.6 worked for me without any
other changes (on both ports). 4.0 did not work on the different port,
as AAA refused to authenticate the user.

David



[ovirt-devel] [ACTION REQUIRED] oVirt 4.0.1 RC2 build starting

2016-07-12 Thread Sandro Bonazzola
FYI oVirt product maintainers,
An oVirt build for an official release is going to start right now.
If you're a maintainer for any of the projects included in the oVirt
distribution and you have changes in your package ready to be released,
please:
- bump version and release to be GA ready
- tag your release within git (this implies a GitHub release being created
automatically)
- build your packages within jenkins / koji / copr / whatever
- verify all bugs on MODIFIED have target release and target milestone set
- add your builds to releng-tools/releases/ovirt-4.0.1_rc2.conf within the
releng-tools project

Thanks,
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

Re: [ovirt-devel] Caching of data from the database done properly

2016-07-12 Thread Roman Mohr
Hi Yevgeny,

On Mon, Jul 11, 2016 at 7:59 PM, Yevgeny Zaspitsky  wrote:
>
>
> On Tue, Jul 5, 2016 at 7:14 AM, Roman Mohr  wrote:
>>
>> On Mon, Jul 4, 2016 at 11:58 PM, Roman Mohr  wrote:
>> > Hi Everyone,
>> >
>> > I wanted to discuss a practice which seems to be pretty common in the
>> > engine and which I find very limiting, dangerous, and for some things
>> > even a blocker.
>> >
>> > There are several places in the engine where we are using maps as
>> > cache in singletons to avoid reloading data from the database. Two
>> > prominent ones are the QuotaManager[1] and the MacPoolPerCluster[2].
>> >
>> > While it looks tempting to just use a map as cache, add some locks
>> > around it and create an injectable singleton, this has some drawbacks:
>> >
>> > 1) We have an authoritative source for our data, and it offers
>> > transactions to take care of inconsistencies or parallel updates.
>> > Doing all that again in a service duplicates this.
>> > 2) Caching on the service layer is definitely not a good idea. It can
>> > introduce unwanted side effects when someone invokes the DAOs
>> > directly.
>> > 3) The point is more about the question whether a cache is really needed:
>> > do I just want that cache because I find it convenient to do a
>> > #getMacPoolForCluster(Guid clusterId) in a loop instead of just
>> > loading it once before the loop, or do my usage requirements really
>> > force me to use a cache? (A sketch of this pattern follows below.)
>> >
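
To make that concrete, here is a minimal sketch of the pattern in question
(hypothetical names, not the actual QuotaManager or MacPoolPerCluster code):
a singleton holding a map that is lazily filled from the database and never
invalidated.

    import java.util.HashMap;
    import java.util.Map;

    public class MacPoolCacheSingleton {
        private static final MacPoolCacheSingleton INSTANCE = new MacPoolCacheSingleton();
        // Map used as a cache, guarded by coarse locking.
        private final Map<String, String> macPoolByCluster = new HashMap<>();

        public static MacPoolCacheSingleton getInstance() {
            return INSTANCE;
        }

        public synchronized String getMacPoolForCluster(String clusterId) {
            // Loads from the DB on a miss and never invalidates: any write
            // that goes through a DAO directly silently makes this stale.
            return macPoolByCluster.computeIfAbsent(clusterId, this::loadFromDb);
        }

        private String loadFromDb(String clusterId) {
            return "pool-for-" + clusterId; // stands in for a real DAO call
        }
    }
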
>> > If you really need a cache, consider the following (a sketch of points
>> > 1), 3) and 4) follows below):
>> >
>> > 1) Do the caching on the DAO layer. This guarantees the best
>> > consistency across the data.
>> > 2) Yes, this means either locking in the DAOs or a transactional cache.
>> > But before you complain, think about what is done in [1] and [2]. We
>> > do exactly that there, so the complexity is already introduced anyway.
>> > 3) Since we are working with transactions, a custom cache should NEVER
>> > cache writes (really just talking about our use case here). This makes
>> > checks for existing IDs before adding an entity, or similar checks,
>> > unnecessary; don't duplicate constraint checks like in [2].
>> > 4) There should always be a way to disable the cache (even if it is
>> > just for testing).
>> > 5) If I can't convince you to move the cache to the DAO layer, still
>> > add a way to disable the cache.
>> >
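
A minimal sketch of what such a DAO-layer cache could look like (names are
hypothetical, not actual engine code): reads go through a cache that can be
switched off, writes are never cached and only invalidate.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CachingMacPoolDao {
        private final boolean cacheEnabled; // point 4): disable switch, e.g. for tests
        private final Map<String, String> cache = new ConcurrentHashMap<>();

        public CachingMacPoolDao(boolean cacheEnabled) {
            this.cacheEnabled = cacheEnabled;
        }

        public String getByClusterId(String clusterId) {
            if (!cacheEnabled) {
                return loadFromDb(clusterId); // cache disabled: always hit the DB
            }
            return cache.computeIfAbsent(clusterId, this::loadFromDb);
        }

        // Point 3): writes go straight to the DB and are never cached, so
        // constraint checks stay where they belong: in the database.
        public void save(String clusterId, String macPool) {
            writeToDb(clusterId, macPool);
            cache.remove(clusterId); // invalidate so the next read reloads
        }

        private String loadFromDb(String clusterId) {
            return "pool-for-" + clusterId; // stands in for the real query
        }

        private void writeToDb(String clusterId, String macPool) {
            // stands in for the real insert/update
        }
    }

Since every caller goes through the DAO, there is no way to bypass the cache.
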
>>
>> I forgot to mention one thing: there are of course cases where
>> something is loaded on startup, mostly things which can have multiple
>> sources.
>> For instance, it is pretty common for the application configuration
>> itself, or, as in the scheduler, for the scheduling policies, where some
>> are Java-only and some are coming from other sources. That is still fine.
>>
>> But for normal business entities, accessing parts of them through
>> services and parts of them through DAOs is not the best thing to do
>> (if constructing the whole business entity out of multiple DAOs is
>> complex, Repositories can help, but the cache should still be in the
>> DAO layer).
>
>
> I do not agree that the caching should be on the DAO layer - that might lead
> to getting an entity that is built of parts that are not coherent with one
> another if the different DAO caches are not in sync.

I can't agree here.
That is what transactions are for. A second-level cache normally
follows transactions. You have interceptors to detect rollbacks and
commits.
If you don't have JTA in place, there is normally a small window
where you can read stale data in different transactions (which is fine
in most cases). That has nothing to do with where the cache is.

It is much easier to stay in sync since there is no way to bypass the cache.
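
As an illustration, a minimal sketch of a transaction-following cache (the
commit/rollback hooks are stand-ins for whatever interceptor or JTA
Synchronization the container provides; none of this is actual engine code):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class TransactionalCache {
        // State visible to everyone: only ever updated on commit.
        private final Map<String, String> committed = new ConcurrentHashMap<>();
        // Per-transaction buffer of uncommitted writes.
        private final ThreadLocal<Map<String, String>> pending =
                ThreadLocal.withInitial(HashMap::new);

        public String get(String key) {
            // A transaction sees its own writes first, then committed state.
            String local = pending.get().get(key);
            return local != null ? local : committed.get(key);
        }

        public void put(String key, String value) {
            pending.get().put(key, value); // buffered until commit
        }

        // Invoked by a commit interceptor: publish the buffered writes.
        public void onCommit() {
            committed.putAll(pending.get());
            pending.remove();
        }

        // Invoked by a rollback interceptor: drop the buffered writes;
        // other transactions never saw them.
        public void onRollback() {
            pending.remove();
        }
    }
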

> I'd put the cache on the Repositories (non-existent currently) or a higher
> layer, just above the transaction boundaries, so the cache would contain
> service call results rather than raw data.

What does "above the transaction boundaries" mean?
Yes, the point of a second-level cache is to cache across transaction
boundaries, and you also have that when you place it in the DAO
layer.

You would further make it very hard to track whether you are allowed
to manipulate data through DAOs, Repositories or Services when you
don't place the basic cache inside the DAOs, since you might always
bypass the cache by accident.

For higher-layer caches in singletons it is also almost a prerequisite
to have the basic cache in the DAO layer, because you can then also
listen to cache changes for dependent entities inside the singleton
(all cache implementations I know have listeners) and invalidate or
update derived caches (see the sketch below). This, in combination with
the possibility to disable the cache completely on all layers, makes the
cache transparent on every layer, which makes it very easy to write sane
code when using all the different services, DAOs, Repositories, ... .
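
A minimal sketch of that listener idea (the listener API here is invented
for illustration; real cache libraries such as Guava or Infinispan provide
their own):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Basic DAO-layer cache that publishes change events.
    public class ListenableDaoCache {
        public interface ChangeListener {
            void onChanged(String key);
        }

        private final Map<String, String> entries = new ConcurrentHashMap<>();
        private final List<ChangeListener> listeners = new CopyOnWriteArrayList<>();

        public void addListener(ChangeListener listener) {
            listeners.add(listener);
        }

        public void put(String key, String value) {
            entries.put(key, value);
            // Notify dependents so derived caches can invalidate themselves.
            listeners.forEach(l -> l.onChanged(key));
        }
    }

    // Singleton-style holder of a derived cache that stays in sync by
    // subscribing to the basic cache instead of duplicating its logic.
    class DerivedCacheSingleton {
        private final Map<String, String> derived = new ConcurrentHashMap<>();

        DerivedCacheSingleton(ListenableDaoCache base) {
            // Drop the derived entry whenever the underlying entity changes.
            base.addListener(derived::remove);
        }
    }
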

> Then the cache would save the application from having to get a DB connection
> from the connection pool.