Re: [Openstack-operators] Cloud Upgrade Strategies

2016-03-09 Thread Yuriy Brodskiy
The database is only needed for control operations. During the upgrade we
disable the APIs (mark them down on the LB or take them down), which
prevents users from making any database changes.
After that the flow is "simple": back up the DB, run the migration, then
perform your validation tests.
If all is good, bring your APIs back up; if not, restore the DB backup to
roll back.
I'm oversimplifying here, but these are the basic concepts. You will find
more details in the video.
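
For illustration, a minimal sketch of that flow in Python, assuming the
APIs are already disabled on the load balancer; the database name, backup
path and validation command are placeholders, not from the original post:

    import subprocess
    import sys

    BACKUP_FILE = "/var/backups/nova-pre-upgrade.sql"  # placeholder path

    def run(cmd):
        print("+ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    def validate():
        # Stand-in for real validation tests (tempest/rally or API smoke tests).
        return subprocess.run(["nova-manage", "db", "version"]).returncode == 0

    # 1) With the APIs down, back up the database.
    run(["mysqldump", "--single-transaction", "nova", "-r", BACKUP_FILE])

    try:
        # 2) Run the schema migration for the new release.
        run(["nova-manage", "db", "sync"])
        # 3) Validate; any failure triggers a rollback from the backup.
        if not validate():
            raise RuntimeError("validation failed")
    except Exception as exc:
        print("upgrade failed (%s); restoring backup" % exc)
        subprocess.run("mysql nova < " + BACKUP_FILE, shell=True, check=True)
        sys.exit(1)

    print("migration validated; re-enable the APIs on the load balancer")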

On Wed, Mar 9, 2016 at 10:38 PM -0800, "Xav Paice"  wrote:

On 10 March 2016 at 19:26, Yuriy Brodskiy  wrote:
Building a new cloud is not practical for real production environments.
Even if you can afford it, how do you migrate the data?
We have been doing upgrades for a while now and came up with a few basic
principles:
1) You don't have to upgrade everything at the same time; do it one
component at a time.
2) Stand up the new version alongside the existing one, test it, and then
flip DNS.
Take a look at the presentation the team did during the Vancouver summit:
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/10-minutes-openstack-upgrades-done
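
For what it's worth, a minimal sketch of the kind of check one might run
against the side-by-side deployment before flipping DNS; the endpoint URLs
below are placeholders, not part of the original post:

    import requests

    # Placeholder endpoints of the newly stood-up (not yet live) stack.
    NEW_ENDPOINTS = [
        "http://keystone-new.example.com:5000/v3/",
        "http://nova-new.example.com:8774/",
    ]

    ok = True
    for url in NEW_ENDPOINTS:
        try:
            status = requests.get(url, timeout=5).status_code
            print(url, status)
            ok = ok and status < 400
        except requests.RequestException as exc:
            print(url, "unreachable:", exc)
            ok = False

    print("safe to flip DNS" if ok else "do NOT flip DNS yet")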

(replying to the list this time, and regretting using gmail)
I readily admit to not having watched that video (but will!) - one question:
how do you deal with the DB migration if you have two versions running at
the same time?

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



Re: [Openstack-operators] Keystone performance issue

2015-11-10 Thread Yuriy Brodskiy
Reza,

There was a good presentation on different token formats during the Tokyo
summit. It may help answer some of the questions.
https://www.openstack.org/summit/tokyo-2015/videos/presentation/deep-dive-into-keystone-tokens-and-lessons-learned


On Sat, Nov 7, 2015 at 10:24 AM, Reza Bakhshayeshi wrote:

> Thanks all for your tips,
> I switched to Fernet tokens and the average response time dropped to 6.8
> seconds.
> I think, as Clint said, I have to balance the load across multiple smaller
> Keystone servers.
> What's your opinion?
>
> Dina,
> No, I just used Apache JMeter.
>
> Regards,
> Reza
>
> On 27 October 2015 at 04:33, Dina Belova  wrote:
>
>> Reza,
>>
>> AFAIR, the number of tokens that Keystone can process simultaneously is,
>> in practice, equal to the number of Keystone workers (either admin or
>> public workers, depending on which endpoint the user hits), and this
>> number defaults to the number of CPUs. So that is a default limitation
>> which may influence your testing.
>>
>> Btw, did you evaluate Rally for Keystone CRUD benchmarking?
>>
>> Cheers,
>> Dina
>>
>> On Tue, Oct 27, 2015 at 12:39 AM, Clint Byrum  wrote:
>>
>>> Excerpts from Reza Bakhshayeshi's message of 2015-10-27 05:11:28 +0900:
>>> > Hi all,
>>> >
>>> > I've installed OpenStack Kilo (following the official documentation) on a
>>> > physical HP server with the following specs:
>>> >
>>> > 2x Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz, 12 physical cores each
>>> > (48 threads in total), and 128 GB of RAM
>>> >
>>> > I'm going to benchmark Keystone performance (with Apache JMeter) in order
>>> > to deploy OpenStack in production, but unfortunately I'm seeing extremely
>>> > low performance.
>>> >
>>> > 1000 simultaneous token creation requests took around 45 seconds. (WOW!)
>>> > By enabling memcached in keystone.conf (configuration below) and raising
>>> > the number of Keystone worker processes to 48, the response time dropped
>>> > to 18 seconds, which is still too high.
>>> >
>>>
>>> I'd agree that 56 tokens per second (1000 requests in 18 seconds) isn't
>>> very high. However, it also isn't all that terrible given that Keystone
>>> is meant to be load balanced, so you can at least just throw more boxes
>>> at it without any complicated solution at all.
>>>
>>> Of course, that's assuming you're running with Fernet tokens. With UUID,
>>> which is the default if you haven't changed it, you're pounding those
>>> tokens into the database, and that means you need to tune your database
>>> service quite a bit and provide high-performance I/O (you didn't mention
>>> the I/O system).
>>>
>>> So, the first thing I'd recommend is to switch to Liberty, as it has had some
>>> performance fixes for sure. But I'd also recommend evaluating the Fernet
>>> token provider. You will see much higher CPU usage on token validations,
>>> because the caching bonuses you get with UUID tokens aren't as mature in
>>> Fernet even in Liberty, but you should still see an overall scalability
>>> win by not needing to scale out your database server for heavy writes.
>>>
>>> > [cache]
>>> > enabled = True
>>> > config_prefix = cache.keystone
>>> > expiration_time = 300
>>> > backend = dogpile.cache.memcached
>>> > backend_argument = url:localhost:11211
>>> > use_key_mangler = True
>>> > debug_cache_backend = False
>>> >
>>> > I also increased MariaDB's max_connections and Apache's allowed open
>>> > files to 4096, but they didn't help much (2 seconds!)
>>> >
>>> > Is this expected behavior, or can Keystone performance be optimized further?
>>> > What are your suggestions?
>>>
>>> I'm pretty focused on doing exactly that right now, but we will need to
>>> establish some baselines and try to make sure we have tools to maintain
>>> the performance long-term.
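
For context, a minimal sketch of the kind of concurrent token-creation
benchmark being discussed here (the original tests used Apache JMeter);
the auth URL, credentials and concurrency level are placeholders:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    AUTH_URL = "http://keystone.example.com:5000/v3/auth/tokens"  # placeholder
    PAYLOAD = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "admin",
                        "domain": {"id": "default"},
                        "password": "secret",  # placeholder credentials
                    }
                },
            }
        }
    }

    def issue_token(_):
        # Keystone v3 returns 201 Created (and an X-Subject-Token header) on success.
        return requests.post(AUTH_URL, json=PAYLOAD, timeout=30).status_code

    N = 1000
    start = time.time()
    with ThreadPoolExecutor(max_workers=100) as pool:
        codes = list(pool.map(issue_token, range(N)))
    elapsed = time.time() - start

    print("%d requests, %d succeeded, %.1f s, %.1f tokens/s"
          % (N, codes.count(201), elapsed, N / elapsed))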
>>>
>>
>>
>>
>> --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Senior Software Engineer
>>
>> Mirantis Inc.
>>
>
>
>
>


-- 
yuriy brodskiy
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators