Re: [Openstack-operators] [User-committee] User Committee elections

2017-02-17 Thread Edgar Magana
Congratulations to our elected User Committee Members!

This is a huge achievement for the UC. Together we are going to make a 
huge impact in OpenStack. Looking forward to working with you, Shamail and 
Melvin.

Chris, Yih Leong and Maish,

I want to thank you for being part of our efforts. We are all one team and I 
also look forward to working with you.

Edgar

From: Matt Jarvis 
Reply-To: "m...@mattjarvis.org.uk" 
Date: Friday, February 17, 2017 at 4:10 PM
To: user-committee, OpenStack Operators
Subject: [User-committee] User Committee elections

Hi All

I'm very pleased to announce the results of the first User Committee elections, 
which closed at 21:59 UTC on 17 February 2017.

The two candidates elected to the User Committee are Melvin Hillsman and 
Shamail Tahir.

Congratulations to Melvin and Shamail, I know they will both do an excellent 
job in representing our community.

Thank you to the other candidates who participated, and to everyone who voted.

Full results of the poll can be found at 
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ec37f2d06428110.

Matt



Re: [Openstack-operators] [User-committee] User Committee elections

2017-02-17 Thread Shamail


> On Feb 17, 2017, at 7:39 PM, Melvin Hillsman  wrote:
> 
> Thank you to everyone who voted, the election committee, and the others who 
> ran. I consider it an honor and privilege and am looking forward to the 
> coming days.
+1, well said!
> 
> --
> Melvin Hillsman
> Ops Technical Lead
> OpenStack Innovation Center
> mrhills...@gmail.com
> phone: (210) 312-1267
> mobile: (210) 413-1659
> Learner | Ideation | Belief | Responsibility | Command
> http://osic.org
> 
>> On Feb 17, 2017, at 18:10, Matt Jarvis  wrote:
>> 
>> Hi All
>> 
>> I'm very pleased to announce the results of the first User Committee 
>> elections, which closed at 21:59 UTC on 17 February 2017.
>> 
>> The two candidates elected to the User Committee are Melvin Hillsman and 
>> Shamail Tahir. 
>> 
>> Congratulations to Melvin and Shamail, I know they will both do an excellent 
>> job in representing our community. 
>> 
>> Thank you to the other candidates who participated, and to everyone who 
>> voted. 
>> 
>> Full results of the poll can be found at 
>> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ec37f2d06428110. 
>> 
>> Matt
>> 


Re: [Openstack-operators] [User-committee] User Committee elections

2017-02-17 Thread Lauren Sell
Congrats Shamail and Melvin! And many thanks to all of the candidates who 
stepped up to run; it was a very strong group!


Finally, thank you Matt and Matt for running the election :)





On February 17, 2017 6:11:18 PM Matt Jarvis  wrote:


Hi All

I'm very pleased to announce the results of the first User Committee
elections, which closed at 21:59 UTC on 17 February 2017.

The two candidates elected to the User Committee are Melvin Hillsman and
Shamail Tahir.

Congratulations to Melvin and Shamail, I know they will both do an
excellent job in representing our community.

Thank you to the other candidates who participated, and to everyone who
voted.

Full results of the poll can be found at
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ec37f2d06428110.

Matt





[Openstack-operators] User Committee elections

2017-02-17 Thread Matt Jarvis
Hi All

I'm very pleased to announce the results of the first User Committee
elections, which closed at 21:59 UTC on 17 February 2017.

The two candidates elected to the User Committee are Melvin Hillsman and
Shamail Tahir.

Congratulations to Melvin and Shamail, I know they will both do an
excellent job in representing our community.

Thank you to the other candidates who participated, and to everyone who
voted.

Full results of the poll can be found at
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ec37f2d06428110.

Matt


Re: [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2017-02-17 Thread Alex Schultz
Top-posting this thread because we're entering the Pike cycle. As we enter
Pike, we are officially dropping support for Puppet 3. We managed not to
introduce any Puppet 4-only requirements in the Puppet OpenStack modules
during the Ocata cycle, so the Ocata modules[0] are officially the last cycle
where Puppet 3 is supported. Please be aware that we will be removing the
Puppet 3 CI for all of the modules from Pike onward, and we are officially
dropping Puppet 3 support as it went EOL on December 31, 2016.

Thanks,
-Alex

[0] 
https://docs.openstack.org/developer/puppet-openstack-guide/releases.html#releases-summary
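
For operators still on a distribution-packaged Puppet 3 (e.g. Ubuntu Xenial), a
minimal sketch of moving to Puppet 4 via Puppet Labs' own packaging, as suggested
further down this thread. The package name and paths are from memory, so verify
them against apt.puppetlabs.com before relying on this:

  wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
  sudo dpkg -i puppetlabs-release-pc1-xenial.deb
  sudo apt-get update && sudo apt-get install puppet-agent
  /opt/puppetlabs/bin/puppet --version   # should now report a 4.x release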

On Fri, Nov 11, 2016 at 2:11 PM, Alex Schultz  wrote:
> On Thu, Nov 3, 2016 at 11:31 PM, Sam Morrison  wrote:
>>
>> On 4 Nov. 2016, at 1:33 pm, Emilien Macchi  wrote:
>>
>> On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison  wrote:
>>
>> Wow, I didn’t realise puppet3 was being deprecated. Is anyone actually using
>> puppet4?
>>
>> I would hope that the openstack puppet modules would support puppet3 for a
>> while still, at least until the next Ubuntu LTS is out; otherwise we would get
>> to the stage where the OpenStack release supports Xenial but the corresponding
>> puppet module would not? (Xenial has puppet3)
>>
>>
>> I'm afraid we made a lot of communications around it but you might
>> have missed it, no problem.
>> I have 3 questions for you:
>> - for what reasons would you not upgrade puppet?
>>
>>
>> Because I’m a time-poor operator with more important stuff to upgrade :-)
>> Upgrading puppet *could* be a big task and something we haven’t had time to
>> look into. I don’t follow puppetlabs closely, so I didn’t realise puppet3 was
>> being deprecated. Now that this has come to my attention we’ll look into it
>> for sure.
>>
>> - would it be possible for you to use puppetlabs packaging if you need
>> puppet4 on Xenial? (that's what upstream CI is using, and it works
>> quite well).
>>
>>
>> OK, that’s promising; good to know that the CI is using puppet4. It’s all my
>> other dodgy puppet code I’m worried about.
>>
>> - what version of the modules do you deploy? (and therefore what
>> version of OpenStack)
>>
>>
>> We’re using a mixture of newton/mitaka/liberty/kilo, sometimes the puppet
>> module version is newer than the openstack version too depending on where
>> we’re at in the upgrade process of the particular openstack project.
>>
>> I understand progress must go on, but I am interested in how many
>> operators use puppet4. We may be in the minority, and then I’ll be quiet :-)
>>
>> Maybe it should be deprecated in one release and then dropped in the next?
>>
>
> So this has been talked about for a while and we have attempted to
> gauge the 3/4 over the last year or so.  Unfortunately with the
> upstream modules also dropping 3 support, we're kind of stuck
> following their lead. We recently got nailed when the puppetlabs-ntp
> module finally became puppet 3 incompatible and we had to finally pin
> to an older version.  That being said we can try and hold off any
> possible incompatibilities in our modules until either late in this
> cycle or maybe until the start of the next cycle.  We will have
> several milestone releases for Ocata that will still be puppet 3
> compatible (one being next week) so that might be an option as well.
> I understand the extra work this may cause which is why we're trying
> to give as much advanced notice as possible.  In the current forecast
> I don't see any work that will make our modules puppet 3 incompatible,
> but we're also at the mercy of the community at large.  We will
> definitely drop puppet 3 at the start of Pike if we manage to make it
> through Ocata without any required changes.  I think it'll be more
> evident early next year after the puppet 3 EOL finally hits.
>
> Thanks,
> -Alex
>
>>
>> Cheers,
>> Sam
>>
>>
>>
>>
>>
>>
>> My guess is that this would also be the case for RedHat and other distros
>> too.
>>
>>
>> Fedora is shipping Puppet 4 and we're going to do the same for Red Hat
>> and CentOS7.
>>
>> Thoughts?
>>
>>
>>
>> On 4 Nov. 2016, at 2:58 am, Alex Schultz  wrote:
>>
>> Hey everyone,
>>
>> Puppet 3 is reaching its end of life at the end of this year[0].
>> Because of this we are planning on dropping official puppet 3 support
>> as part of the Ocata cycle.  While we currently are not planning on
>> doing any large scale conversion of code over to puppet 4 only syntax,
>> we may allow some minor things in that could break backwards
>> compatibility.  Based on feedback we've received, it seems that most
>> people who may still be using puppet 3 are using older (< Newton)
>> versions of the modules.  These modules will continue to be puppet 3.x
>> compatible but we're using Ocata as the version where Puppet 4 should
>> be the target version.
>>
>> If anyone has any concerns or issues around this, please let us know.
>>
>> Thanks,
>> -Alex
>>
>> [0] 

Re: [Openstack-operators] Openstack and Ceph

2017-02-17 Thread Alex Hübner
Are these nodes connected to dedicated network switches, or to shared ones (in
the sense that other workloads run over them)? How fast (1G, 10G or faster)
are the interfaces? Also, how much RAM are you using? There's a rule of
thumb that says you should dedicate at least 1GB of RAM for each 1 TB of
raw disk space. How are the clients consuming the storage? Are they virtual
machines? Are you using iSCSI to connect them? Are these clients the same
ones you're testing against your regular SAN storage, and are they
positioned in a similar fashion (i.e. over a steady network channel)? What
Ceph version are you using?
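
To make that rule of thumb concrete with a purely illustrative calculation (the
actual drive sizes are not stated in this thread, so 4 TB HDDs are assumed):

  12 HDDs/node x 4 TB (assumed)   = 48 TB raw per node
  48 TB x 1 GB RAM per TB         = ~48 GB RAM per node for the OSDs alone,
                                    plus headroom for the OS and page cache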

Finally, replicas are normally faster than erasure coding, so you're good
on this. It's *never* a good idea to enable RAID cache, even when it
apparently improves IOPS (the magic of Ceph relies on the cluster, its
network and the number of nodes; don't approach the nodes as if they were
isolated storage servers). Also, RAID 0 should only be used as a last resort
for cases where the disk controller doesn't offer JBOD mode.

[]'s
Hubner

On Fri, Feb 17, 2017 at 7:19 AM, Vahric Muhtaryan 
wrote:

> Hello All ,
>
> First, thanks for your answers. Looks like everybody is a Ceph lover :)
>
> I believe that you already made some tests and have some results. Until now
> we used traditional storage like IBM V7000, XIV or NetApp, and we were very
> happy to get good IOPS and also to provide the same performance to all
> instances.
>
> We saw that each OSD is eating a lot of CPU, and when multiple clients try to
> get the same performance from Ceph it looks like it is not possible; Ceph is
> sharing everything with the clients and we cannot reach the hardware's raw IOPS
> capacity with Ceph. For example, each SSD can do 90K IOPS, we have three on
> each node and have 6 nodes, which means we should get better results than what
> we have now!
>
> Could you please share your hardware configs and IOPS tests, and advise
> whether our expectations are correct or not?
>
> We are using Kraken, almost all debug options are set to 0/0, and we modified
> op_tracker and some other ops-related configs too!
>
> Our Hardware
>
> 6 x Node
> Each Node Have :
> 2 Socket Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz each and total 16
> core and HT enabled
> 3 SSD + 12 HDD (SSDs are in journal mode 4 HDD to each SSD)
> Each disk is configured as RAID 0 (we did not see any performance difference
> with the RAID card's JBOD mode, so we continued with RAID 0).
> The RAID card's write-back cache is also used because it adds extra IOPS too!
>
> Our Test
>
> It's 100% random write.
> The Ceph pool is configured with 3 replicas. (We did not use 2, because at
> failover time the whole system stalled and we couldn't come up with great
> tuning for it; some reading said that under high load OSDs can go down and
> come up again, so we should care about this too!)
>
> Test Command : fio --randrepeat=1 --ioengine=libaio --direct=1
> --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
> --numjobs=8 --readwrite=randwrite --group_reporting
>
> Achieved IOPS : 35K (single client)
> We tested up to 10 clients, and Ceph shares this usage fairly, at almost
> 4K IOPS for each.
>
> Thanks
> Regards
> Vahric Muhtaryan
>


Re: [Openstack-operators] Openstack Ceph Backend and Performance Information Sharing

2017-02-17 Thread Warren Wang
@Vahric, FYI, if you use directio instead of sync (as a database is configured
for by default), you will just be exercising the RBD cache. Look at the
latency in your numbers: it is lower than is possible for a packet to
traverse the network. You'll need to use sync=1 if you want to see what the
performance is like for sync writes. You can reduce the latency with higher CPU
frequencies (change the governor), disabling c-states, a better network, the
right NVMe for the journal, and other tuning. In the end, we're happy to see
even 500-600 IOPS for sync writes with numjobs=1, iodepth=1 (256 is
unreasonable).
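
For reference, a minimal fio invocation along those lines (a sketch only,
adapting the command quoted further down so that it measures sync write latency
rather than cached throughput):

  fio --randrepeat=1 --ioengine=libaio --direct=1 --sync=1 \
      --name=synctest --filename=synctest --bs=4k --iodepth=1 \
      --numjobs=1 --size=1G --runtime=60 --readwrite=randwrite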

@Luis, since this is an OpenStack list, I assume he is accessing it via
Cinder.

Warren

On Fri, Feb 17, 2017 at 7:11 AM, Luis Periquito  wrote:

> There is quite some information missing: how much RAM do the nodes
> have? What SSDs? What kernel (there have been complaints of a
> performance regression on 4.4+)?
>
> You also never state how you have configured the OSDs, their journals,
> filestore or bluestore, etc...
>
> You never specify how you're accessing the RBD device...
>
> For you to achieve high IOPS you need higher frequency CPUs. Also you
> have to remember that the scale-out architecture of ceph means the
> more nodes you add the better performance you'll have.
>
> On Thu, Feb 16, 2017 at 4:26 PM, Vahric Muhtaryan 
> wrote:
> > Hello All ,
> >
> > For a long time we are testing Ceph from Firefly to Kraken , tried to
> > optimise many things which are very very common I guess like test
> tcmalloc
> > version 2.1 , 2,4 , jemalloc , setting debugs 0/0 , op_tracker and such
> > others and I believe with our hardware we have almost reached the end of the
> road.
> >
> > Some vendor tests mixed us a lot like samsung
> > http://www.samsung.com/semiconductor/support/tools-
> utilities/All-Flash-Array-Reference-Design/downloads/
> Samsung_NVMe_SSDs_and_Red_Hat_Ceph_Storage_CS_20160712.pdf
> > , DELL Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat …
> and
> > from intel
> > http://www.flashmemorysummit.com/English/Collaterals/
> Proceedings/2015/20150813_S303E_Zhang.pdf
> >
> > At the end using 3 replica (Actually most of vendors are testing with 2
> but
> > I believe that its very very wrong way to do because when some of failure
> > happen you should wait 300 sec which is configurable but from blogs we
> > understood that sometimes OSDs can be down and up again because of that
> I
> > believe very important to set related number but we do not want instances
> > freeze )  with config below with 4K , random and fully write only .
> >
> > I read a lot about OSD and OSD process eating huge CPU , yes it is and we
> are
> > very well know that we couldn’t get the total IOPS capacity of each raw
> SSD
> > drives.
> >
> > My question is , can you pls share almost same or closer config or any
> > config test or production results ? Key is write, not %70 of read % 30
> write
> > or full read things …
> >
> > Hardware :
> >
> > 6 x Node
> > Each Node  Have :
> > 2 Socket CPU 1.8 GHZ each and total 16 core
> > 3 SSD + 12 HDD (SSDs are in journal mode 4 HDD to each SSD)
> > Raid Cards Configured Raid 0
> > We did not see any performance different with JBOD mode of raid card
> because
> > of that continued with raid 0
> > Also raid card write back cache is used because its adding extra IOPS
> too !
> >
> > Achieved IOPS : 35 K (Single Client)
> > We tested up to 10 Clients which ceph fairly share this usage like
> almost 4K
> > for each
> >
> > Test Command : fio --randrepeat=1 --ioengine=libaio --direct=1
> > --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256
> --size=1G
> > --numjobs=8 --readwrite=randwrite --group_reporting
> >
> >
> > Regards
> > Vahric Muhtaryan
> >


Re: [Openstack-operators] pip problems with openstack-ansible deployment

2017-02-17 Thread Kris G. Lindgren
I don't run OSAD; however, did you confirm that you can actually download the 
files from your repo server via a curl/wget call, both locally and remotely? I 
see you show that the files exist, but I don't see anything confirming that the 
web server is actually serving them. I have seen things under Apache, at least, 
that prevent the web server from sending the correct content: default config 
files forcing a specific index page, or SELinux permissions preventing 
directories from being shown.
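
For instance, a quick check from the deployment host and from inside one of the
containers might look like this (URLs and the wheel filename are taken from the
error output quoted below; a sketch for verification, not a definitive diagnosis):

  # does the release index respond, and with what status/headers?
  curl -I http://172.21.51.152:8181/os-releases/14.0.7/

  # is the constraints file itself reachable?
  curl -I http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt

  # can the wheel pip is looking for actually be downloaded?
  curl -O http://172.21.51.152:8181/os-releases/14.0.7/mysql_python-1.2.5-cp27-cp27mu-linux_x86_64.whl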

On Feb 17, 2017, at 1:34 AM, Danil Zhigalin (Europe) wrote:



I noticed one error in my previous explanation. I am running Ubuntu 14.04 LTS, 
not 16.04.



Danil Zhigalin
Technical Consultant
Tel: +49 211 1717 1260
Mob: +49 174 151 8457
danil.zhiga...@dimensiondata.com

Derendorfer Allee 26, Düsseldorf, North Rhine-Westphalia, 40476, Germany.

For more information, please go to 
www.dimensiondata.com

Dimension Data Germany AG & Co.KG, Horexstraße 7, 61352 Bad Homburg
Sitz: Bad Homburg, Amtsgericht Bad Homburg, HRA 3207
Pers. Haftende Ges : Dimension Data Verwaltungs AG, Sitz Bad Homburg.
Amtsgericht Bad Homburg, HRB 6172
Vorstand: Roberto Del Corno
Vors. des Aufsichtsrats: Andrew Coulsen.


-Original Message-
From: Danil Zhigalin (Europe)
Sent: 17 February 2017 09:15
To: 'openstack-operators@lists.openstack.org'
Subject: pip problems with openstack-ansible deployment

Hello everyone,

Context:
openstack-ansible: stable/newton
OS: ubuntu 16.04 LTS

I am having trouble completing my deployment due to pip errors.

I have a 2-node setup and one separate deployment node. I am using one of the 
nodes to host all controller, network and storage functions, and the other as a 
compute node. The repo container with the repo server is also hosted on the 
controller node. I already ran into similar problems as Achi Hamza, who reported 
a pip issue on Thu Nov 17 08:34:14 UTC 2016 on this mailing list.

This is how my openstack_user_config.yml file looks (as in Hamza's case, the 
internal and external addresses are the same):

global_overrides:
internal_lb_vip_address: 172.21.51.152
external_lb_vip_address: 172.21.51.152 <...>

The recommendations he got from other users were to set:

openstack_service_publicuri_proto: http
openstack_external_ssl: false
haproxy_ssl: false

in /etc/openstack_deploy/user_variables.yml

These recommendations helped in my case as well, and I was able to advance 
further until I faced another pip issue in the same playbook.

My current problem is that none of the containers can install pip packages from 
the repository.

TASK [galera_client : Install pip packages] 
FAILED - RETRYING: TASK: galera_client : Install pip packages (5 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (4 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (3 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (2 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (1 retries left).
fatal: [control1_galera_container-434df170]: FAILED! => {"changed": false, 
"cmd": "/usr/local/bin/pip install -U --constraint 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 MySQL-python", "failed": true, "msg": "stdout: Collecting mysql_python==1.2.5 
(from -c 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 (line 81))\n\n:stderr: Could not find a version that satisfies the requirement 
mysql_python==1.2.5 (from -c 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 (line 81)) (from versions: )\nNo matching distribution found for 
mysql_python==1.2.5 (from -c 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 (line 81))\n"}

I already checked everything related to the HAproxy and tcpdumped on the repo 
side to see what requests are coming when pip install is called.

I found that there was a HTTP GET to the URL 
http://172.21.51.152:8181/os-releases/14.0.7/

I saw that it was forwarded by the proxy to the repo server and that repo 
server returned index.html from /var/www/repo/os-releases/14.0.7/

ls /var/www/repo/os-releases/14.0.7/ | grep index
index.html
index.html.1
index.html.2

I also checked that MySQL-python is in the repo:

root@control1-repo-container-dad60ff0:~# ls /var/www/repo/os-releases/14.0.7/ | grep mysql_python
mysql_python-1.2.5-cp27-cp27mu-linux_x86_64.whl

But for some reason pip can't figure out it is there.

I very much appreciate your help in solving this issue.

Best regards,
Danil



Re: [Openstack-operators] Please give your opinion about "openstack server migrate" command.

2017-02-17 Thread David Medberry
Replying more to the "thread" and stream of thought than a specific message.

1) Yes, it is confusing. Rikimaru's description is more or less what I
believe.
2) Because it is confusing, I continue to use NovaClient commands instead
of OpenstackClient

I don't know what drove the creation of the OpenStack Client server
commands the way that they are; it might be a good deep dive into Launchpad
and git to find out. I.e., I can't "guess" what drove the design, as it
seems wrong and overly opaque and complex.

On Fri, Feb 17, 2017 at 3:38 AM, Rikimaru Honjo <
honjo.rikim...@po.ntts.co.jp> wrote:

> Hi Marcus,
>
>
> On 2017/02/17 15:05, Marcus Furlong wrote:
>
>> On 17 February 2017 at 16:47, Rikimaru Honjo
>>  wrote:
>>
>>> Hi all,
>>>
>>> I found and reported an unfriendly behavior of the "openstack server migrate"
>>> command when I maintained my environment.[1]
>>> But I'm wondering which solution is better.
>>> Do you have opinions on the following solutions from an operating point of
>>> view?
>>> I will commit a patch according to your opinions once they are gathered.
>>>
>>> [1]https://bugs.launchpad.net/python-openstackclient/+bug/1662755
>>> ---
>>> [Actual]
>>> If the user runs "openstack server migrate --block-migration ",
>>> the openstack client calls the cold migration API.
>>> The "--block-migration" option will be ignored if the user doesn't specify
>>> "--live".
>>>
>>> But, IMO, this is unkind.
>>> This causes unexpected operations for operators.
>>>
>>
>> +1 This has confused/annoyed me before.
>>
>>
>>> P.S.
>>> "--shared-migration" option has same issue.
>>>
>>
>> For the shared migration case, there is also this bug:
>>
>>https://bugs.launchpad.net/nova/+bug/1459782
>>
>> which, if fixed/implemented would negate the need for
>> --shared-migration? And would fix also "nova resize" on shared
>> storage.
>>
> In my understanding, that report is about the libvirt driver's behavior.
> On the other hand, my report is about the logic of the openstack client.
>
> Current "openstack server migrate" command has following logic:
>
> * openstack server migrate
>+-User don't specify "--live"
>| + Call cold-migrate API.
>|   Ignore the "--block-migration" and "--shared-migration" options if the user
> specifies them.
>|
>+-User specify "--live"
>| + Call live-migration API.
>|
>+-User specify "--live --block-migration"
>| + Call block-live-migration API.
>|
>+-User specify "--live --shared-migration"
>  + Call live-migration API.[1]
>
> [1]
> "--shared-migration" means live-migration(not block-live-migrate) in
> "server migrate" command.
> In other words, "server migrate --live" and "server migrate --live
> --shared-migration"
> are same operation.
> I'm wondering why "--shared-migration" is existed...
>
>
> Cheers,
>> Marcus.
>>
>>
> --
> _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
> NTT Software Corporation
> Cloud & Security Business Division, First Business Unit (CS1BU)
> Rikimaru Honjo
> TEL.  :045-212-7539
> E-mail:honjo.rikim...@po.ntts.co.jp
> Yokohama i-Mark Place, 13F
> 4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012
>
>
>
>


Re: [Openstack-operators] Openstack Ceph Backend and Performance Information Sharing

2017-02-17 Thread Luis Periquito
There is quite some information missing: how much RAM do the nodes
have? What SSDs? What kernel (there have been complaints of a
performance regression on 4.4+)?

You also never state how you have configured the OSDs, their journals,
filestore or bluestore, etc...

You never specify how you're accessing the RBD device...

For you to achieve high IOPS you need higher frequency CPUs. Also you
have to remember that the scale-out architecture of ceph means the
more nodes you add the better performance you'll have.
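
On the CPU frequency point (and the governor change Warren mentions earlier in
this digest), a quick sketch of checking and pinning the frequency governor on
the OSD nodes; this assumes the governor is exposed via the usual sysfs path, so
adjust for your kernel and cpufreq driver:

  # show the current governor on all cores
  cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c

  # pin all cores to the 'performance' governor
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      echo performance | sudo tee "$g" > /dev/null
  done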

On Thu, Feb 16, 2017 at 4:26 PM, Vahric Muhtaryan  wrote:
> Hello All ,
>
> For a long time we have been testing Ceph, from Firefly to Kraken, and tried to
> optimise many things which are very common I guess, like testing tcmalloc
> versions 2.1 and 2.4, jemalloc, setting debugs to 0/0, op_tracker and such
> others, and I believe with our hardware we have almost reached the end of the road.
>
> Some vendor tests mixed us a lot like samsung
> http://www.samsung.com/semiconductor/support/tools-utilities/All-Flash-Array-Reference-Design/downloads/Samsung_NVMe_SSDs_and_Red_Hat_Ceph_Storage_CS_20160712.pdf
> , DELL Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat … and
> from intel
> http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2015/20150813_S303E_Zhang.pdf
>
> In the end we are using 3 replicas (actually most vendors test with 2, but
> I believe that is a very wrong way to do it, because when some failure
> happens you have to wait 300 sec, which is configurable; from blogs we
> understood that sometimes OSDs can go down and come up again, so I believe it
> is very important to set the related number, but we do not want instances to
> freeze), with the config below: 4K, random and fully write-only.
>
> I read a lot about the OSD process eating huge CPU; yes it does, and we are
> well aware that we couldn't get the total IOPS capacity of each raw SSD
> drive.
>
> My question is, can you please share an almost identical or close config, or
> any config test or production results? The key is write, not 70% read / 30%
> write or full-read scenarios…
>
> Hardware :
>
> 6 x Node
> Each Node  Have :
> 2 socket CPUs, 1.8 GHz each, 16 cores total
> 3 SSD + 12 HDD (SSDs are in journal mode, 4 HDDs to each SSD)
> RAID cards configured as RAID 0
> We did not see any performance difference with the RAID card's JBOD mode, so
> we continued with RAID 0
> The RAID card's write-back cache is also used because it adds extra IOPS too!
>
> Achieved IOPS : 35K (single client)
> We tested up to 10 clients, and Ceph shares this usage fairly, at almost 4K
> IOPS for each.
>
> Test Command : fio --randrepeat=1 --ioengine=libaio --direct=1
> --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
> --numjobs=8 --readwrite=randwrite --group_reporting
>
>
> Regards
> Vahric Muhtaryan
>


Re: [Openstack-operators] Please give your opinion about "openstack server migrate" command.

2017-02-17 Thread Rikimaru Honjo

Hi Marcus,

On 2017/02/17 15:05, Marcus Furlong wrote:

On 17 February 2017 at 16:47, Rikimaru Honjo
 wrote:

Hi all,

I found and reported an unfriendly behavior of the "openstack server migrate"
command when I maintained my environment.[1]
But I'm wondering which solution is better.
Do you have opinions on the following solutions from an operating point of
view?
I will commit a patch according to your opinions once they are gathered.

[1]https://bugs.launchpad.net/python-openstackclient/+bug/1662755
---
[Actual]
If the user runs "openstack server migrate --block-migration ",
the openstack client calls the cold migration API.
The "--block-migration" option will be ignored if the user doesn't specify "--live".

But, IMO, this is unkind.
This causes unexpected operations for operators.


+1 This has confused/annoyed me before.



P.S.
"--shared-migration" option has same issue.


For the shared migration case, there is also this bug:

   https://bugs.launchpad.net/nova/+bug/1459782

which, if fixed/implemented would negate the need for
--shared-migration? And would fix also "nova resize" on shared
storage.

In my understanding, that report is about the libvirt driver's behavior.
On the other hand, my report is about the logic of the openstack client.

Current "openstack server migrate" command has following logic:

* openstack server migrate
   +-User don't specify "--live"
   | + Call cold-migrate API.
   |   Ignore the "--block-migration" and "--shared-migration" options if the user
specifies them.
   |
   +-User specify "--live"
   | + Call live-migration API.
   |
   +-User specify "--live --block-migration"
   | + Call block-live-migration API.
   |
   +-User specify "--live --shared-migration"
 + Call live-migration API.[1]

[1]
"--shared-migration" means live-migration(not block-live-migrate) in "server 
migrate" command.
In other words, "server migrate --live" and "server migrate --live 
--shared-migration"
are same operation.
I'm wondering why "--shared-migration" is existed...
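
To illustrate the asymmetry, a minimal sketch of the resulting commands
(assuming the 2017-era client syntax in which "--live" takes a destination
host); only the variants that include "--live" ever reach the live-migration API:

  # cold migration: "--block-migration" is silently ignored here
  openstack server migrate --block-migration <server>

  # live migration (equivalent to also passing "--shared-migration")
  openstack server migrate --live <target-host> <server>

  # block live migration
  openstack server migrate --live <target-host> --block-migration <server>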



Cheers,
Marcus.



--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
NTT Software Corporation
Cloud & Security Business Division, First Business Unit (CS1BU)
Rikimaru Honjo
TEL.  :045-212-7539
E-mail:honjo.rikim...@po.ntts.co.jp
Yokohama i-Mark Place, 13F
4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012





[Openstack-operators] pip problems with openstack-ansible deployment

2017-02-17 Thread Danil Zhigalin (Europe)
   I noticed one error in my previous explanation. I am running Ubuntu 14.04 
LTS, not 16.04.



Danil Zhigalin
Technical Consultant
Tel: +49 211 1717 1260
Mob: +49 174 151 8457
danil.zhiga...@dimensiondata.com

Derendorfer Allee 26, Düsseldorf, North Rhine-Westphalia, 40476, Germany.

For more information, please go to www.dimensiondata.com

Dimension Data Germany AG & Co.KG, Horexstraße 7, 61352 Bad Homburg
Sitz: Bad Homburg, Amtsgericht Bad Homburg, HRA 3207
Pers. Haftende Ges : Dimension Data Verwaltungs AG, Sitz Bad Homburg.
Amtsgericht Bad Homburg, HRB 6172
Vorstand: Roberto Del Corno
Vors. des Aufsichtsrats: Andrew Coulsen.


-Original Message-
From: Danil Zhigalin (Europe)
Sent: 17 February 2017 09:15
To: 'openstack-operators@lists.openstack.org'
Subject: pip problems with openstack-ansible deployment

Hello everyone,

Context:
openstack-ansible: stable/newton
OS: ubuntu 16.04 LTS

I am having trouble completing my deployment due to pip errors.

I have a 2-node setup and one separate deployment node. I am using one of the 
nodes to host all controller, network and storage functions, and the other as a 
compute node. The repo container with the repo server is also hosted on the 
controller node. I already ran into similar problems as Achi Hamza, who reported 
a pip issue on Thu Nov 17 08:34:14 UTC 2016 on this mailing list.

This is how my openstack_user_config.yml file looks (as in Hamza's case, the 
internal and external addresses are the same):

global_overrides:
  internal_lb_vip_address: 172.21.51.152
  external_lb_vip_address: 172.21.51.152 <...>

The recommendations he got from other users were to set:

openstack_service_publicuri_proto: http
openstack_external_ssl: false
haproxy_ssl: false

in /etc/openstack_deploy/user_variables.yml

These recommendations helped in my case as well, and I was able to advance 
further until I faced another pip issue in the same playbook.

My current problem is that none of the containers can install pip packages from 
the repository.

TASK [galera_client : Install pip packages] 
FAILED - RETRYING: TASK: galera_client : Install pip packages (5 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (4 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (3 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (2 retries left).
FAILED - RETRYING: TASK: galera_client : Install pip packages (1 retries left).
fatal: [control1_galera_container-434df170]: FAILED! => {"changed": false, 
"cmd": "/usr/local/bin/pip install -U --constraint 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
  MySQL-python", "failed": true, "msg": "stdout: Collecting mysql_python==1.2.5 
(from -c 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 (line 81))\n\n:stderr:   Could not find a version that satisfies the 
requirement mysql_python==1.2.5 (from -c 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 (line 81)) (from versions: )\nNo matching distribution found for 
mysql_python==1.2.5 (from -c 
http://172.21.51.152:8181/os-releases/14.0.7/requirements_absolute_requirements.txt
 (line 81))\n"}

I already checked everything related to the HAproxy and tcpdumped on the repo 
side to see what requests are coming when pip install is called.

I found that there was a HTTP GET to the URL 
http://172.21.51.152:8181/os-releases/14.0.7/

I saw that it was forwarded by the proxy to the repo server and that repo 
server returned index.html from /var/www/repo/os-releases/14.0.7/

ls /var/www/repo/os-releases/14.0.7/ | grep index
index.html
index.html.1
index.html.2

I also checked that MySQL-python is in the repo:

root@control1-repo-container-dad60ff0:~# ls /var/www/repo/os-releases/14.0.7/ | grep mysql_python
mysql_python-1.2.5-cp27-cp27mu-linux_x86_64.whl

But for some reason pip can't figure out it is there.

I very much appreciate your help in solving this issue.

Best regards,
Danil
