Re: [openstack-dev] [Large Deployments Team][Performance Team] New informal working group suggestion

2015-09-30 Thread Rogon, Kamil
Hello,

Thanks Dina for bringing up this great idea.



My team at Intel works on performance testing, so we will likely be part of 
that project.

The performance aspect at large scale is an obstacle for enterprise 
deployments. For that reason, the Win The Enterprise group may also be 
interested in this topic.



Regards,

Kamil Rogon



Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk



From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Wednesday, September 30, 2015 10:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Large Deployments Team][Performance Team] New 
informal working group suggestion



Sandeep,



sorry for the late response :) I'm hoping to define 'spheres of interest' and 
the most painful areas based on people's experience at the Tokyo summit, and 
we'll find out what most needs to be tested and what can actually be done. You 
can share your ideas about what needs to be tested and focused on in the 
https://etherpad.openstack.org/p/openstack-performance-issues etherpad; this 
will be the pool of ideas I'm going to use in Tokyo.



I can either create an IRC channel for the discussions, or we can use the 
#openstack-operators channel, since LDT is already using it for communication. 
After the Tokyo summit I'm planning to set up a Doodle vote to find a time 
people are comfortable with for periodic meetings :)



Cheers,

Dina



On Fri, Sep 25, 2015 at 1:52 PM, Sandeep Raman wrote:

On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova wrote:

Hey, OpenStackers!



I'm writing to propose organising a new informal team to work specifically on 
OpenStack performance issues. This will be a sub-team of the already existing 
Large Deployments Team, and I suppose it will be a good idea to gather people 
interested in OpenStack performance in one room, identify which issues are 
worrying contributors and what can be done, and share the results of 
performance research :)



Dina, I'm focused on performance and scale testing [no coding background]. How 
can I contribute, and what is expected from this informal team?



So please volunteer to take part in this initiative. I hope many people will 
be interested, and we'll be able to use a cross-project session slot to meet 
in Tokyo and hold a kick-off meeting.



I'm not coming to Tokyo. How could I still be part of the discussions, if any? 
I also feel it would be good to have an IRC channel for perf-scale discussion. 
Let me know your thoughts.



I would like to apologise for writing to two mailing lists at the same time, 
but I want to make sure that everyone who might be interested notices this 
email.



Thanks and see you in Tokyo :)



Cheers,

Dina



-- 

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Ceph software version in next releases

2015-05-07 Thread Rogon, Kamil
Hello,

 

I would like to ask what Ceph versions are scheduled for next releases.

 

I see the blueprint [1] for upgrading to the next stable release (from Firefly
to Giant), but it is still in the drafting state.

That upgrade is important for the Fuel 7.0 release, as it introduces a lot of
improvements for flash backends, which are essential nowadays.

I think you should consider the next Ceph version (Hammer), as it is marked as
LTS and should supersede 0.80.x Firefly. What's more, it has already been
released [2].

 

Regarding the upcoming 6.0.1 / 6.1 release, I understand that you are sticking
with the Firefly branch. However, will it be updated from 0.80.7 to 0.80.9?
That release fixes a performance regression in librbd [3].

 

[1] https://blueprints.launchpad.net/fuel/+spec/upgrade-ceph

[2] http://ceph.com/docs/next/releases/

[3] http://docs.ceph.com/docs/next/release-notes/#v0-80-9-firefly

 

 

Regards,

Kamil Rogon



Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Separating Ceph pools depending on storage type

2015-03-19 Thread Rogon, Kamil
Hello,

I want to initiate a discussion about different backend storage types for
Ceph. Currently all types of drives (HDD, SAS, SSD) are treated the same way,
so performance can vary widely.

It would be good to detect SSD drives and create a separate Ceph pool for
them. From the user's perspective, it should be possible to select a pool when
scheduling an instance (for a VM that needs high IOPS, like a database); a
rough sketch of the idea follows below.
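
To make this more concrete, here is a minimal sketch assuming the python-rados
bindings. The pool name and CRUSH ruleset id are placeholders, and the
SSD-only ruleset itself would have to be created beforehand (e.g. with
'ceph osd crush rule create-simple' against an SSD-only CRUSH root).

import json

import rados

SSD_POOL = 'volumes-ssd'  # hypothetical pool for high-IOPS workloads
SSD_RULESET = 1           # placeholder id of an SSD-only CRUSH ruleset

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Create the dedicated pool if it does not exist yet.
if not cluster.pool_exists(SSD_POOL):
    cluster.create_pool(SSD_POOL)

# Bind the pool to the SSD-only ruleset ('crush_ruleset' is the
# Firefly-era variable name for this setting).
cmd = {
    'prefix': 'osd pool set',
    'pool': SSD_POOL,
    'var': 'crush_ruleset',
    'val': str(SSD_RULESET),
}
ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), '')
if ret != 0:
    raise RuntimeError('setting crush_ruleset failed: %s' % errs)

cluster.shutdown()

Cinder's multi-backend support could then expose such a pool as a separate
volume type, so users can explicitly request SSD-backed storage for an
instance's volumes.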

Regards,
Kamil Rogon

---
Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev