[openstack-dev] Looking to contribute

2015-11-02 Thread Lisa Jenkins
Hi, I'm a new developer looking to contribute and I'm open to anything.  Which
projects need resources and have some low-level bugs or similar tasks for a
newbie to get started on?

Thanks,

Lisa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [scheduler] select_destinations() behavior

2015-04-08 Thread Lisa Zangrando

Dear all,

just a quick question about the behavior of select_destinations() in Icehouse: 
is this method supposed to select and consume resources, or just select them 
without consuming?
I'm asking because, if I invoke that method to test resource availability and 
call run_instance() only when the result is OK, the scheduler's filters see a 
wrong host state (e.g. a wrong vcpus_used). This is because both methods use 
_schedule(), which consumes resources 
(chosen_host.obj.consume_from_instance(instance_properties)).
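
For illustration, here is a minimal Python sketch of the double-consumption 
effect described above. It is simplified and hypothetical (it only mimics the 
behavior, it is not the actual Icehouse code):

# Simplified sketch of the double-consumption effect: HostState only tracks
# vcpus, and both entry points go through _schedule().

class HostState(object):
    def __init__(self, vcpus_total):
        self.vcpus_total = vcpus_total
        self.vcpus_used = 0

    def consume_from_instance(self, instance_properties):
        # In the real scheduler this is called inside _schedule() for the
        # chosen host, which is why a "check only" call still mutates state.
        self.vcpus_used += instance_properties['vcpus']


def _schedule(hosts, instance_properties):
    # Pick the host with the most free vcpus, then consume from it.
    chosen = max(hosts, key=lambda h: h.vcpus_total - h.vcpus_used)
    chosen.consume_from_instance(instance_properties)
    return chosen


def select_destinations(hosts, instance_properties):
    # Goes through _schedule(), so it consumes resources as a side effect.
    return _schedule(hosts, instance_properties)


host = HostState(vcpus_total=4)
request = {'vcpus': 2}

select_destinations([host], request)   # availability pre-check
_schedule([host], request)             # what run_instance() would trigger
print(host.vcpus_used)                 # prints 4, though only one VM is booted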


Is this behavior intentional?

Thanks in advance.
cheers,
Lisa



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [blazar]: proposal for a new lease type

2014-11-03 Thread Lisa Zangrando
Hi Nikolay,
yes, I'm in Paris this week. That's good! See you on IRC next week.
Thanks a lot.
Cheers,
Lisa

- Original message -
From: Nikolay Starodubtsev nstarodubt...@mirantis.com
Sent: 02/11/2014 21:39
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Cc: ver  Marco Verlato marco.verl...@pd.infn.it; sga  Massimo 
Sgaravatto massimo.sgarava...@pd.infn.it
Subject: Re: [openstack-dev] [blazar]: proposal for a new lease type

Hi Lisa,
As far as I understand, the main idea of your blueprint is something we've 
planned to do in the future. So yes, this is something Blazar should do. The 
problem is that we have some more immediate issues: we are in the middle of the 
rename process (yeah, it is taking a really long time for us) and our 
devstack job is broken (I tried to fix it, but I have had trouble finding the time). 
If you want to discuss how we, I mean the Blazar core team, and you,
the team from the Italian National Institute for Nuclear Physics, can collaborate, 
we can discuss it on IRC in the Blazar channel, I think. We just need to agree on a 
time slot for that. Maybe the week after the Summit? If I understand correctly, 
you'll be there this week.



  
Nikolay Starodubtsev

Software Engineer
Mirantis Inc.



Skype: dark_harlequine1



2014-10-31 19:19 GMT+04:00 Lisa lisa.zangra...@pd.infn.it:

Hi Nikolay,

many thanks.
Cheers,
Lisa


On 31/10/2014 14:10, Nikolay Starodubtsev wrote:

Hi Lisa, Sylvain,
I'll take a look at the blueprint next week and will try to leave some feedback 
on it.
Stay tuned.


  
Nikolay Starodubtsev

Software Engineer
Mirantis Inc.



Skype: dark_harlequine1



2014-10-31 16:14 GMT+04:00 Lisa lisa.zangra...@pd.infn.it:

Hi Sylvain,

thanks for your answer.
Actually we haven't developed that yet, because we'd like to be sure that our 
proposal fits well with BLAZAR.
We already implemented a pluggable advanced scheduler for Nova which addresses 
the issues we are experiencing with OpenStack at the Italian National Institute 
for Nuclear Physics. This scheduler, named FairShareScheduler, makes 
OpenStack more efficient and flexible in terms of resource usage. Of course we 
wish to integrate our work into OpenStack, and we have tried several times to start 
a discussion and a possible collaboration with the OpenStack developers, but it 
has proven quite difficult.
The GANTT people suggested that we contact the BLAZAR team because it may be 
closer to our scope. Is that the case? I would therefore appreciate knowing 
whether you might be interested in our proposal.

Thanks for your attention.
Cheers,
Lisa





On 31/10/2014 10:08, Sylvain Bauza wrote:



On 31/10/2014 09:46, Lisa wrote:

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and start a 
discussion in Paris next week at the OpenStack Summit. Do you have a time 
slot for a very short meeting on this?
Thanks in advance.
Cheers,
Lisa



Hi Lisa,

At the moment I'm quite busy on Nova splitting out the scheduler, so I can 
hardly dedicate time to Blazar. That said, I would appreciate it if you could 
propose a draft implementation attached to the blueprint, so I could 
glance at it and see what you aim to deliver.

Thanks,
-Sylvain


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and I'd like 
to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it would be nice 
to talk with you and the BLAZAR team about my proposal in person.
What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:



On 18/09/2014 15:27, Lisa wrote:

Hi all, 

my name is Lisa Zangrando and I work at the Italian National Institute for 
Nuclear Physics (INFN). In particular, I am leading a team which is addressing 
the issue of resource-usage efficiency in OpenStack. 
Currently OpenStack allows just a static partitioning model, where resource 
allocation to the user teams (i.e. the projects) can be done only through 
fixed quotas which cannot be exceeded even if there are unused resources 
assigned to different projects. 
We studied the available BLAZAR documentation and, in agreement with Tim Bell 
(who is responsible for the OpenStack cloud project at CERN), we think this issue 
could be addressed within your framework. 
Please find attached a document that describes our use cases (actually we think 
that many other environments have to deal with the same problems) and how they 
could be managed in BLAZAR, by defining a new lease type (i.e. a fairShare lease), 
to be considered as an extension of the list of already supported lease types. 
I would then be happy to discuss these ideas with you.

Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Lisa

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
start a discussion in Paris next week at the OpenStack Summit. Do 
you have a time slot for a very short meeting on this?

Thanks in advance.
Cheers,
Lisa


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
I'd like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it would 
be nice to talk with you and the BLAZAR team about my proposal in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular, I am leading a 
team which is addressing the issue of resource-usage efficiency in 
OpenStack.
Currently OpenStack allows just a static partitioning model, where 
resource allocation to the user teams (i.e. the projects) can be 
done only through fixed quotas which cannot be exceeded even 
if there are unused resources assigned to different projects.
We studied the available BLAZAR documentation and, in agreement 
with Tim Bell (who is responsible for the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with 
the same problems) and how they could be managed in BLAZAR, by 
defining a new lease type (i.e. a fairShare lease), to be considered as 
an extension of the list of already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go through your proposal, but could you please post the main 
concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it, so we can discuss the implementation?
Of course, don't hesitate to ping me or the Blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Lisa

Hi Sylvain,

thanks for your answer.
Actually we haven't developed that yet, because we'd like to be sure that 
our proposal fits well with BLAZAR.
We already implemented a pluggable advanced scheduler for Nova which 
addresses the issues we are experiencing with OpenStack at the Italian 
National Institute for Nuclear Physics. This scheduler, named 
FairShareScheduler, makes OpenStack more efficient and flexible 
in terms of resource usage. Of course we wish to integrate our work into 
OpenStack, and we have tried several times to start a discussion and a 
possible collaboration with the OpenStack developers, but it has proven 
quite difficult.
The GANTT people suggested that we contact the BLAZAR team because it may 
be closer to our scope. Is that the case? I would therefore appreciate 
knowing whether you might be interested in our proposal.


Thanks for your attention.
Cheers,
Lisa




On 31/10/2014 10:08, Sylvain Bauza wrote:


On 31/10/2014 09:46, Lisa wrote:

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
start a discussion in Paris next week at the OpenStack Summit. Do 
you have a time slot for a very short meeting on this?

Thanks in advance.
Cheers,
Lisa



Hi Lisa,

At the moment I'm quite busy on Nova splitting out the scheduler, 
so I can hardly dedicate time to Blazar. That said, I would 
appreciate it if you could propose a draft implementation attached to 
the blueprint, so I could glance at it and see what you aim to deliver.


Thanks,
-Sylvain


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and 
I'd like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it 
would be nice to talk with you and the BLAZAR team about my proposal 
in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular, I am leading a 
team which is addressing the issue of resource-usage efficiency in 
OpenStack.
Currently OpenStack allows just a static partitioning model, where 
the resource allocation to the user teams (i.e. the projects) can 
be done only through fixed quotas which cannot be exceeded 
even if there are unused resources assigned to different 
projects.
We studied the available BLAZAR documentation and, in agreement 
with Tim Bell (who is responsible for the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with 
the same problems) and how they could be managed in BLAZAR, by 
defining a new lease type (i.e. a fairShare lease), to be considered 
as an extension of the list of already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go through your proposal, but could you please post the 
main concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it, so we can discuss the implementation?
Of course, don't hesitate to ping me or the Blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-31 Thread Lisa

Hi Nikolay,

many thanks.
Cheers,
Lisa

On 31/10/2014 14:10, Nikolay Starodubtsev wrote:

Hi Lisa, Sylvain,
I'll take a look at the blueprint next week and will try to leave some 
feedback on it.

Stay tuned.


Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


2014-10-31 16:14 GMT+04:00 Lisa lisa.zangra...@pd.infn.it:


Hi Sylvain,

thanks for your answer.
Actually we haven't developed that yet, because we'd like to be
sure that our proposal fits well with BLAZAR.
We already implemented a pluggable advanced scheduler for Nova
which addresses the issues we are experiencing with OpenStack at
the Italian National Institute for Nuclear Physics. This scheduler,
named FairShareScheduler, makes OpenStack more efficient
and flexible in terms of resource usage. Of course we wish to
integrate our work into OpenStack, and we have tried several times to
start a discussion and a possible collaboration with the OpenStack
developers, but it has proven quite difficult.
The GANTT people suggested that we contact the BLAZAR team because it
may be closer to our scope. Is that the case? I would therefore
appreciate knowing whether you might be interested in our proposal.

Thanks for your attention.
Cheers,
Lisa




On 31/10/2014 10:08, Sylvain Bauza wrote:


On 31/10/2014 09:46, Lisa wrote:

Dear Sylvain and BLAZAR team,

I'd like to receive your feedback on our blueprint
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease)
and start a discussion in Paris next week at the OpenStack
Summit. Do you have a time slot for a very short meeting on this?
Thanks in advance.
Cheers,
Lisa



Hi Lisa,

At the moment I'm quite busy on Nova splitting out the
scheduler, so I can hardly dedicate time to Blazar. That said,
I would appreciate it if you could propose a draft implementation
attached to the blueprint, so I could glance at it and see what
you aim to deliver.

Thanks,
-Sylvain


On 28/10/2014 12:07, Lisa wrote:

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and
I'd like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it
would be nice to talk with you and the BLAZAR team about my
proposal in person.
What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National
Institute for Nuclear Physics (INFN). In particular, I am
leading a team which is addressing the issue of resource-usage
efficiency in OpenStack.
Currently OpenStack allows just a static partitioning model,
where resource allocation to the user teams (i.e. the
projects) can be done only through fixed quotas which
cannot be exceeded even if there are unused resources
assigned to different projects.
We studied the available BLAZAR documentation and, in
agreement with Tim Bell (who is responsible for the OpenStack
cloud project at CERN), we think this issue could be
addressed within your framework.
Please find attached a document that describes our use cases
(actually we think that many other environments have to deal
with the same problems) and how they could be managed in
BLAZAR, by defining a new lease type (i.e. a fairShare lease),
to be considered as an extension of the list of already
supported lease types.
I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go through your proposal, but could you please post
the main concepts of what you plan to add into an etherpad and
create a blueprint [1] mapped to it, so we can discuss the
implementation?
Of course, don't hesitate to ping me or the Blazar community
in #openstack-blazar if you need help with the process or the
current Blazar design.

Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list

Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-10-28 Thread Lisa

Dear Sylvain,

as you suggested a few weeks ago, I created the blueprint 
(https://blueprints.launchpad.net/blazar/+spec/fair-share-lease) and I'd 
like to start a discussion.
I will be in Paris next week at the OpenStack Summit, so it would be 
nice to talk with you and the BLAZAR team about my proposal in person.

What do you think?

thanks in advance,
Cheers,
Lisa


On 18/09/2014 16:00, Sylvain Bauza wrote:


On 18/09/2014 15:27, Lisa wrote:

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular, I am leading a 
team which is addressing the issue of resource-usage efficiency in 
OpenStack.
Currently OpenStack allows just a static partitioning model, where 
resource allocation to the user teams (i.e. the projects) can be done 
only through fixed quotas which cannot be exceeded even if 
there are unused resources assigned to different projects.
We studied the available BLAZAR documentation and, in agreement 
with Tim Bell (who is responsible for the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with the 
same problems) and how they could be managed in BLAZAR, by defining a 
new lease type (i.e. a fairShare lease), to be considered as an extension 
of the list of already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go through your proposal, but could you please post the main 
concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it, so we can discuss the implementation?
Of course, don't hesitate to ping me or the Blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [blazar]: proposal for a new lease type

2014-09-18 Thread Lisa

Hi all,

my name is Lisa Zangrando and I work at the Italian National Institute 
for Nuclear Physics (INFN). In particular, I am leading a team which is 
addressing the issue of resource-usage efficiency in 
OpenStack.
Currently OpenStack allows just a static partitioning model, where 
resource allocation to the user teams (i.e. the projects) can be done 
only through fixed quotas which cannot be exceeded even if there 
are unused resources assigned to different projects.
We studied the available BLAZAR documentation and, in agreement with 
Tim Bell (who is responsible for the OpenStack cloud project at CERN), we 
think this issue could be addressed within your framework.
Please find attached a document that describes our use cases (actually 
we think that many other environments have to deal with the same 
problems) and how they could be managed in BLAZAR, by defining a new 
lease type (i.e. a fairShare lease), to be considered as an extension of the 
list of already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Use cases for a new lease type in BLAZAR.pdf
Description: Adobe PDF document
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar]: proposal for a new lease type

2014-09-18 Thread Lisa

Hi Sylvain,

thanks a lot for your answer.

I'd like to extend (if possible) BLAZAR's list of supported 
lease types by adding a new one which covers a specific use case that, 
it seems, is not yet supported by BLAZAR. As you know, in OpenStack 
every project has a granted fixed quota which cannot be extended 
dynamically. This implies that a project cannot allocate unused 
resources assigned to different projects. The idea is to lease such 
unused resources just for a limited time (e.g. from a few hours 
to one day). Suppose project A has consumed all of its own assigned 
resources while project B, at the moment, has several 
available resources. B may offer its own resources to A just for a 
limited and well-known duration (i.e. not forever), so that in the 
future it can claim, if needed, the available resources belonging to A 
(or other projects).
This is a fair-share mechanism which guarantees that resource usage is 
equally distributed among users and projects by considering the portion 
of the resources allocated to them (i.e. the share) and the resources 
already consumed. Please note that the share is a different concept from 
the quota in cloud terminology. You can consider this proposal as a 
way to update the quotas dynamically by considering the historical usage 
of each project. This approach should make OpenStack very efficient in 
terms of resource utilization.
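
To illustrate the idea of adjusting priorities from shares and historical 
usage, here is a small, purely illustrative Python sketch. It is not BLAZAR 
or FairShareScheduler code; the decay formula and the normalization are my 
assumptions, in the spirit of batch-system fair-share factors:

# Purely illustrative fair-share factor: projects that recently consumed
# less than their assigned share get a value close to 1 (high priority),
# projects that consumed more get a value close to 0 (low priority).

def fairshare_factor(share, usage, total_shares, total_usage):
    # share/total_shares: fraction of the cloud assigned to the project
    # usage/total_usage: fraction of recent consumption due to the project
    norm_share = float(share) / total_shares
    norm_usage = float(usage) / total_usage if total_usage else 0.0
    return 2.0 ** (-norm_usage / norm_share)

# Project A: large share but heavy recent usage; project B: small share,
# almost idle. B ends up with the higher factor, i.e. its effective quota
# is raised while A's is lowered.
print(fairshare_factor(share=80, usage=900, total_shares=100, total_usage=1000))  # ~0.46
print(fairshare_factor(share=20, usage=100, total_shares=100, total_usage=1000))  # ~0.71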


About the blueprint I will try to create a new one.

Thanks again.
Cheers,
Lisa

On 18/09/2014 16:00, Sylvain Bauza wrote:


Le 18/09/2014 15:27, Lisa a écrit :

Hi all,

my name is Lisa Zangrando and I work at the Italian National 
Institute for Nuclear Physics (INFN). In particular I am leading a 
team which is addressing the issue concerning the efficiency in the 
resource usage in OpenStack.
Currently OpenStack allows just a static partitioning model where the 
resource allocation to the user teams (i.e. the projects) can be done 
only by considering fixed quotas which cannot be exceeded even if 
there are unused resources (but) assigned to different projects.
We studied the available BLAZAR's documentation and, in agreement 
with Tim Bell (who is responsible the OpenStack cloud project at 
CERN), we think this issue could be addressed within your framework.
Please find attached a document that describes our use cases 
(actually we think that many other environments have to deal with the 
same problems) and how they could be managed in BLAZAR, by defining a 
new lease type (i.e. fairShare lease) to be considered as extension 
of the list of the already supported lease types.

I would then be happy to discuss these ideas with you.

Thanks in advance,
Lisa



Hi Lisa,

Glad to see you're interested in Blazar.

I tried to go thru your proposal, but could you please post the main 
concepts of what you plan to add into an etherpad and create a 
blueprint [1] mapped to it so we could discuss on the implementation ?
Of course, don't hesitate to ping me or the blazar community in 
#openstack-blazar if you need help with the process or the current 
Blazar design.


Thanks,
-Sylvain

[1] https://blueprints.launchpad.net/blazar/




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Nathan Reller for barbican-core

2014-07-14 Thread Lisa Clark
+1

-Lisa

From: Douglas Mendizabal douglas.mendiza...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, July 10, 2014 12:11 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Nathan Reller rellerrel...@yahoo.com
Subject: [openstack-dev] [barbican] Nominating Nathan Reller for barbican-core

Hi Everyone,

I would also like to nominate Nathan Reller for the barbican-core team.

Nathan has been involved with the Key Management effort since early 2013.  
Recently, Nate has been driving the development of a KMIP backend for Barbican, 
which will enable Barbican to be used with KMIP devices.  Nate’s input to the 
design of the plug-in mechanisms in Barbican has been extremely helpful, as 
well as his feedback in CR reviews.

As a reminder to barbican-core members, we use the voting process outlined in 
https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our team.

Thanks,
Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Ade Lee for barbican-core

2014-07-14 Thread Lisa Clark
+1

-Lisa

From: Douglas Mendizabal douglas.mendiza...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, July 10, 2014 11:55 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
a...@redhat.com
Subject: [openstack-dev] [barbican] Nominating Ade Lee for barbican-core

Hi Everyone,

I would like to nominate Ade Lee for the barbican-core team.

Ade has been involved in the development of Barbican since January of this 
year, and he’s been driving the work to enable DogTag to be used as a back end 
for Barbican.  Ade’s input to the design of barbican has been invaluable, and 
his reviews are always helpful, which has earned him the respect of the 
existing barbican-core team.

As a reminder to barbican-core members, we use the voting process outlined in 
https://wiki.openstack.org/wiki/Barbican/CoreTeam to add members to our team.

Thanks,
Doug


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-10 Thread Lisa

Hi Joe,

we know that Amazon's spot market is based on an economic model. Suppose we assign 
to each group a virtual amount of money for renting spot instances.
The money would then reflect the share concept in our model. This means that a 
large amount of money can potentially rent more resources (actually it depends on the current 
resource price, the supply, etc.).
As in the real world, this economic model emphasizes the difference between 
rich and poor, so a rich group may rent all the available resources for a long time.
This approach certainly maximizes resource utilization, but it is not very 
fair. It should guarantee that the usage of the resources is fairly 
distributed among users and teams according to the mean number of VMs they 
have run, by considering the portion of the resources allocated to them (i.e. the 
share) and the evaluation of the effective resource usage consumed in the 
recent past.
Probably this issue may be solved by defining fair lease contracts between the 
user and the resource provider. What do you think?
To make it clearer, for simplicity let's say that we have only two teams, A and B, both undertaking activities of Type 3. The site administrator assigns $1000 to 
group A and just $100 to B. Suppose A and B both need more resources at the same time. Then B will be able to rent 
something only when A releases its resources or when A runs out of money. With a fair-share algorithm, instead, B would be able to rent some resources because 
the purchasing power of A is adjusted according to its recent usage.
At the moment I'm still not really sure that this approach can cover this use 
case.

thanks a lot.
Cheers,
Lisa




On Tue, Jul 8, 2014 at 4:18 AM, Lisa lisa.zangrando at pd.infn.it wrote:

   Hi Sylvain,

   On 08/07/2014 09:29, Sylvain Bauza wrote:

   Le 08/07/2014 00:35, Joe Gordon wrote:

   On Jul 7, 2014 9:50 AM, Lisa lisa.zangrando at pd.infn.it wrote:
   
    Hi all,
   
    during the last IRC meeting, to help you better understand our proposal (i.e.
   the FairShareScheduler), you suggested that we provide (for tomorrow's
   meeting) a document which fully describes our use cases. Such a document is
   attached to this e-mail.
    Any comment and feedback is welcome.

   The attached document was very helpful, thank you.

   It sounds like Amazon's concept of spot instances (as a user-facing
   abstraction) would solve your use case in its entirety. I see spot
   instances as the general solution to the question of how to keep a cloud at
   full utilization. If so, then perhaps we can refocus this discussion on the
   best way for OpenStack to support Amazon-style spot instances.

   Can't agree more. Thanks Lisa for your use cases, really helpful for
   understanding your concerns, which are really HPC-based. If we want to
   translate what you call Type 3 into a non-HPC world where users compete
   for a resource, the spot instance model comes to mind as a clear fit.

   Our model is similar to Amazon's spot instance model because both try
   to maximize resource utilization. The main difference is the mechanism
   used for assigning resources to users (the user's offer in terms of
   money vs. the user's share). They also differ in how the allocated
   resources are released. In our model, whenever a user requests the creation
   of a Type 3 VM, she has to select one of the possible lifetimes
   (short = 4 hours, medium = 24 hours, long = 48 hours). When the time
   expires, the VM is automatically released (if not explicitly released by
   the user earlier).
   Instead, in Amazon, the spot instance is released whenever the spot price
   rises.

I think you can adapt your use case to the spot instance model by
allocating the different groups 'money' instead of a pre-defined share. If
one user tries to use more than their share they will run out of 'money.'
Would that fully align the two models?

Also, why pre-define the different lifetimes for Type 3 instances?


   I can see that you mention Blazar in your paper, and I appreciate this.
   Climate (because that's the former and better-known name) was kicked off
   because of exactly the rationale you mention: we need to define a contract
   (call it an SLA if you wish) between the user and the platform.
   And you probably missed it, because I was probably unclear when we
   discussed, but the final goal for Climate is *not* to have a start_date and
   an end_date, but just to *provide a contract between the user and the
   platform* (see
   https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )

   Defining spot instances in OpenStack is a recurring question, discussed
   each time we presented Climate (now Blazar) at the Summits: what is
   Climate

Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-08 Thread Lisa

Hi Sylvain,

On 08/07/2014 09:29, Sylvain Bauza wrote:

On 08/07/2014 00:35, Joe Gordon wrote:



On Jul 7, 2014 9:50 AM, Lisa lisa.zangra...@pd.infn.it wrote:


 Hi all,

 during the last IRC meeting, to help you better understand our proposal 
(i.e. the FairShareScheduler), you suggested that we provide (for 
tomorrow's meeting) a document which fully describes our use cases. 
Such a document is attached to this e-mail.

 Any comment and feedback is welcome.

The attached document was very helpful, thank you.

It sounds like Amazon's concept of spot instances (as a user-facing 
abstraction) would solve your use case in its entirety. I see spot 
instances as the general solution to the question of how to keep a 
cloud at full utilization. If so, then perhaps we can refocus this 
discussion on the best way for OpenStack to support Amazon-style spot 
instances.






Can't agree more. Thanks Lisa for your use cases, really helpful for 
understanding your concerns, which are really HPC-based. If we want to 
translate what you call Type 3 into a non-HPC world where users 
compete for a resource, the spot instance model comes to mind as a 
clear fit.


our model is similar to Amazon's spot instance model because both 
try to maximize resource utilization. The main difference is the 
mechanism used for assigning resources to users (the user's offer in 
terms of money vs. the user's share). They also differ in how the 
allocated resources are released. In our model, whenever a user 
requests the creation of a Type 3 VM, she has to select one of the 
possible lifetimes (short = 4 hours, medium = 24 hours, long 
= 48 hours). When the time expires, the VM is automatically released (if 
not explicitly released by the user earlier).
Instead, in Amazon, the spot instance is released whenever the spot 
price rises.
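
For illustration only (the class names and the periodic reaper are my 
assumptions, not the actual implementation), the fixed lifetimes and the 
automatic release could be modelled like this:

# Illustrative sketch of fixed lifetime classes and automatic release of
# Type 3 VMs. Not the actual FairShareScheduler code.

import datetime

LIFETIMES = {
    'short': datetime.timedelta(hours=4),
    'medium': datetime.timedelta(hours=24),
    'long': datetime.timedelta(hours=48),
}

class Type3Lease(object):
    def __init__(self, vm_id, lifetime_class):
        self.vm_id = vm_id
        self.expires_at = datetime.datetime.utcnow() + LIFETIMES[lifetime_class]

    def expired(self, now=None):
        return (now or datetime.datetime.utcnow()) >= self.expires_at

def reap_expired(leases, terminate_vm):
    """Periodic task: release VMs whose lease has expired and was not
    released explicitly by the user."""
    for lease in list(leases):
        if lease.expired():
            terminate_vm(lease.vm_id)
            leases.remove(lease)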





I can see that you mention Blazar in your paper, and I appreciate 
this. Climate (because that's the former and better-known name) was 
kicked off because of exactly the rationale you mention: we need 
to define a contract (call it an SLA if you wish) between the user and 
the platform.
And you probably missed it, because I was probably unclear when we 
discussed, but the final goal for Climate is *not* to have a 
start_date and an end_date, but just to *provide a contract between 
the user and the platform* (see 
https://wiki.openstack.org/wiki/Blazar#Lease_types_.28concepts.29 )


Defining spot instances in OpenStack is a recurring question, discussed 
each time we presented Climate (now Blazar) at the Summits: what 
is Climate? Is it planning to provide spot instances? Can 
Climate provide spot instances?


I'm not saying that Climate (now Blazar) would be the only project 
involved in managing spot instances. Looking at a draft from a couple 
of months ago, I thought that this scenario would possibly involve 
Climate for best-effort leases (see again the lease concepts in the 
wiki above), but also the Nova scheduler (for accounting for the lease 
requests) and probably Ceilometer (for the auditing and metering side).


Blazar is now at a point where we're missing contributors because we 
are a Stackforge project, so we work with minimal bandwidth and we 
don't have time to implement best-effort leases, but maybe that's 
something we could discuss. If you're willing to contribute to an 
OpenStack-style project, I personally think Blazar is a good one 
because of its low complexity as of now.





Just a few questions. We read your use cases and it seems you had some 
issues with quota handling. How did you solve them?
About Blazar's architecture 
(https://wiki.openstack.org/w/images/c/cb/Climate_architecture.png): does the 
resource plug-in also interact with the nova-scheduler?
Has that scheduler been (or will it be) extended to support 
Blazar's requests?

What is the relationship between nova-scheduler and Gantt?

It would be nice to discuss with you in details.
Thanks a lot for your feedback.
Cheers,
Lisa





Thanks,
-Sylvain






 Thanks a lot.
 Cheers,
 Lisa

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org 

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

2014-07-01 Thread Lisa

Hi Tim,

for sure this is one of the main issues we are facing, and the approach 
you suggested is the same one we are investigating.

Could you provide some details about Heat's proxy renewal mechanism?
Thank you very much for your feedback.
Cheers,
Lisa


On 01/07/2014 08:46, Tim Bell wrote:

Eric,

Thanks for sharing your work, it looks like an interesting development.

I was wondering how Keystone token expiry is handled, since tokens 
generally have a one-day validity. If a request waits in the scheduler for more than one 
day, it would no longer have a valid token. We have similar scenarios with 
Kerberos/AFS credentials in the CERN batch system. There are some interesting 
proxy renewal approaches used by Heat to obtain tokens at a later date which may be 
useful for this problem.

$ nova credentials
+---+-+
| Token | Value   |
+---+-+
| expires   | 2014-07-02T06:39:59Z|
| id| 1a819279121f4235a8d85c694dea5e9e|
| issued_at | 2014-07-01T06:39:59.385417  |
| tenant| {id: 841615a3-ece9-4622-9fa0-fdc178ed34f8, enabled: true, |
|   | description: Personal Project for user timbell, name: |
|   | Personal timbell} |
+---+-+
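
For reference, a hedged sketch of the trust-based approach mentioned above, 
along the lines of what Heat does for deferred authentication. The parameter 
names follow python-keystoneclient's v3 trusts API as best I recall them, and 
the delegated role name is an assumption; verify against your client version 
before relying on this:

# Sketch only: delegate the user's roles to the scheduler's service user via
# a Keystone trust, so a queued request can obtain a fresh token after the
# original one has expired.

from keystoneclient.v3 import client as keystone_v3

def create_scheduler_trust(user_client_kwargs, trustor_user_id,
                           service_user_id, project_id):
    # Authenticate as the end user (auth_url/username/password/project, etc.).
    ks = keystone_v3.Client(**user_client_kwargs)
    trust = ks.trusts.create(
        trustor_user=trustor_user_id,   # the user submitting the request
        trustee_user=service_user_id,   # the scheduler's service account
        project=project_id,
        role_names=['Member'],          # roles to delegate (assumed name)
        impersonation=True,             # let the trustee act as the user
    )
    return trust.id                     # store alongside the queued request

# Later, when the queued request is finally processed, the service user
# authenticates with its own credentials plus trust_id=<stored id> to obtain
# a fresh trust-scoped token and act on the user's behalf, so the original
# token's one-day expiry no longer matters.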

Tim

-Original Message-
From: Eric Frizziero [mailto:eric.frizzi...@pd.infn.it]
Sent: 30 June 2014 16:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][scheduler] Proposal: FairShareScheduler.

Hi All,

we have analyzed the nova-scheduler component (FilterScheduler) in our
OpenStack installation, which is used by some scientific teams.

In our scenario, the cloud resources need to be distributed among the teams by
considering the predefined share (e.g. quota) assigned to each team, the portion
of the resources currently used and the resources they have already consumed.

We have observed that:
1) User requests are processed sequentially (FIFO scheduling), i.e.
FilterScheduler doesn't provide any dynamic priority algorithm;
2) User requests that cannot be satisfied (e.g. if resources are not
available) fail and are lost, i.e. in that scenario nova-scheduler doesn't
provide any queuing of requests;
3) OpenStack simply provides a static partitioning of resources among the various
projects / teams (use of quotas). If project/team 1 systematically
underutilizes its quota over a period while project/team 2 systematically
saturates its quota, the only way to give more resources to project/team 2 is
a manual change (to be done by the admin) to the related quotas.

The need for a better approach enabling more effective scheduling in
OpenStack becomes more and more evident as the number of user
requests to be handled increases significantly. This is a well-known problem
which has already been solved in the past for batch systems.

In order to solve those issues in our OpenStack usage scenario, we have
developed a prototype of a pluggable scheduler, named FairShareScheduler,
with the objective of extending the existing OpenStack scheduler (FilterScheduler)
by integrating a (batch-like) dynamic priority algorithm.

The architecture of the FairShareScheduler is explicitly designed to provide a
high level of scalability. Every user request is assigned a priority value
calculated by considering the share allocated to the user by the administrator
and an evaluation of the effective resource usage consumed in the recent past.
All requests are inserted into a priority queue and processed in parallel by a
configurable pool of workers without interfering with the priority order.
Moreover, all significant information (e.g. the priority queue) is stored in a
persistence layer in order to provide a fault-tolerance mechanism, while a proper
logging system records all relevant events, useful for audit
processing.

In more detail, some features of the FairShareScheduler are (see the sketch
after this list):
a) it dynamically assigns the proper priority to every new user request;
b) the priority of the queued requests is recalculated periodically using the
fair-share algorithm. This guarantees that the usage of the cloud resources is
distributed among users and groups by considering the portion of the cloud
resources allocated to them (i.e. their share) and the resources already consumed;
c) all user requests are inserted into a (persistent) priority queue and then
processed asynchronously by a dedicated process (filtering + weighting phase)
when compute resources are available;
d) from the client's point of view the queued requests remain in the Scheduling
state till the compute resources
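
A minimal sketch of the queue-plus-workers flow described above (illustrative
only; the class and function names are my assumptions, not the prototype's
actual code, and the persistence and logging layers are omitted):

# Illustrative sketch of the FairShareScheduler flow: requests carry a
# fair-share priority, wait in a priority queue, and a pool of workers
# pops them when compute resources become available.

import heapq
import itertools
import threading
import time

class PriorityRequestQueue(object):
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # FIFO tie-break for equal priorities
        self._lock = threading.Lock()

    def put(self, request, priority):
        # heapq is a min-heap, so the priority is negated.
        with self._lock:
            heapq.heappush(self._heap, (-priority, next(self._counter), request))

    def pop(self):
        with self._lock:
            return heapq.heappop(self._heap)[2] if self._heap else None

def worker(queue, has_free_resources, schedule_request):
    # One member of the configurable pool of workers; the periodic
    # re-prioritization task (feature b above) is not shown here.
    while True:
        request = queue.pop() if has_free_resources() else None
        if request is None:
            time.sleep(1)            # nothing to do yet; requests stay "Scheduling"
            continue
        schedule_request(request)    # filtering + weighting phase, then boot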

[openstack-dev] [barbican] Cryptography audit by OSSG

2014-04-18 Thread Lisa Clark
Barbicaneers,

   Is anyone following the openstack-security list and/or part of the
OpenStack Security Group (OSSG)?  This sounds like another group and list
we should keep our eyes on.

   In the thread below on the security list, Nathan Kinder is conducting a
security audit of the various integrated OpenStack projects.  He's
answering questions such as which crypto libraries are being used in the
projects, which algorithms are used, what sensitive data is handled, and what
potential improvements can be made.  Check out the links in the thread below.

   Though we're not yet integrated, it might be beneficial to put together
our security audit page under Security/Icehouse/Barbican.

   Another thing to consider as you're reviewing the security audit pages
of Keystone and Heat (and others as they are added): Would Barbican help
to solve any of the security concerns/issues that these projects are
experiencing?

-Lisa


Message: 5
Date: Thu, 17 Apr 2014 16:27:30 -0700
From: Nathan Kinder nkin...@redhat.com
To: Bryan D. Payne bdpa...@acm.org, Clark, Robert Graham
   robert.cl...@hp.com
Cc: openstack-secur...@lists.openstack.org
   openstack-secur...@lists.openstack.org
Subject: Re: [Openstack-security] Cryptographic Export Controls and
   OpenStack
Message-ID: 53506362.3020...@redhat.com
Content-Type: text/plain; charset=windows-1252

On 04/16/2014 10:28 AM, Bryan D. Payne wrote:
 I'm not aware of a list of the specific changes, but this seems quite
 related to the work that Nathan has started playing with... discussed on
 his blog here:
 
 https://blog-nkinder.rhcloud.com/?p=51

This is definitely related to the security audit effort that I'm
driving.  It's hard to make recommendations on configurations and
deployment architectures from a security perspective when we don't even
have a clear picture of the current state of things in the code from
a security standpoint.  This clear picture is what I'm trying to get to
right now (along with keeping this picture up to date so it doesn't get
stale).

Once we know things such as what crypto algorithms are used and how
sensitive data is being handled, we can see what is configurable and
make recommendations.  We'll surely find that not everything is
configurable and sensitive data isn't well protected in areas, which are
things that we can turn into blueprints and bugs and work on improving
in development.

It's still up in the air as to where this information should be
published once it's been compiled.  It might be on the wiki, or possibly
in the documentation (Security Guide seems like a likely candidate).
There was some discussion of this with the PTLs from the Project Meeting
from 2 weeks ago:


http://eavesdrop.openstack.org/meetings/project/2014/project.2014-04-08-21
.03.html

I'm not so worried myself about where this should be published, as that
doesn't matter if we don't have accurate and comprehensive information
collected in the first place.  My current focus is on the collection and
maintenance of this info on a project by project basis.  Keystone and
Heat have started, which is great!:

  https://wiki.openstack.org/wiki/Security/Icehouse/Keystone
  https://wiki.openstack.org/wiki/Security/Icehouse/Heat

If any other OSSG members are developers on any of the projects, it
would be great if you could help drive this effort within your project.

Thanks,
-NGK
 
 Cheers,
 -bryan
 
 
 
 On Tue, Apr 15, 2014 at 1:38 AM, Clark, Robert Graham
 robert.cl...@hp.com wrote:
 
 Does anyone have a documented run-down of changes that must be made
 to OpenStack configurations to allow them to comply with EAR
 requirements?
 http://www.bis.doc.gov/index.php/policy-guidance/encryption
 
 It seems like something we should consider putting into the security
 guide. I realise that most of the time it's just "don't use your own
 libraries, call to others, make algorithms configurable" etc., but
 it's a question I'm seeing more and more; the security guide's
 compliance section looks like a great place to have something about
 EAR.
 
 -Rob
 
 ___
 Openstack-security mailing list
 openstack-secur...@lists.openstack.org
 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-security
 
 
 
 
 ___
 Openstack-security mailing list
 openstack-secur...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-security


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev