Re: [Openstack] [openstack-dev] [cinder] Proposal for Ollie Leahy to join cinder-core

2013-07-17 Thread Vladimir Popovski
I’m not a core member anymore, but I completely agree with John. Company
affiliation should not be a reason to deny somebody’s promotion to the core
team.

If core members from a particular company try to influence the project’s
development in the wrong way, that will be a completely different story.



Regards,

-Vladimir





From: Openstack [mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net] On Behalf Of John Griffith
Sent: Wednesday, July 17, 2013 11:36 AM
To: Avishay Traeger
Cc: OpenStack Development Mailing List; Openstack (openstack@lists.launchpad.net) (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [openstack-dev] [cinder] Proposal for Ollie Leahy to join cinder-core







On Wed, Jul 17, 2013 at 12:19 PM, Avishay Traeger avis...@il.ibm.com
wrote:

-1

I'm sorry to do that, and it really has nothing to do with Ollie or his
work (which I appreciate very much).  The main reason is that right now
Cinder core has 8 members:
1. Avishay Traeger (IBM)
2. Duncan Thomas (HP)
3. Eric Harney (Red Hat)
4. Huang Zhiteng (Intel)
5. John Griffith (SolidFire)
6. Josh Durgin (Inktank)
7. Mike Perez (DreamHost)
8. Walt Boring (HP)

Adding another core team member from HP means that 1/3 of the core team is
from HP.  I believe that we should strive to have the core team be as
diverse as possible, with as many companies as possible represented (big
and small alike).  I think that's one of the keys to keeping a project
healthy and on the right track (nothing against HP - I would say the same
for IBM or any other company).  Further, we appointed two core members
fairly recently (Walt and Eric), and I don't feel that we have a shortage
at this time.

Again, nothing personal against Ollie, Duncan, HP, or anyone else.

Thanks,
Avishay



From:   Duncan Thomas duncan.tho...@gmail.com
To: Openstack (openstack@lists.launchpad.net)
(openstack@lists.launchpad.net)
openstack@lists.launchpad.net, OpenStack Development Mailing
List openstack-...@lists.openstack.org,
Date:   07/17/2013 06:18 PM
Subject:[openstack-dev] [cinder] Proposal for Ollie Leahy to join
cinder-core




Hi Everybody

I'd like to propose Ollie Leahy for cinder core. He has been doing
plenty of reviews and bug fixes, provided useful and tasteful negative
reviews (something often of far higher value than a +1) and has joined
in various design discussions.

Thanks

--
Duncan Thomas
Cinder Core, HP Cloud Services

___

OpenStack-dev mailing list
openstack-...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

Just to point out a few things here: first off, there is no guideline that
states company affiliation should have anything to do with the decision
to vote somebody in as core.  I have ABSOLUTELY NO concern about
representation of company affiliation whatsoever.



Quite frankly I wouldn't mind if there were 20 core members from HP; if
they're all actively engaged and participating then that's great.  I don't
think there has been ANY instance of folks exerting inappropriate
influence based on their affiliated interest, and if there ever were I think
it would be easy to identify and address.



As far as "we don't need more" goes, I don't agree with that either; if there
are folks contributing and doing the work then there's no reason not to add
them.  Cinder IMO does NOT have an excess of reviewers, by a very, very long
stretch.



The criteria here should be review consistency and quality, as well as
knowledge of the project, nothing more and nothing less.  If there's an
objection to the individual's participation or contribution, that's fine, but
company affiliation should have no bearing.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] wsgi code duplication

2012-04-24 Thread Vladimir Popovski
Hi Ghe,



I suppose it will be very useful. We are planning to create a new project
for nova-volumes (Cinder?) and I’m sure it will have the same duplicated classes.

The sooner we have a common OpenStack layer, the better.



Regards,

-Vladimir







From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net [mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net] On Behalf Of Ghe Rivero
Sent: Tuesday, April 24, 2012 7:28 AM
To: Mark McLoughlin
Cc: openstack
Subject: Re: [Openstack] wsgi code duplication





On Tue, Apr 24, 2012 at 2:40 PM, Mark McLoughlin mar...@redhat.com wrote:

Hi Ghe,


On Tue, 2012-04-24 at 12:15 +0200, Ghe Rivero wrote:
 Hi Everyone,
 I've been looking through the wsgi code, and I have found a lot of
 duplicated code between all the projects.

Thanks for looking into this. It sounds quite daunting.

I wonder, could we do this iteratively by extracting the code which is most
common into openstack-common, moving the projects over to that, and then
starting again on the next layer?

Cheers,
Mark.



I have plans to try to move as much as possible into openstack-common. I
will start with nova as a test bed and see what we get from there. My
future plans include db code and tests (in the case of quantum, the plugin
tests also have a lot of duplicated code).

I registered a blueprint for the wsgi issue:
https://blueprints.launchpad.net/openstack-common/+spec/wsgi-common
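
As a purely illustrative sketch of the direction (this is not the actual
openstack-common code; the module path and class names are made up), the kind
of shared piece being discussed could look roughly like this, with each
project importing it instead of carrying its own copy:

# openstack/common/wsgi.py -- hypothetical location for a shared WSGI base
import json

class Application(object):
    """Base WSGI application; subclasses override __call__."""

    @classmethod
    def factory(cls, global_config, **local_config):
        # Entry point referenced from a paste-deploy style pipeline config.
        return cls(**local_config)

    def __call__(self, environ, start_response):
        raise NotImplementedError('subclasses must implement __call__')

class JSONResponse(Application):
    """Tiny example subclass that returns a dict as a JSON response."""

    def __init__(self, data=None):
        self.data = data or {'status': 'ok'}

    def __call__(self, environ, start_response):
        body = json.dumps(self.data).encode('utf-8')
        start_response('200 OK', [('Content-Type', 'application/json'),
                                  ('Content-Length', str(len(body)))])
        return [body]

Each project would then subclass Application (and similar Router/Middleware
helpers) from the common package rather than keeping its own near-identical
copy.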



Ghe Rivero






-- 
Ghe Rivero
OpenStack Distribution Engineer
www.stackops.com | ghe.riv...@stackops.com
diego.parri...@stackops.com | +34 625 63 45 23 | skype:ghe.rivero
http://www.stackops.com/



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Openstack:Summit] Nova Volume Unconference sessions

2012-04-17 Thread Vladimir Popovski
+1

-Original Message-
From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net
[mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net]
On Behalf Of John Griffith
Sent: Monday, April 16, 2012 4:21 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [Openstack:Summit] Nova Volume Unconference sessions

All,
For those of you that attended the Volume sessions at the summit this
morning (and those who may have missed it but would like to attend), I'd
like to continue our discussion tomorrow afternoon using the Unconference
sessions.

We've reserved 14:00 - 15:00 for continuation of the Volume spin-out
discussion and 15:00 - 16:00 for Boot From Volume.  I'm working on a
specific set of goals/decisions to make during these time slots so that we
can begin moving forward, and will have the etherpads updated tomorrow.

Thanks,
John

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Removal of VSA Code

2012-03-15 Thread Vladimir Popovski
Hi Vish & All,

We would definitely prefer to leave the code in place, and we are ready to fix
any issues related to it.

We found that it is extremely hard to work with the latest trunk version -
our QA was constantly complaining about different regression scenarios, so
we decided to stick with the released stable Nova branches and perform merges
either after main releases or after major milestones.

The current VSA code in trunk is completely functional and working. We've
done tons of enhancements to it and added new functionality during the last
4-5 months. As I mentioned before, our plan was to merge it with the latest
relatively stable Essex code and to have this code in somewhere around the
beginning of Folsom.

At the same time we understand your concerns - without proper
documentation and related packages (like the VSA image and drive discovery
packages) it is very hard to use/test it.
We are ready to collaborate with whoever is interested in order to make
this functionality generic enough. From our side, we will try to fix it.

Please let us know what we should do in order to leave it in place.

Thanks,
-Vladimir
Zadara Storage


-Original Message-
From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net
[mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net]
On Behalf Of Vishvananda Ishaya
Sent: Wednesday, March 14, 2012 6:54 PM
To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: [Openstack] Removal of VSA Code

Apologies if you receive this email twice, I sent the first one from the
wrong address.

Hello Everyone,

Last week during the release meeting it was mentioned that the VSA code is
not working properly and we should either fix it or remove it.  I propose
to remove it for the following reasons:

* Lack of documentation -- unclear how to create a vsa image or how the
image should function
* Lack of support from vendors -- originally, the hope was other vendors
would use the vsa code to create their own virtual storage arrays
* Lack of functional testing -- this is the main reason the code has
bitrotted
* Lack of updates from original coders -- Zadara has mentioned a few times
that they were going to update the code but it has not happened
* Eases Transition to separate volume project -- This lowers the surface
area of the volume code and makes it easier to cleanly separate the volume
service from compute

As far as I can tell Zadara is maintaining a fork of the code for their
platform, so keeping the code in the public tree doesn't seem necessary.
I would be happy to see this code come back in Folsom if we get a stronger
commitment to keep it up-to-date, documented, and maintained, and there is
a reasonable location for it if the volume and compute code is separate.

If anyone disagrees, please respond ASAP.

Vish
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Removal of VSA Code

2012-03-15 Thread Vladimir Popovski
Hi Vish,

I was not aware of any issues with the VSA code in diablo/stable (or at least
no major issues).
Of course, we've done tons of changes to it lately and probably there were
some bugs that were fixed later, but I'm sure this particular code is
fully functional.

We will need to look closer at the planned separation of the volume and
compute code and understand what will be the best place for our VSA code.
Probably we can discuss it during the summit.

From my previous experience with merges, it will be way easier (at least
for us) to provide an upgrade for the code rather than to push
completely new code.

Thanks,
-Vladimir


-Original Message-
From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Thursday, March 15, 2012 8:57 AM
To: Vladimir Popovski
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Removal of VSA Code

Vladimir,

Are you sure the code in trunk is working? Investigation by one of the
community members showed that it was broken.  If you are planning on
merging your new code in, perhaps it is better to do it all at once?  FYI,
nova-volume is going to be split into its own service during folsom, and
based on the fact that vsa uses both compute and volume, it might be best
to actually move it into its own project as well.

This will keep separation of concerns and allow you to maintain it more
directly in the public. I still think the best approach for now is to pull
it and bring back the fully functional version in Folsom.  No one is using
it in its current state.

Vish

On Mar 15, 2012, at 8:48 AM, Vladimir Popovski wrote:

 Hi Vish & All,

 We would definitely prefer to leave the code in place, and we are ready to
 fix any issues related to it.

 We found that it is extremely hard to work with the latest trunk
 version - our QA was constantly complaining about different regression
 scenarios, so we decided to stick with the released stable Nova branches
 and perform merges either after main releases or after major milestones.

 The current VSA code in trunk is completely functional and working.
 We've done tons of enhancements to it and added new functionality
 during the last
 4-5 months. As I mentioned before, our plan was to merge it with the latest
 relatively stable Essex code and to have this code in somewhere around the
 beginning of Folsom.

 At the same time we understand your concerns - without proper
 documentation and related packages (like the VSA image and drive discovery
 packages) it is very hard to use/test it.
 We are ready to collaborate with whoever is interested in order to
 make this functionality generic enough. From our side, we will try to fix
 it.

 Please let us know what we should do in order to leave it in place.

 Thanks,
 -Vladimir
 Zadara Storage


 -Original Message-
 From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net
 [mailto:openstack-bounces+vladimir=zadarastorage.com@lists.launchpad.net]
 On Behalf Of Vishvananda Ishaya
 Sent: Wednesday, March 14, 2012 6:54 PM
 To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 Subject: [Openstack] Removal of VSA Code

 Apologies if you receive this email twice, I sent the first one from
 the wrong address.

 Hello Everyone,

 Last week during the release meeting it was mentioned that the VSA
 code is not working properly and we should either fix it or remove it.
 I propose to remove it for the following reasons:

 * Lack of documentation -- unclear how to create a vsa image or how
 the image should function
 * Lack of support from vendors -- originally, the hope was other
 vendors would use the vsa code to create their own virtual storage
 arrays
 * Lack of functional testing -- this is the main reason the code has
 bitrotted
 * Lack of updates from original coders -- Zadara has mentioned a few
 times that they were going to update the code but it has not happened
 * Eases Transition to separate volume project -- This lowers the
 surface area of the volume code and makes it easier to cleanly
 separate the volume service from compute

 As far as I can tell Zadara is maintaining a fork of the code for
 their platform, so keeping the code in the public tree doesn't seem
necessary.
 I would be happy to see this code come back in Folsom if we get a
 stronger commitment to keep it up-to-date, documented, and maintained,
 and there is a reasonable location for it if the volume and compute code
is separate.

 If anyone disagrees, please respond ASAP.

 Vish
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Removal of VSA Code

2012-03-15 Thread Vladimir Popovski
If there is anything broken in Essex due to this code, please let me know
and we will take a look and fix it.

The main reason why we would like to have it in place is to make developers
aware that somebody is relying on a particular functionality and/or a
particular function/module. Otherwise we will be in constant merge
conflict, and every time we will need to manually review almost any change
that is applied to trunk and check whether it affects/breaks it in any way.

Thanks,
-Vladimir


-Original Message-
From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com]
Sent: Thursday, March 15, 2012 9:53 AM
To: Vladimir Popovski
Cc: Vishvananda Ishaya; openstack@lists.launchpad.net
Subject: Re: [Openstack] Removal of VSA Code

On Thu, 2012-03-15 at 09:02 -0700, Vladimir Popovski wrote:
 I was not aware of any issue with VSA code in diablo/stable (or at
 least major issues).

I'll point out that the code we're concerned about is the code in trunk, not
the code in diablo/stable.  There have been substantial changes to the code
since diablo was released, which has resulted in bitrot in the VSA code and
the attendant breakages to which Vish is referring.
--
Kevin L. Mitchell kevin.mitch...@rackspace.com

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Any block storage folks interested in getting together?

2012-02-14 Thread Vladimir Popovski
Folks,



We’ve used the IRC channel #openstack-volumes for that and had a mailing list
set up for the team. Check it out at https://launchpad.net/~openstack-volume



Unfortunately, there has not been much going on lately. Primarily I suppose
you can blame me for this, as our team was completely overwhelmed by our
production/beta launch and we had no chance to contribute.



It will be great to get together and I’m looking forward to it.



Regards,

-Vladimir

Zadara Storage





From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net [mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net] On Behalf Of Oleg Gelbukh
Sent: Monday, February 13, 2012 9:24 PM
To: John Griffith
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Any block storage folks interested in getting together?



Hello,



We are interested in participating. Looking forward to talking to all the
Nova block storage developers.



--

Best regards,

Oleg

On Tue, Feb 14, 2012 at 2:31 AM, John Griffith john.griff...@solidfire.com
wrote:

Hi Bob,
Just pop into IRC: #openstack-meeting

John


On Mon, Feb 13, 2012 at 3:17 PM, Bob Van Zant b...@veznat.com wrote:
 I'm interested in joining in. I've never joined one of the calls before,
 where do I get more information on how to join?


 On Mon, Feb 13, 2012 at 12:06 PM, Diego Parrilla
 diego.parrilla.santama...@gmail.com wrote:

 Sounds great. We will try to join the meeting.

 Enviado desde mi iPad

 El 13/02/2012, a las 19:06, John Griffith john.griff...@solidfire.com
 escribió:

  There's been a lot of new work going on specific to Nova Volumes the
  past month or so.  I was thinking that it's been a long time since
  we've had a Nova-Volume team meeting and thought I'd see if there was
  any interest in trying to get together next week?  I'm open to
  suggestions regarding time slots but thought I'd propose our old slot,
  Thursday Feb 23, 18:00 - 19:00 UTC.
 
  Here's a proposed agenda:
 
 * Quick summary of new blueprints you have submitted and completed
  (or targeting for completion) in Essex
 * Any place folks might need some help with items they've targeted
  for Essex (see if we have any volunteers to help out if needed)
 * Any updates regarding BSaaS
 * Gauge interest in resurrecting a standing meeting, perhaps every 2
  weeks?
 
  If you have specific items that you'd be interested in
  sharing/discussing let me know.
 
  Thanks,
  John
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Duplicate ICMP due to public interface bridge being placed in promiscuous mode

2011-10-14 Thread Vladimir Popovski
Vish,



We are not sure whether this particular issue causes any problems, but we
just wanted to understand what is wrong.



To provide a bit more data about this particular environment:



- Dedicated servers config at RAX
- FlatDHCP mode with primary NIC (eth0) bridged and used for fixed_range as well
- Secondary NIC (eth1) used for RabbitMQ/Glance/etc.
- Nova-network running on one node only



If we disable promiscuous mode, everything works fine – no DUPs. But in this
case the VMs running on other nodes are unable to reach the outside (which is
expected behavior in this case).
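
As an illustration of the experiment described above, a small hypothetical
helper along these lines (not part of the original report) could toggle
promiscuous mode on the bridge and count duplicate ping replies; the interface
name and target address are taken from the configuration below, and it needs
to run as root:

import subprocess

def count_dups(target, count=10):
    # Count the "DUP!" lines that Linux ping prints when duplicate replies arrive.
    out = subprocess.check_output(['ping', '-c', str(count), target],
                                  universal_newlines=True)
    return sum(1 for line in out.splitlines() if 'DUP!' in line)

def set_promisc(iface, enabled):
    subprocess.check_call(['ip', 'link', 'set', iface,
                           'promisc', 'on' if enabled else 'off'])

if __name__ == '__main__':
    for enabled in (True, False):
        set_promisc('br100', enabled)
        print('promisc %s -> %d duplicate replies'
              % ('on' if enabled else 'off', count_dups('172.31.252.152')))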



Here are some details of this config:



Nova.conf:

--s3_host=10.240.107.128
--rabbit_host=10.240.107.128
--ec2_host=10.240.107.128
--glance_api_servers=10.240.107.128:9292
--sql_connection=mysql://root:123@10.240.107.128/nova

--routing_source_ip=172.31.252.152
--fixed_range=172.31.252.0/22

--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=br100

--multi_host=False
…



root@web1:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 84:2b:2b:5a:49:a0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::862b:2bff:fe5a:49a0/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 84:2b:2b:5a:49:a2 brd ff:ff:ff:ff:ff:ff
    inet 10.240.107.128/24 brd 10.240.107.255 scope global eth1
    inet6 fe80::862b:2bff:fe5a:49a2/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 84:2b:2b:5a:49:a4 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 84:2b:2b:5a:49:a6 brd ff:ff:ff:ff:ff:ff
6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:bb:dd:bd:e6:ed brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
8: br100: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 84:2b:2b:5a:49:a0 brd ff:ff:ff:ff:ff:ff
    inet 172.31.252.152/22 brd 172.31.255.255 scope global br100
    inet 172.31.252.1/22 brd 172.31.255.255 scope global secondary br100
    inet6 fe80::90db:ceff:fe33:450c/64 scope link
       valid_lft forever preferred_lft forever
9: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:16:3e:0e:11:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe0e:116e/64 scope link
       valid_lft forever preferred_lft forever



root@web1:/# brctl show
bridge name     bridge id           STP enabled     interfaces
br100           8000.842b2b5a49a0   no              eth0
                                                    vnet0
virbr0          8000.               yes




root@web1:/# ifconfig -a
br100     Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a0
          inet addr:172.31.252.152  Bcast:172.31.255.255  Mask:255.255.252.0
          inet6 addr: fe80::90db:ceff:fe33:450c/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:6909 errors:0 dropped:621 overruns:0 frame:0
          TX packets:2634 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:533738 (533.7 KB)  TX bytes:419886 (419.8 KB)

eth0      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a0
          inet6 addr: fe80::862b:2bff:fe5a:49a0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6705 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2667 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:598182 (598.1 KB)  TX bytes:447933 (447.9 KB)
          Interrupt:36 Memory:d600-d6012800

eth1      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a2
          inet addr:10.240.107.128  Bcast:10.240.107.255  Mask:255.255.255.0
          inet6 addr: fe80::862b:2bff:fe5a:49a2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:557 errors:0 dropped:0 overruns:0 frame:0
          TX packets:491 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:214973 (214.9 KB)  TX bytes:267663 (267.6 KB)
          Interrupt:48 Memory:d800-d8012800

eth2      Link encap:Ethernet  HWaddr 84:2b:2b:5a:49:a4
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 

[Openstack] Working group for nova-volume changes

2011-10-11 Thread Vladimir Popovski
Hi All,



As discussed during the Design Summit, there was a desire to create a
Working Group for Nova Volume changes.



If you would like to participate, please feel free to add yourself to:

https://launchpad.net/~openstack-volume



and/or its mailing list:

openstack-vol...@lists.launchpad.net



Regards,

-Vladimir
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] volume_allocate_iscsi_target in db/sqlalchemy/api.py hits ResourceClosedError

2011-09-23 Thread Vladimir Popovski
Alex et al,



I’ve succeeded in reproducing this problem by issuing multiple concurrent
euca-allocate-address and euca-release-address calls.



Here is one of the traces from nova-api.log:



2011-09-23 22:07:24,355 ERROR nova.api [3a5ae61d-4e6a-4701-9180-ef6c49a76c61
diego nubeblog] Unexpected error raised: ResourceClosedError This result
object does not return rows. It has been closed automatically.

[u'Traceback (most recent call last):\n', u'  File
/mnt/share/s-cloud-Vlad/nova/rpc/impl_kombu.py, line 620, in
_process_data\nrval = node_func(context=ctxt, **node_args)\n', u'  File
/mnt/share/s-cloud-Vlad/nova/network/manager.py, line 279, in
allocate_floating_ip\nproject_id)\n', u'  File
/mnt/share/s-cloud-Vlad/nova/db/api.py, line 232, in
floating_ip_allocate_address\nreturn
IMPL.floating_ip_allocate_address(context, project_id)\n', u'  File
/mnt/share/s-cloud-Vlad/nova/db/sqlalchemy/api.py, line 119, in
wrapper\nreturn f(*args, **kwargs)\n', u'  File
/mnt/share/s-cloud-Vlad/nova/db/sqlalchemy/api.py, line 500, in
floating_ip_allocate_address\nwith_lockmode(\'update\').\\\n', u'  File
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 1535, in
first\nret = list(self[0:1])\n', u'  File
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 1444, in
__getitem__\nreturn list(res)\n', u'  File
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 1708, in
instances\nfetch = cursor.fetchall()\n', u'  File
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 2493, in
fetchall\nl = self.process_rows(self._fetchall_impl())\n', u'  File
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 2462, in
_fetchall_impl\nself._non_result()\n', u'  File
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 2467, in
_non_result\nThis result object does not return rows. \n',
u'ResourceClosedError: This result object does not return rows. It has been
closed automatically.\n']

(nova.api): TRACE: Traceback (most recent call last):
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/api/ec2/__init__.py", line 398, in __call__
(nova.api): TRACE:     result = api_request.invoke(context)
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/api/ec2/apirequest.py", line 78, in invoke
(nova.api): TRACE:     result = method(context, **args)
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/api/ec2/cloud.py", line 1302, in allocate_address
(nova.api): TRACE:     public_ip = self.network_api.allocate_floating_ip(context)
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/network/api.py", line 61, in allocate_floating_ip
(nova.api): TRACE:     'args': {'project_id': context.project_id}})
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/rpc/__init__.py", line 45, in call
(nova.api): TRACE:     return get_impl().call(context, topic, msg)
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/rpc/impl_kombu.py", line 739, in call
(nova.api): TRACE:     rv = list(rv)
(nova.api): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/rpc/impl_kombu.py", line 703, in __iter__
(nova.api): TRACE:     raise result
(nova.api): TRACE: RemoteError: ResourceClosedError This result object does not return rows. It has been closed automatically.





The problem seems to be related to the .with_lockmode('update') functionality
of sqlalchemy. It seems to raise a ResourceClosedError exception if several
threads are trying to perform the same operation on the DB, instead of making
them wait on it.
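
For reference, here is a standalone sketch (not the nova code from the
traceback; the model and the retry policy are simplified assumptions) of the
old-style SELECT ... FOR UPDATE pattern in question, wrapped in a naive retry
so a concurrent failure does not bubble up to the API:

from sqlalchemy import Column, Integer, String
from sqlalchemy.exc import ResourceClosedError
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class FloatingIp(Base):
    # Stand-in model; the real floating_ips table has more columns.
    __tablename__ = 'floating_ips'
    id = Column(Integer, primary_key=True)
    address = Column(String(39))
    project_id = Column(String(255), nullable=True)

def allocate_floating_ip(session, project_id, retries=3):
    """Grab a free address with SELECT ... FOR UPDATE, retrying on the error."""
    for _ in range(retries):
        try:
            ip = (session.query(FloatingIp)
                         .filter_by(project_id=None)
                         .with_lockmode('update')   # the call implicated in the trace
                         .first())
            if ip is None:
                raise RuntimeError('no free floating IPs')
            ip.project_id = project_id
            session.commit()
            return ip.address
        except ResourceClosedError:
            session.rollback()   # back off and retry instead of failing the request
    raise RuntimeError('could not allocate after %d attempts' % retries)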



Regards,

-Vladimir





From: Alex Lyakas [mailto:a...@zadarastorage.com]
Sent: Wednesday, September 21, 2011 5:27 AM
To: openstack@lists.launchpad.net
Cc: Yair Hershko; Vladimir Popovsky
Subject: volume_allocate_iscsi_target in db/sqlalchemy/api.py hits ResourceClosedError



Greetings everybody,

in one of our tests we occasionally hit ResourceClosedError when executing the
volume_allocate_iscsi_target:



2011-09-20 04:25:20,515 DEBUG nova.volume.driver [-]
volume_allocate_iscsi_target failed for volume volume-114c from
(pid=23701) create_export /mnt/share/s-cloud-Vlad/nova/volume/driver.py:959



2011-09-20 04:25:20,515 WARNING nova.volume.manager [-] Volume creation
failed with traceback: traceback object at 0x3c55878



2011-09-20 04:25:20,643 ERROR nova.rpc [-] Exception during message handling

(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/rpc/impl_kombu.py", line 620, in _process_data
(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File "/mnt/share/s-cloud-Vlad/nova/volume/manager.py", line 140, in create_volume
(nova.rpc): TRACE:     raise exc_info
(nova.rpc): TRACE: ResourceClosedError
(nova.rpc): TRACE:



In one of the tests it happened at the same time on three different compute
nodes.

Can anybody please advise how to investigate the root cause of this problem?
The MySQL server logs do not show anything useful at the time of 

Re: [Openstack] Adding Volume Types to Nova Volume

2011-07-18 Thread Vladimir Popovski
Hi Chuck,

This idea aligns very well with some functionality we intend to
propose as part of our Virtual Storage Array feature.

I suppose in general the goal here is to be able to create a volume from
storage of a specified class. For me it means that:

1. The user should be able to select a storage type from some list of allowed
drive types
2. Scheduler-related:
- The scheduler should find the appropriate storage node
- The nova-volume service should collect node capabilities and report
them to the schedulers
3. The storage node (SN) running the nova-volume service should be able to
select an appropriate driver for managing this request

In our VSA proposal we've introduced a new table of drive types with all the
methods for controlling it. These types are currently used during the VSA
creation process: the user specifies what storage should be allocated for VSA
instances, and such a request is automatically remapped through the scheduler
to the appropriate SN nodes. In the first version of our APIs we extended the
os-volumes API extensions and allowed drive_type selection there, but later
reverted it and created our own set of volume APIs.

As you know, the current nova-volume driver mechanism doesn't allow the
coexistence of drivers from several vendors/types, especially if you
would like to support standard volumes and new enhanced ones at the
same time.
The same is true for heterogeneous storage connected to the cloud and managed
by different drivers. There are multiple possible ways of supporting multiple
types of volumes, including creating derived classes and calling the parent
class for foreign volumes, but we've decided to create a small wrapper
inside the nova-volume manager for picking the correct driver (see the
sketch below). This approach has not been generalized yet, but it might be a
good starting point.

We could easily extend the table of drive types and add something like a
driver name there or ... just add an additional drive_type field to the
volume_api.create API.
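
To make the wrapper idea concrete, here is a purely illustrative sketch (not
the published VSA code; the driver classes and the type map are invented for
the example) of a manager that dispatches volume operations to a per-type
driver:

class StandardISCSIDriver(object):
    def create_volume(self, volume):
        print('iSCSI: creating %s' % volume['name'])

class HighIopsDriver(object):
    def create_volume(self, volume):
        print('high-iops backend: creating %s' % volume['name'])

class MultiDriverVolumeManager(object):
    """Picks a backend driver per volume based on its drive_type field."""

    def __init__(self):
        self.drivers = {
            'standard': StandardISCSIDriver(),
            'high_iops': HighIopsDriver(),
        }
        self.default_driver = self.drivers['standard']

    def create_volume(self, volume):
        driver = self.drivers.get(volume.get('drive_type'), self.default_driver)
        return driver.create_volume(volume)

if __name__ == '__main__':
    manager = MultiDriverVolumeManager()
    manager.create_volume({'name': 'vol-1', 'drive_type': 'high_iops'})
    manager.create_volume({'name': 'vol-2'})   # falls back to the default driver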

We recently published our code for VSA proposal here -
lp:~vladimir.p/nova/vsa

It also includes several schedulers that look at the drive_type field in the
volume record and revert to the base class (SimpleScheduler) for regular
volumes. These schedulers are derived from SimpleScheduler, but use some
aspects of the zone-aware scheduler, including capabilities reporting.

We also have our own set of packages for recognizing different HW
capabilities and drive types, but this code may differ from vendor to
vendor.

Regards,
-Vladimir


-Original Message-
From: openstack-bounces+vladimir=zadarastorage@lists.launchpad.net
[mailto:openstack-bounces+vladimir=zadarastorage@lists.launchpad.net]
On Behalf Of Chuck Thier
Sent: Monday, July 18, 2011 4:06 PM
To: Openstack
Subject: [Openstack] Adding Volume Types to Nova Volume

There are two concepts that I would like Nova Volumes to support:

1.  Allow different storage classes within a storage driver.  For example,
in our case we will have some nodes with high iops capabilities and other
nodes for cheaper/larger  volumes.

2.  Allow for different storage backends to be used and specified by the
user.  For example, you might want to use both the Lunr volume backend,
and one of the SAN backends.

I think having the idea of a volume type when creating a volume would
support both of these features.  I have started a blueprint and spec
below, and would like to solicit feedback.

Blueprint: https://blueprints.launchpad.net/nova/+spec/volume-type
Spec: http://etherpad.openstack.org/volume-type

Please let me know what you think, and if you have any questions.

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Injecting user data into instances

2011-06-09 Thread Vladimir Popovski
Hi All,



Thanks for the answers. Injecting files into a running instance is a very
interesting topic, but it is not our goal (at least for now).

I was just curious when I saw these APIs, and guessed that they are
placeholders (as Ed confirmed).



However, what we are missing is the way the data provided in the
user_data arg is injected.



Could you please point us to the code where it is actually implemented?



It seems like, similarly to injecting keys and networking info, there should
be code that places user_data into the nbd-mounted device.

But I don't see it.



Another alternative might be to attach a new device (similarly to attaching
volumes), in which case an autostart script would be fired after mounting this
device … but we can't find this code either.



Thanks,

-Vladimir









From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: Wednesday, June 08, 2011 9:46 PM
To: Vladimir Popovski
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Injecting user data into instances



User data is provided to the vm through the ec2 metadata url.  It is not
touched by the hypervisor at all.  It does work for regular user data and
for cloudpipe.  The smoketests also verify that user data is working in the
SecurityGroupTests by passing in a proxy script that runs on boot.
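
For reference, a guest can read that user data back with something as simple
as the following sketch (169.254.169.254 is the standard metadata address;
the helper itself is just an illustration, not nova code):

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

METADATA_URL = 'http://169.254.169.254/latest/user-data'

def fetch_user_data(timeout=5):
    """Return the raw user data the instance was launched with, or None."""
    try:
        return urlopen(METADATA_URL, timeout=timeout).read()
    except IOError:
        return None

if __name__ == '__main__':
    print(fetch_user_data())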



Vish



On Jun 8, 2011, at 5:36 PM, Vladimir Popovski wrote:



Folks,



Has anybody tried to inject user data into instances? Or has anybody
actually tried to use the Cloudpipe / VPN functionality?



It seems like there is some code missing (at least at the libvirt/connection
level).



If I’m not missing anything, EC2 RunInstances takes user_data from the kwargs
arguments and provides it to compute_api, which stores it in base_options /
the instances table.

Cloudpipe’s launch_vpn_instance also goes through the same path. However,
there is no parser of the user_data field at the compute manager / driver level.



For example, if we will look at spawn implementation in libvirt:



it calls _create_image(instance, …

, who calls

disk.inject_data(basepath('disk'), key, net, partition=target_partition,
nbd=FLAGS.use_cow_images)

where the image is mounted as an nbd device and the key/net information
is inserted by



inject_data_into_fs(tmpdir, key, net, utils.execute)

_inject_key_into_fs

_inject_net_into_fs



It seems reasonable to pass user data to disk.inject_data and
inject_data_into_fs and inject it into FS as well, but there is no such code
…



Or am I missing anything?





Another interesting situation is with inject_file compute APIs  …



at the API level there are not even file/contents fields, only

def inject_file(self, context, instance_id):

but they exist at the compute.manager level:

def inject_file(self, context, instance_id, path, file_contents):





Thanks,

-Vladimir

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Injecting user data into instances

2011-06-08 Thread Vladimir Popovski
Folks,



Has anybody tried to inject user data into instances? Or has anybody
actually tried to use the Cloudpipe / VPN functionality?



It seems like there is some code missing (at least at the libvirt/connection
level).



If I’m not missing anything, EC2 RunInstances takes user_data from the kwargs
arguments and provides it to compute_api, which stores it in base_options /
the instances table.

Cloudpipe’s launch_vpn_instance also goes through the same path. However,
there is no parser of the user_data field at the compute manager / driver level.



For example, if we will look at spawn implementation in libvirt:



it calls _create_image(instance, …

, who calls

disk.inject_data(basepath('disk'), key, net, partition=target_partition,
nbd=FLAGS.use_cow_images)

where the image is mounted as an nbd device and the key/net information
is inserted by



inject_data_into_fs(tmpdir, key, net, utils.execute)

_inject_key_into_fs

_inject_net_into_fs



It seems reasonable to pass user data to disk.inject_data and
inject_data_into_fs and inject it into FS as well, but there is no such code
…



Or am I missing anything?
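
To illustrate the missing piece being described above, a helper along these
lines could sit next to _inject_key_into_fs / _inject_net_into_fs. This is
only a hypothetical sketch (the target path and the idea of dropping the data
as a file for a boot script to pick up are assumptions, not existing nova
code):

import os

def _inject_user_data_into_fs(user_data, fs_root):
    """Place user_data inside the mounted image so the guest can pick it up at boot."""
    target_dir = os.path.join(fs_root, 'var', 'lib', 'cloud')
    if not os.path.isdir(target_dir):
        os.makedirs(target_dir)
    path = os.path.join(target_dir, 'user-data.txt')
    with open(path, 'w') as f:
        f.write(user_data)
    os.chmod(path, 0o700)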





Another interesting situation is with inject_file compute APIs  …



at the API level there are not even file/contents fields, only

def inject_file(self, context, instance_id):

but they exist at the compute.manager level:

def inject_file(self, context, instance_id, path, file_contents):





Thanks,

-Vladimir
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp