Re: [openstack-dev] [cinder] PTL Non-Candidacy

2015-09-15 Thread Nikesh Kumar Mahalka
Thanks Mike,
It was really a good experience working with you during Kilo and Liberty.



Regards
Nikesh

On Tue, Sep 15, 2015 at 1:21 PM, Silvan Kaiser  wrote:

> Thanks Mike!
> That was really demanding work!
>
> 2015-09-15 9:27 GMT+02:00 陈莹 :
>
>> Thanks Mike. Thank you for doing a great job.
>>
>>
>> > From: sxmatch1...@gmail.com
>> > Date: Tue, 15 Sep 2015 10:05:22 +0800
>> > To: openstack-dev@lists.openstack.org
>> > Subject: Re: [openstack-dev] [cinder] PTL Non-Candidacy
>>
>> >
>> > Thanks, Mike! Your help was very important in getting me started in
>> > Cinder, and we did a lot of work to be proud of under your leadership.
>> >
>> > 2015-09-15 6:36 GMT+08:00 John Griffith :
>> > >
>> > >
>> > > On Mon, Sep 14, 2015 at 11:02 AM, Sean McGinnis <
>> sean.mcgin...@gmx.com>
>> > > wrote:
>> > >>
>> > >> On Mon, Sep 14, 2015 at 09:15:44AM -0700, Mike Perez wrote:
>> > >> > Hello all,
>> > >> >
>> > >> > I will not be running for Cinder PTL this next cycle. Each cycle I
>> ran
>> > >> > was for a reason [1][2], and the Cinder team should feel proud of
>> our
>> > >> > accomplishments:
>> > >>
>> > >> Thanks for a couple of awesome cycles Mike!
>> > >>
>> > >>
>> __
>> > >> OpenStack Development Mailing List (not for usage questions)
>> > >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > > You did a fantastic job Mike, thank you very much for the hard work
>> and
>> > > dedication.
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> >
>> > --
>> > Best Wishes For You!
>> >
>> >
>>
>>
>>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
>
>
>


Re: [openstack-dev] [Cinder] PTL Candidacy

2015-09-15 Thread Nikesh Kumar Mahalka
Thanks Sean, Vote +1.

On Tue, Sep 15, 2015 at 8:36 AM, hao wang  wrote:

> Thanks Sean, Vote +1.
>
> 2015-09-14 22:49 GMT+08:00 Sean McGinnis :
> > Hello everyone,
> >
> > I'm announcing my candidacy for Cinder PTL for the Mitaka release.
> >
> > The Cinder team has made great progress. We've not only grown the
> > number of supported backend drivers, but we've made significant
> > improvements to the core code and raised the quality of existing
> > and incoming code contributions. While there are still many things
> > that need more polish, we are headed in the right direction and
> > block storage is a strong, stable component to many OpenStack clouds.
> >
> > Mike and John have provided the leadership to get the project where
> > it is today. I would like to keep that momentum going.
> >
> > I've spent over a decade finding new and interesting ways to create
> > and delete volumes. I also work across many different product teams
> > and have had a lot of experience collaborating with groups to find
> > a balance between the work being done to best benefit all involved.
> >
> > I think I can use this experience to foster collaboration both within
> > the Cinder team as well as between Cinder and other related projects
> > that interact with storage services.
> >
> > Some topics I would like to see focused on for the Mitaka release
> > would be:
> >
> >  * Complete work of making the Cinder code Python3 compatible.
> >  * Complete conversion to objects.
> >  * Sort out object inheritance and appropriate use of ABC.
> >  * Continued stabilization of third party CI.
> >  * Make sure there is a good core feature set regardless of backend type.
> >  * Reevaluate our deadlines to make sure core feature work gets enough
> >time and allows drivers to implement support.
> >
> > While there are some things I think we need to do to move the project
> > forward, I am mostly open to the needs of the community as a whole
> > and making sure that what we are doing is benefiting OpenStack and
> > making it a simpler, easy to use, and ubiquitous platform for the
> > cloud.
> >
> > Thank you for your consideration!
> >
> > Sean McGinnis (smcginnis)
> >
> >
>
>
>
> --
> Best Wishes For You!
>
>


Re: [openstack-dev] [cinder] Vedams' DotHill, Lenovo and HP MSA CIs are Unstable

2015-07-20 Thread Nikesh Kumar Mahalka
Hi Mike,
We have moved all CIs back onto Cinder patches after testing them in the
sandbox.

We will stay in touch with the infra and Cinder teams to make the CIs
more robust and spam-free.

Regards
Nikesh

On Fri, Jul 17, 2015 at 9:45 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 Hi Mike,
 We are treating this as a high priority.
 We have moved all CIs to the sandbox to make sure they do not generate
 spam.

 We will move these CIs back to Cinder patches ASAP and will update the
 third-party CI wiki page.

 Please let me know if you have any questions.

 Regards
 Nikesh

 On Wed, Jul 15, 2015 at 4:21 AM, Mike Perez thin...@gmail.com wrote:

 These three CIs are unstable and the drivers are in danger of being
 removed
 from the Liberty release since the maintainer has not communicated any
 maintenance happening.

 http://paste.openstack.org/show/375584/
 http://paste.openstack.org/show/375585/
 http://paste.openstack.org/show/375586/

 I will be requesting the HP MSA CI to be disabled in the third party list,
 since it has failed the last 60 runs in the last 5 days.

 This is unacceptable and we need Vedams to be more on top of this.

 --
 Mike Perez





Re: [openstack-dev] [cinder] Vedams' DotHill, Lenovo and HP MSA CIs are Unstable

2015-07-17 Thread Nikesh Kumar Mahalka
Hi Mike,
We are treating this as a high priority.
We have moved all CIs to the sandbox to make sure they do not generate
spam.

We will move these CIs back to Cinder patches ASAP and will update the
third-party CI wiki page.

Please let me know if you have any questions.

Regards
Nikesh

On Wed, Jul 15, 2015 at 4:21 AM, Mike Perez thin...@gmail.com wrote:

 These three CIs are unstable and the drivers are in danger of being removed
 from the Liberty release since the maintainer has not communicated any
 maintenance happening.

 http://paste.openstack.org/show/375584/
 http://paste.openstack.org/show/375585/
 http://paste.openstack.org/show/375586/

 I will be requesting the HP MSA CI to be disabled in the third party list,
 since it has failed the last 60 runs in the last 5 days.

 This is unacceptable and we need Vedams to be more on top of this.

 --
 Mike Perez



Re: [openstack-dev] [cinder] Rebranded Volume Drivers

2015-06-11 Thread Nikesh Kumar Mahalka
Hi,
We now have a working CI on the patches below:
https://review.openstack.org/#/c/187707/
https://review.openstack.org/#/c/187853/

@jgriffith: we will certainly start giving back to the community. Thanks
for pointing this out.

Regards
Nikesh

On Thu, Jun 4, 2015 at 1:46 PM, Alex Meade mr.alex.me...@gmail.com wrote:

 Agreed, I'd also like to mention that rebranded arrays may differ slightly
 in functionality as well so the CIs would need to run against a physical
 rebranded device. These differences also justify the need for letting
 rebranded drivers in.

 -Alex

 On Thu, Jun 4, 2015 at 4:41 PM, Mike Perez thin...@gmail.com wrote:

 Sounds like the community would like CIs regardless, and I agree.

 Just because the driver code works for one backend solution, doesn't
 mean it's going to work with another.

 Let's continue with code reviews on these patches only if they have a
 CI reporting, unless someone has a compelling reason we should not let
 any rebranded drivers in.

 --
 Mike Perez


 On Wed, Jun 3, 2015 at 10:32 AM, Mike Perez thin...@gmail.com wrote:
  There are a couple of cases [1][2] I'm seeing where new Cinder volume
  drivers for Liberty are rebranding other volume drivers. This involves
  inheriting off another volume driver's class(es) and providing some
  config options to set the backend name, etc.
 
  Two problems:
 
  1) There is a thought of no CI [3] is needed, since you're using
  another vendor's driver code which does have a CI.
 
  2) IMO another way of satisfying a check mark of being OpenStack
  supported and disappearing from the community.
 
  What gain does OpenStack get from these kind of drivers?
 
  Discuss.
 
  [1] - https://review.openstack.org/#/c/187853/
  [2] - https://review.openstack.org/#/c/187707/4
  [3] - https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
 
  --
  Mike Perez








Re: [openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types? [cinder]

2015-03-01 Thread Nikesh Kumar Mahalka
Thanks,
I think if this information were added to the OpenStack documentation
under the volume migration topic, it would help newcomers to Cinder.

Regards
Nikesh

On Sun, Mar 1, 2015 at 10:12 PM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Migrate - move between backends of the same volume type
 Retype - move between types. Will migrate the volume for you if necessary

 On 1 March 2015 at 09:40, Avishay Traeger avis...@stratoscale.com wrote:

 Nikesh,
 The case you are trying is supposed to fail.  You have a volume of type
 dothill_realstor1 which is defined to say this volume must be on backend
 DotHill_RealStor1.  This is a requirement that you defined for that
 volume.  Now you want to migrate it to realstor2, which is a violation of
 the requirement that you specified.  To migrate it, you should change the
 volume type (retype), which changes the requirement.
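
The rule described above — a volume type that pins a backend makes a plain
migration to a different backend invalid, while retype changes the
requirement itself — can be sketched in Python (illustrative names and
structures only, not Cinder's actual code):

```python
# Illustrative sketch only -- not Cinder's real implementation. It models
# the rule described above: a volume type that pins a backend makes a
# plain migrate to a different backend invalid; retype changes the
# requirement (and may trigger a migration as a side effect).

VOLUME_TYPES = {
    "dothill_realstor1": {"volume_backend_name": "DotHill_RealStor1"},
    "dothill_realstor2": {"volume_backend_name": "DotHill_RealStor2"},
}

def can_migrate(volume_type, dest_backend):
    """Plain migration must respect the type's pinned backend."""
    if volume_type is None:
        return True  # untyped volumes may be migrated anywhere
    pinned = VOLUME_TYPES[volume_type]["volume_backend_name"]
    return pinned == dest_backend

def retype(volume, new_type):
    """Retype changes the requirement itself."""
    return dict(volume, type=new_type)

vol = {"name": "v1", "type": "dothill_realstor1"}
print(can_migrate(vol["type"], "DotHill_RealStor2"))  # False: type pins backend 1
vol = retype(vol, "dothill_realstor2")
print(can_migrate(vol["type"], "DotHill_RealStor2"))  # True after retype
```

This is why the migration of the untyped volume succeeded while the typed
one was rejected.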

 Thanks,
 Avishay

 On Sat, Feb 28, 2015 at 11:02 AM, Nikesh Kumar Mahalka 
 nikeshmaha...@vedams.com wrote:

 I tried the link below for volume migration with my driver, and made
 similar attempts with LVM. Every document available in OpenStack on
 volume migration shows migration of a volume whose volume type is None.

 I added a host-assisted volume migration function to my Cinder driver.
 When I try to migrate a volume that has no volume type, my migration
 function is called and the migration succeeds.

 But when I try to migrate a volume that has a volume type, my migration
 function is not called.


 http://paste.openstack.org/show/183392/
 http://paste.openstack.org/show/183405/



 On Tue, Jan 20, 2015 at 12:31 AM, Nikesh Kumar Mahalka
 nikeshmaha...@vedams.com wrote:
  Does cinder retype (v2) work for LVM?
  How do I use cinder retype?
 
  I tried migrating a volume from one volume-type LVM backend to
  another, but it failed. How can I achieve this?
 
  Similarly, I am writing a Cinder volume driver for my array and want
  to migrate volumes from one volume type to another across my array's
  backends, so I want to know how to achieve this in my driver.
 
 
 
  Regards
  Nikesh






 --
 *Avishay Traeger*
 *Storage RD*

 Mobile: +972 54 447 1475
 E-mail: avis...@stratoscale.com



 Web http://www.stratoscale.com/ | Blog
 http://www.stratoscale.com/blog/ | Twitter
 https://twitter.com/Stratoscale | Google+
 https://plus.google.com/u/1/b/108421603458396133912/108421603458396133912/posts
  | Linkedin https://www.linkedin.com/company/stratoscale





 --
 Duncan Thomas





[openstack-dev] volume's current host in retype [cinder]

2015-03-01 Thread Nikesh Kumar Mahalka
Hi,
I was trying to understand the patch below:
https://review.openstack.org/#/c/44881/24

What does a volume's current host mean in this patch?
I would like to understand it with some examples.
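
For context, a volume's current host refers to the volume's host field,
which typically has the form host@backend (with an optional #pool suffix
in later releases). The helper below is an illustrative assumption for this
thread, not code from the patch:

```python
# Illustrative helper (not from the patch): split a Cinder-style host
# string "host@backend#pool" into its parts. The "#pool" suffix is
# optional; older releases used just "host@backend".

def parse_host(host):
    pool = None
    if "#" in host:
        host, pool = host.split("#", 1)
    backend = None
    if "@" in host:
        host, backend = host.split("@", 1)
    return {"host": host, "backend": backend, "pool": pool}

print(parse_host("devstack@lvmdriver-1#lvmdriver-1"))
# {'host': 'devstack', 'backend': 'lvmdriver-1', 'pool': 'lvmdriver-1'}
```

Retype compares this current host against the candidate destination to
decide whether a migration is needed.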






Regards
Nikesh



Re: [openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types? [cinder]

2015-02-28 Thread Nikesh Kumar Mahalka
I tried the link below for volume migration with my driver, and made
similar attempts with LVM. Every document available in OpenStack on
volume migration shows migration of a volume whose volume type is None.

I added a host-assisted volume migration function to my Cinder driver.
When I try to migrate a volume that has no volume type, my migration
function is called and the migration succeeds.

But when I try to migrate a volume that has a volume type, my migration
function is not called.


http://paste.openstack.org/show/183392/
http://paste.openstack.org/show/183405/



On Tue, Jan 20, 2015 at 12:31 AM, Nikesh Kumar Mahalka
nikeshmaha...@vedams.com wrote:
 Does cinder retype (v2) work for LVM?
 How do I use cinder retype?

 I tried migrating a volume from one volume-type LVM backend to
 another, but it failed. How can I achieve this?

 Similarly, I am writing a Cinder volume driver for my array and want
 to migrate volumes from one volume type to another across my array's
 backends, so I want to know how to achieve this in my driver.



 Regards
 Nikesh



[openstack-dev] what code in cinder volume driver supports volume migration between two backends of same type but having different volume types?

2015-01-19 Thread Nikesh Kumar Mahalka
Does cinder retype (v2) work for LVM?
How do I use cinder retype?

I tried migrating a volume from one volume-type LVM backend to
another, but it failed. How can I achieve this?

Similarly, I am writing a Cinder volume driver for my array and want
to migrate volumes from one volume type to another across my array's
backends, so I want to know how to achieve this in my driver.
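
One common pattern for a driver (sketched below against the Kilo-era driver
interface; the exact method signatures should be checked against your
release's cinder.volume.driver, and the backend names here are illustrative)
is to implement migrate_volume and retype, returning a "not handled" result
so Cinder falls back to generic, host-assisted migration:

```python
# Sketch of the driver-side hooks (assumed Kilo-era interface; verify
# signatures against your release). Returning (False, None) / False tells
# Cinder the driver could not handle the request natively, so the generic
# host-assisted path is used instead.

class MyArrayDriver:
    def migrate_volume(self, ctxt, volume, host):
        backend = host["capabilities"].get("volume_backend_name")
        if backend != "MyArray":          # illustrative check
            return (False, None)          # fall back to generic migration
        # ... perform the array-native move here ...
        return (True, None)

    def retype(self, ctxt, volume, new_type, diff, host):
        # Return True if the array can convert in place (e.g. change a
        # QoS/tier setting); False makes Cinder migrate the volume to
        # satisfy the new type instead.
        return "extra_specs" in diff and not diff["extra_specs"]

drv = MyArrayDriver()
print(drv.migrate_volume(None, {"id": "v1"},
                         {"capabilities": {"volume_backend_name": "Other"}}))
# (False, None)
```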



Regards
Nikesh



[openstack-dev] sos-ci for cinder scst

2015-01-16 Thread Nikesh Kumar Mahalka
Hi,

The localconf.base file in sos-ci/sos-ci/templates has:

CINDER_BRANCH = master
volume_driver=cinder.volume.drivers.solidfire.SolidFireDriver

Similarly, in our localconf.base file we have:

CINDER_BRANCH = master

[[post-config|$CINDER_CONF]]
[lvmdriver-1]
iscsi_helper=scstadmin
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

When sos-ci launches an instance and tries to install devstack with
CINDER_BRANCH set to the Gerrit event's patch reference, the
cinder-volume service is unable to start, because our code is not in
master for this local.conf to be run with the LVMISCSIDriver.

As far as we know, we should not hard-code
CINDER_BRANCH=refs/changes/78/145778/1 in our localconf.base, because
sos-ci sets CINDER_BRANCH from the Gerrit event stream's patch
reference.



Regards
Nikesh


[openstack-dev] Kilo devstack issue

2015-01-12 Thread Nikesh Kumar Mahalka
Hi,
We deployed a Kilo devstack on an Ubuntu 14.04 server.
We successfully launched an instance from the dashboard, but we are
unable to open the instance's console from the dashboard. Also, the
instance is unable to get an IP.

Below is link for local.conf
http://paste.openstack.org/show/156497/



Regards
Nikesh


[openstack-dev] cinder.brick.initiator

2014-11-06 Thread Nikesh Kumar Mahalka
Which volume operations touch the code in cinder.brick.initiator?

I am using the LVMISCSIDriver with tgtadm as the iscsi_helper.
I want to use hardware acceleration for the iSCSI target on the Cinder
block storage node.


Any help or suggestions would be appreciated.





Regards
Nikesh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] No one replying on tempest issue?Please share your experience

2014-09-29 Thread Nikesh Kumar Mahalka
How do I get the nova-compute logs in a Juno devstack?
Below are nova services:
vedams@vedams-compute-fc:/opt/stack/tempest$ ps -aef | grep nova
vedams   15065 14812  0 10:56 pts/10   00:00:52 /usr/bin/python
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
vedams   15077 14811  0 10:56 pts/900:02:06 /usr/bin/python
/usr/local/bin/nova-api
vedams   15086 14818  0 10:56 pts/12   00:00:09 /usr/bin/python
/usr/local/bin/nova-cert --config-file /etc/nova/nova.conf
vedams   15095 14836  0 10:56 pts/17   00:00:09 /usr/bin/python
/usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
vedams   15096 14821  0 10:56 pts/13   00:00:09 /usr/bin/python
/usr/local/bin/nova-network --config-file /etc/nova/nova.conf
vedams   15100 14844  0 10:56 pts/18   00:00:00 /usr/bin/python
/usr/local/bin/nova-objectstore --config-file /etc/nova/nova.conf
vedams   15101 14826  0 10:56 pts/15   00:00:05 /usr/bin/python
/usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web
/opt/stack/noVNC
vedams   15103 14814  0 10:56 pts/11   00:02:02 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15104 14823  0 10:56 pts/14   00:00:11 /usr/bin/python
/usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
vedams   15117 14831  0 10:56 pts/16   00:00:00 /usr/bin/python
/usr/local/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf
vedams   15195 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15196 15103  0 10:56 pts/11   00:00:25 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15197 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15198 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15208 15077  0 10:56 pts/900:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15209 15077  0 10:56 pts/900:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15238 15077  0 10:56 pts/900:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15239 15077  0 10:56 pts/900:00:01 /usr/bin/python
/usr/local/bin/nova-api
vedams   15240 15077  0 10:56 pts/900:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15241 15077  0 10:56 pts/900:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15248 15077  0 10:56 pts/900:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15249 15077  0 10:56 pts/900:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   21850 14712  0 16:16 pts/000:00:00 grep --color=auto nova


Below are the nova log files:
vedams@vedams-compute-fc:/opt/stack/tempest$ ls
/opt/stack/logs/screen/screen-n-
screen-n-api.2014-09-28-101810.logscreen-n-cond.log
screen-n-net.2014-09-28-101810.logscreen-n-obj.log
screen-n-api.log  screen-n-cpu.2014-09-28-101810.log
screen-n-net.log  screen-n-sch.2014-09-28-101810.log
screen-n-cauth.2014-09-28-101810.log  screen-n-cpu.log
screen-n-novnc.2014-09-28-101810.log  screen-n-sch.log
screen-n-cauth.logscreen-n-crt.2014-09-28-101810.log
screen-n-novnc.logscreen-n-xvnc.2014-09-28-101810.log
screen-n-cond.2014-09-28-101810.log   screen-n-crt.log
screen-n-obj.2014-09-28-101810.logscreen-n-xvnc.log


Below are the nova screen sessions:
6-$(L) n-api  7$(L) n-cpu  8$(L) n-cond  9$(L) n-crt  10$(L) n-net  11$(L)
n-sch  12$(L) n-novnc  13$(L) n-xvnc  14$(L) n-cauth  15$(L) n-obj




Regards
Nikesh


On Tue, Sep 23, 2014 at 3:10 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 Hi,
 I am able to do all volume operations through the dashboard and CLI
 commands, but when I run the tempest tests some of them fail. To
 contribute a Cinder volume driver for my client, must all tempest
 tests pass?

 Ex:
 1)
 ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2
 tests fail.

 But when I run the tests in test_volumes_snapshots individually, they
 all pass.

 2)
 ./run_tempest.sh
 tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
 This also fails.



 Regards
 Nikesh

 On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
 wrote:

 Hi Nikesh,

  -Original Message-
  From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
  Sent: Saturday, September 20, 2014 9:49 PM
  To: openst...@lists.openstack.org; OpenStack Development Mailing List
 (not for usage questions)
  Subject: Re: [Openstack] No one replying on tempest issue?Please share
 your experience
 
  I still did not get any reply.

 Jay has already replied to this mail, please check the nova-compute
 and cinder-volume log as he said[1].

 [1]:
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

  Now I ran the command below

Re: [openstack-dev] [Openstack] No one replying on tempest issue?Please share your experience

2014-09-23 Thread Nikesh Kumar Mahalka
Hi,
I am able to do all volume operations through the dashboard and CLI
commands, but when I run the tempest tests some of them fail. To
contribute a Cinder volume driver for my client, must all tempest tests
pass?

Ex:
1)
./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2
tests fail.

But when I run the tests in test_volumes_snapshots individually, they
all pass.

2)
./run_tempest.sh
tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
This also fails.



Regards
Nikesh

On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:

 Hi Nikesh,

  -Original Message-
  From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
  Sent: Saturday, September 20, 2014 9:49 PM
  To: openst...@lists.openstack.org; OpenStack Development Mailing List
 (not for usage questions)
  Subject: Re: [Openstack] No one replying on tempest issue?Please share
 your experience
 
  I still did not get any reply.

 Jay has already replied to this mail, please check the nova-compute
 and cinder-volume log as he said[1].

 [1]:
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html

  Now I ran the command below:
  ./run_tempest.sh
 tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot
 
  and the test fails.
 
 
  Actually, after analyzing tempest.log, I found that during creation of
  a volume from a snapshot, tearDownClass is called and deletes the
  snapshot before the volume is created, and my test fails.

 I guess the failure you mentioned above is:

 2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client
 [req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request
 (VolumesSnapshotTest:tearDownClass): 404 GET

 http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6
 0.029s

 and

 2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client
 [req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request
 (VolumesSnapshotTest:tearDownClass): 404 GET

 http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b
 0.034s

 right?
 If so, that is not a problem.
 VolumesSnapshotTest creates two volumes, and the tearDownClass checks
 their deletion by polling the volume status until 404 (NotFound) [2].

 [2]:
 https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128
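
The teardown behaviour referenced in [2] — polling a resource until the GET
returns 404 (NotFound) — can be sketched generically; the fake client below
is purely illustrative:

```python
import time

class NotFound(Exception):
    """Stands in for the 404 the REST client raises."""

def wait_for_deletion(get_resource, timeout=10.0, interval=0.01):
    """Poll until get_resource() raises NotFound (the 404 case above)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            get_resource()
        except NotFound:
            return True          # resource is gone -- teardown succeeds
        time.sleep(interval)
    return False                 # still present after the timeout

# Fake client: pretends the resource disappears after 3 polls.
calls = {"n": 0}
def fake_get():
    calls["n"] += 1
    if calls["n"] >= 3:
        raise NotFound()
    return {"status": "deleting"}

print(wait_for_deletion(fake_get))  # True
```

So the 404s in the log are the expected exit condition of this loop, not an
error in the test run.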

  I deployed a juno devstack setup for a cinder volume driver.
  I changed cinder.conf file and tempest.conf file for single backend and
 restarted cinder services.
  Now i ran tempest test as below:
  /opt/stack/tempest/run_tempest.sh
 tempest.api.volume.test_volumes_snapshots
 
  I am getting below output:
   Traceback (most recent call last):
File
 /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py, line
 176, in test_volume_from_snapshot
  snapshot = self.create_snapshot(self.volume_origin['id'])
File /opt/stack/tempest/tempest/api/volume/base.py, line 112, in
 create_snapshot
  'available')
File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 126, in wait_for_snapshot_status
  value = self._get_snapshot_status(snapshot_id)
File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 99, in _get_snapshot_status
  snapshot_id=snapshot_id)
  SnapshotBuildErrorException: Snapshot
 6b1eb319-33ef-4357-987a-58eb15549520 failed to build and is in
  ERROR status

 What happens if running the same operation as Tempest by hands on your
 environment like the following ?

 [1] $ cinder create 1
 [2] $ cinder snapshot-create <id of the created volume at [1]>
 [3] $ cinder create --snapshot-id <id of the created snapshot at [2]> 1
 [4] $ cinder show <id of the created volume at [3]>

 Please check whether the status of the created volume at [3] is
 "available" or not.

 Thanks
 Ken'ichi Ohmichi

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



Re: [openstack-dev] No one replying on tempest issue?Please share your experience

2014-09-20 Thread Nikesh Kumar Mahalka
I still did not get any reply.

Now I ran the command below:
./run_tempest.sh
tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot

and the test fails.


After analyzing tempest.log, I found that during creation of a volume
from a snapshot, tearDownClass is called and deletes the snapshot
before the volume is created, so my test fails.

I have attached some files.


Regards
Nikesh

On Sat, Sep 20, 2014 at 6:12 PM, Nikesh Kumar Mahalka 
nikeshmaha...@vedams.com wrote:

 I still did not get any reply.

 Now I ran the command below:
 ./run_tempest.sh
 tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot

 and the test fails.


 After analyzing tempest.log, I found that during creation of a volume
 from a snapshot, tearDownClass is called and deletes the snapshot
 before the volume is created, so my test fails.

 I have attached some files.


 Regards
 Nikesh




 On Tue, Sep 16, 2014 at 8:40 PM, Nikesh Kumar Mahalka 
 nikeshmaha...@vedams.com wrote:

 Hi,
 I deployed a juno devstack setup for a cinder volume driver.
 I changed cinder.conf file and tempest.conf file for single backend and
 restarted cinder services.

  Now I ran the tempest test as below:
 /opt/stack/tempest/run_tempest.sh
 tempest.api.volume.test_volumes_snapshots

 I am getting below output:
  Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py,
 line 176, in test_volume_from_snapshot
 snapshot = self.create_snapshot(self.volume_origin['id'])
   File /opt/stack/tempest/tempest/api/volume/base.py, line 112, in
 create_snapshot
 'available')
   File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 126, in wait_for_snapshot_status
 value = self._get_snapshot_status(snapshot_id)
   File
 /opt/stack/tempest/tempest/services/volume/json/snapshots_client.py, line
 99, in _get_snapshot_status
 snapshot_id=snapshot_id)
 SnapshotBuildErrorException: Snapshot
 6b1eb319-33ef-4357-987a-58eb15549520 failed to build and is in ERROR status


 Ran 14 tests in 712.023s

 FAILED (failures=1)


  Has anyone faced such a problem?


 Regards
 Nikesh





tempest.conf
Description: Binary data


cinder.conf
Description: Binary data


tempest.log
Description: Binary data
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] how to deploy juno devstack without multiple backend

2014-09-16 Thread Nikesh Kumar Mahalka
Hi, I already tried many things, but it was not working.
By default, the LVM multi-backend is enabled in a Juno devstack.

Then I went through the devstack Juno code:

*enabling only a single backend:*
I did not find an exact solution, so after installing devstack I change
cinder.conf for a single backend and restart the cinder services.

*enabling multiple backends:*
I add this extra line in the [[local|localrc]] section of local.conf:
*CINDER_ENABLED_BACKENDS=hp_msa:hp_msa_driver,lvm:lvmdriver-1*

and outside [[local|localrc]] in local.conf:

[[post-config|$CINDER_CONF]]
[hp_msa_driver]
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip = 192.168.2.192
san_login = demo
san_password =!demo
volume_backend_name=HPMSA_FC

[lvmdriver-1]
volume_group=stack-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

Then I check cinder.conf again; if something extra is present, I remove it
and restart the cinder services.
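A quick way to sanity-check the edited cinder.conf is to verify that every backend named in enabled_backends has a matching config section. A rough sketch (the conf fragment below is a made-up miniature of the multi-backend layout above, not an actual file from this setup):

```python
import configparser

# Hypothetical miniature of the multi-backend cinder.conf discussed above.
conf_text = """
[DEFAULT]
enabled_backends = hp_msa_driver,lvmdriver-1

[hp_msa_driver]
volume_backend_name = HPMSA_FC

[lvmdriver-1]
volume_backend_name = LVM_iSCSI
"""

parser = configparser.ConfigParser()
parser.read_string(conf_text)

backends = [b.strip() for b in parser["DEFAULT"]["enabled_backends"].split(",")]
# Each listed backend needs its own section, or cinder-volume cannot load it.
missing = [b for b in backends if not parser.has_section(b)]
print(backends, missing)  # ['hp_msa_driver', 'lvmdriver-1'] []
```

If `missing` is non-empty after an edit, a restart of cinder-volume will not bring up that backend.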


Regards
Nikesh


On Tue, Sep 16, 2014 at 1:44 PM, Swapnil Kulkarni cools...@gmail.com
wrote:



 On Tue, Sep 16, 2014 at 12:31 PM, Manickam, Kanagaraj 
 kanagaraj.manic...@hp.com wrote:

  HP MSA is supported by cinder; use the following guidelines:


 http://docs.openstack.org/trunk/config-reference/content/hp-msa-driver.html



 you could install devstack and follow the above wiki or update the above
 defined HP MSA param as suggested by devstack at
 http://devstack.org/configuration.html



 [[post-config|$NOVA_CONF]]

 I think it would be  [[post-config|$CINDER_CONF]]

  HERE ADD THE HP MSA CONFIG DETAILS.





 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




[openstack-dev] No one replying on tempest issue?Please share your experience

2014-09-16 Thread Nikesh Kumar Mahalka
Hi,
I deployed a Juno devstack setup for a cinder volume driver.
I changed the cinder.conf and tempest.conf files for a single backend and
restarted the cinder services.

Now I ran a tempest test as below:
/opt/stack/tempest/run_tempest.sh tempest.api.volume.test_volumes_snapshots

I am getting the output below:
 Traceback (most recent call last):
   File "/opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py", line 176, in test_volume_from_snapshot
     snapshot = self.create_snapshot(self.volume_origin['id'])
   File "/opt/stack/tempest/tempest/api/volume/base.py", line 112, in create_snapshot
     'available')
   File "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py", line 126, in wait_for_snapshot_status
     value = self._get_snapshot_status(snapshot_id)
   File "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py", line 99, in _get_snapshot_status
     snapshot_id=snapshot_id)
 SnapshotBuildErrorException: Snapshot 6b1eb319-33ef-4357-987a-58eb15549520
 failed to build and is in ERROR status


Ran 14 tests in 712.023s

FAILED (failures=1)


Has anyone faced such a problem?


Regards
Nikesh


[openstack-dev] tempest error

2014-09-15 Thread Nikesh Kumar Mahalka
Hi, I deployed an Icehouse devstack on Ubuntu 14.04.
When I run the tempest volume tests, I get errors.
I have also attached my cinder.conf and tempest.conf files.

I run the tempest tests with the command below:
./run_tempest.sh tempest.api.volume

*Below is the error:*

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/test.py", line 128, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_actions.py", line 105, in test_volume_upload
    self.image_client.wait_for_image_status(image_id, 'active')
  File "/opt/stack/tempest/tempest/services/image/v1/json/image_client.py", line 304, in wait_for_image_status
    raise exceptions.TimeoutException(message)
TimeoutException: Request timed out
Details: (VolumesV2ActionsTestXML:test_volume_upload) Time Limit Exceeded!
(196s) while waiting for active, but we got saving.

Ran 248 tests in 2671.199s

FAILED (failures=26)



Regards
Nikesh


tempest.conf
Description: Binary data


cinder.conf
Description: Binary data


[openstack-dev] tempest test error

2014-09-12 Thread Nikesh Kumar Mahalka
Hi,
I deployed a Juno devstack on an Ubuntu 14.04 server.

Below are the services, which are running fine:
$ cinder service-list
+------------------+-------------------------------------+------+---------+-------+------------------------+-----------------+
|      Binary      |                 Host                | Zone |  Status | State |       Updated_at       | Disabled Reason |
+------------------+-------------------------------------+------+---------+-------+------------------------+-----------------+
| cinder-scheduler | juno-devstack-server                | nova | enabled |   up  | 2014-09-12T19:07:00.00 |       None      |
|  cinder-volume   | juno-devstack-server@dothill_driver | nova | enabled |   up  | 2014-09-12T19:06:55.00 |       None      |
+------------------+-------------------------------------+------+---------+-------+------------------------+-----------------+


$ nova service-list
+----+------------------+----------------------+----------+---------+-------+------------------------+-----------------+
| Id |      Binary      |         Host         |   Zone   |  Status | State |       Updated_at       | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+------------------------+-----------------+
| 1  | nova-conductor   | juno-devstack-server | internal | enabled |   up  | 2014-09-12T19:08:06.00 |        -        |
| 2  | nova-cert        | juno-devstack-server | internal | enabled |   up  | 2014-09-12T19:08:08.00 |        -        |
| 3  | nova-network     | juno-devstack-server | internal | enabled |   up  | 2014-09-12T19:08:08.00 |        -        |
| 4  | nova-compute     | juno-devstack-server | nova     | enabled |   up  | 2014-09-12T19:07:59.00 |        -        |
| 5  | nova-scheduler   | juno-devstack-server | internal | enabled |   up  | 2014-09-12T19:08:05.00 |        -        |
| 6  | nova-consoleauth | juno-devstack-server | internal | enabled |   up  | 2014-09-12T19:08:00.00 |        -        |
+----+------------------+----------------------+----------+---------+-------+------------------------+-----------------+

But when I run the tempest tests, I get the failures below:

Ran 1744 tests in 3133.358s

FAILED (failures=105)

Among the failures:
tempest.api.compute.volumes.test_volumes_get.VolumesGetTestXML
    test_volume_create_get_delete[gate,smoke]                           FAIL
setUpClass (tempest.api.compute.volumes.test_volumes_list.VolumesTestXML)
                                                                        FAIL



Regards
Nikesh



[openstack-dev] Launch of a instance failed in juno

2014-08-30 Thread Nikesh Kumar Mahalka
Launching an instance failed in a Juno devstack on an Ubuntu Server 14.04
virtual machine.
I am getting a "host not found" error.
Below is part of /opt/stack/logs/screen/screen-n-cond.log:
2014-08-30 12:06:51.721 ERROR nova.scheduler.utils
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] [instance:
2a679ed7-2f60-493a-a6cf-d937f11f442b] Error from last host:
juno-devstack-server (node juno-devstack-server): [u'Traceback (most recent
call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line
1932, in do_build_and_run_instance\n    filter_properties)\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 2061, in
_build_and_run_instance\n    instance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
2a679ed7-2f60-493a-a6cf-d937f11f442b was re-scheduled: not all arguments
converted during string formatting\n']
2014-08-30 12:06:51.724 INFO oslo.messaging._drivers.impl_rabbit
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] Connecting to AMQP
server on 192.168.2.153:5672
2014-08-30 12:06:51.736 INFO oslo.messaging._drivers.impl_rabbit
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] Connected to AMQP
server on 192.168.2.153:5672
2014-08-30 12:06:51.763 WARNING nova.scheduler.driver
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] [instance:
2a679ed7-2f60-493a-a6cf-d937f11f442b] NoValidHost exception with message:
'No valid host was found.'
2014-08-30 12:06:51.763 WARNING nova.scheduler.driver
[req-744ba1cf-7433-46b4-9771-9600a87e8c28 admin admin] [instance:
2a679ed7-2f60-493a-a6cf-d937f11f442b] Setting instance to ERROR state.


Earlier I also mailed about this, and I got the reply "Compute nodes do not
support the QEMU hypervisor from Juno, so you should not deploy a compute
node on a VM."

Is there any link in support of this answer?


Another observation is below.

Before ./stack.sh, the contents of the hosts file are:
vi /etc/hosts
127.0.0.1   localhost
192.168.2.153   juno-devstack-server
#127.0.1.1  juno-devstack-server

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

After ./stack.sh, the contents of the hosts file are:

127.0.0.1   localhost  *juno-devstack-server*
192.168.2.153   juno-devstack-server
#127.0.1.1  juno-devstack-server

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


Regards
Nikesh


[openstack-dev] Launch of a instance failed

2014-08-28 Thread Nikesh Kumar Mahalka
Hi, I am deploying a Juno devstack on an Ubuntu Server 14.04 virtual machine.
After installation, when I try to launch an instance, it fails.
I am getting a "host not found" error.
Below is part of /opt/stack/logs/screen/screen-n-cond.log showing the error:
2014-08-28 23:44:59.448 ERROR nova.scheduler.utils
[req-6f220296-8ec2-4e49-821d-0d69d3acc315 admin admin] [instance:
7f105394-414c-4458-b1a1-6f37d6cff87a] Error from last host:
juno-devstack-server (node juno-devstack-server): [u'Traceback (most recent
call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line
1932, in do_build_and_run_instance\n    filter_properties)\n', u'  File
"/opt/stack/nova/nova/compute/manager.py", line 2067, in
_build_and_run_instance\n    instance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
7f105394-414c-4458-b1a1-6f37d6cff87a was re-scheduled: not all arguments
converted during string formatting\n']
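The text "not all arguments converted during string formatting" inside the RescheduledException above is the standard Python error produced when a %-format string receives more arguments than it has placeholders. The sketch below merely reproduces that message; it is not the nova code itself:

```python
# Reproduce the error text from the log: one %s placeholder, two arguments.
try:
    "Build failed on host %s" % ("host-1", "extra-arg")
    msg = ""
except TypeError as exc:
    msg = str(exc)
print(msg)  # not all arguments converted during string formatting
```

In other words, the scheduler error is hiding a formatting bug in whatever exception message the compute code tried to build.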







Regards
Nikesh


[openstack-dev] devstack local.conf file

2014-08-11 Thread Nikesh Kumar Mahalka
Hi,
I have gone through the devstack links.
They are not as clear as the openstack.org documents.


For example:
When I use the local.conf below in devstack, hp_msa_driver does not appear
in enabled_backends in cinder.conf after running stack.sh.

[[local|localrc]]
ADMIN_PASSWORD=vedams123
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.170/29
HOST_IP=192.168.2.151
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
TEMPEST_VOLUME_DRIVER=hp_msa_fc
TEMPEST_VOLUME_VENDOR=Hewlett-Packard
TEMPEST_STORAGE_PROTOCOL=FC


[[post-config|$CINDER_CONF]]
[hp_msa_driver]
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver
san_ip = 192.168.2.192
san_login = manage
san_password =!manage
volume_backend_name=HPMSA_FC


[lvmdriver-1]
volume_group=stack-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI



*I get the cinder.conf file below after running the stack.sh script:*

[keystone_authtoken]
auth_uri = http://192.168.2.151:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = vedams123
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.151:35357

[DEFAULT]
rabbit_password = vedams123
rabbit_hosts = 192.168.2.151
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
*default_volume_type = lvmdriver-1*
*enabled_backends = lvmdriver-1*
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://root:vedams123@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.151
verbose = True
debug = True
auth_strategy = keystone

[lvmdriver-1]
volume_group = stack-volumes-1
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM_iSCSI

[hp_msa_driver]
volume_backend_name = HPMSA_FC
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver = cinder.volume.drivers.san.hp.hp_msa_fc.HPMSAFCDriver



*Then I analyzed the source code of stack.sh and added this line to local.conf:*
*CINDER_ENABLED_BACKENDS=hp_msa:hp_msa_driver,lvm:lvmdriver-1*


Now hp_msa_driver appears in enabled_backends in cinder.conf.



Regards
Nikesh


[openstack-dev] tempest bug

2014-08-09 Thread Nikesh Kumar Mahalka
I have reported a bug, "tempest volume-type test failed for hp_msa_fc
driver", in the tempest project.
The bug ID is Bug #1353850.
My tempest tests fail on my cinder driver.



No one has responded to my bug yet.
I am new to this area.
Please help me solve this.



Regards
Nikesh


[openstack-dev] How to run 'tempest-dsvm-full' locally

2014-08-02 Thread Nikesh Kumar Mahalka
I want to run 'tempest-dsvm-full' in my local devstack environment, as
mentioned in the link below:

https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver


Currently I have not proposed a blueprint for my cinder driver, and my
company has not signed the CLA.
So I want to first run 'tempest-dsvm-full' locally.


How can I run this locally without signing any CLA?



Regards
Nikesh


[openstack-dev] how and which tempest tests to run

2014-08-01 Thread Nikesh Kumar Mahalka
I deployed a single-node devstack on Ubuntu 14.04.
This devstack belongs to Juno.
I have written a cinder volume driver for my client's backend.
I want to contribute this driver to the Juno release.
As I understand the contribution process, it requires running tempest tests
for Continuous Integration.

Could anyone tell me how, and which, tempest tests to run on this devstack
deployment for a cinder volume driver?
Tempest has many test cases. Do I have to pass all of them to contribute
my driver?

Also, am I missing anything in the local.conf below?

*Below are the steps of my devstack deployment:*

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR=CLIENT
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh


Re: [openstack-dev] [Openstack] Cinder tempest api volume tests failed

2014-08-01 Thread Nikesh Kumar Mahalka
Hi Mike, the test which failed for me is:
*tempest.api.volume.admin.test_volume_types.VolumeTypesTest*

I am getting the error in the function call below within that test:
*self.volumes_client.wait_for_volume_status(volume['id'], 'available')*

This call is made in the following test function:

*@test.attr(type='smoke')*
*def
test_create_get_delete_volume_with_volume_type_and_extra_specs(self)*

I looked in the c-sch log and found this major issue:
*2014-08-01 14:08:05.773 11853 ERROR cinder.scheduler.flows.create_volume
[req-ceafd00c-30b1-4846-a555-6116556efb3b 43af88811b2243238d3d9fc732731565
a39922e8e5284729b07fcd045cfd5a88 - - -] Failed to run task
cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create:
No valid host was found. No weighed hosts available*

Analyzing the test, I found that:
1) it creates a volume type with extra_specs
2) it creates a volume with that volume type, and this is where it fails.
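A "No valid host was found / No weighed hosts available" error at volume-create time usually means no backend reported capabilities matching the volume type's extra_specs (for example, a volume_backend_name that no enabled backend advertises). A simplified sketch of that matching step (the host list, capability dicts, and helper are invented for illustration, not the real cinder scheduler API):

```python
def host_passes(capabilities, extra_specs):
    """Simplified capabilities check: every extra_spec key must match
    what the backend reports (e.g. volume_backend_name)."""
    return all(capabilities.get(k) == v for k, v in extra_specs.items())

hosts = [
    {"host": "devstack@client_driver", "volume_backend_name": "CLIENT_iSCSI"},
]
# A volume type whose backend name matches nothing that is enabled:
extra_specs = {"volume_backend_name": "WRONG_NAME"}

valid = [h["host"] for h in hosts if host_passes(h, extra_specs)]
print(valid)  # [] -> the scheduler reports "No valid host was found"
```

So the first thing to compare is the volume type's volume_backend_name extra spec against the volume_backend_name each backend section sets in cinder.conf.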


*Below is my new local.conf file.*
*Am I missing anything in it?*

[[local|localrc]]
ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
FLAT_INTERFACE=eth0
FIXED_RANGE=192.168.2.80/29
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver
TEMPEST_VOLUME_DRIVER=client_iscsi
TEMPEST_VOLUME_VENDOR=CLIENT
TEMPEST_STORAGE_PROTOCOL=iSCSI
VOLUME_BACKING_FILE_SIZE=20G

[[post-config|$CINDER_CONF]]
[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip=192.168.2.192
san_login=some_name
san_password=some_password
client_iscsi_ips = 192.168.2.193


*Below is my cinder.conf:*
[keystone_authtoken]
auth_uri = http://192.168.2.64:5000/v2.0
signing_dir = /var/cache/cinder
admin_password = some_password
admin_user = cinder
admin_tenant_name = service
cafile =
identity_uri = http://192.168.2.64:35357

[DEFAULT]
rabbit_password = some_password
rabbit_hosts = 192.168.2.64
rpc_backend = cinder.openstack.common.rpc.impl_kombu
use_syslog = True
default_volume_type = client_driver
enabled_backends = client_driver
enable_v1_api = true
periodic_interval = 60
lock_path = /opt/stack/data/cinder
state_path = /opt/stack/data/cinder
osapi_volume_extension = cinder.api.contrib.standard_extensions
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
sql_connection = mysql://root:some_password@127.0.0.1/cinder?charset=utf8
iscsi_helper = tgtadm
my_ip = 192.168.2.64
verbose = True
debug = True
auth_strategy = keystone

[client_driver]
client_iscsi_ips = 192.168.2.193
san_password = !manage
san_login = manage
san_ip = 192.168.2.192
volume_driver =
cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver



Regards
Nikesh









On Fri, Aug 1, 2014 at 1:56 AM, Mike Perez thin...@gmail.com wrote:

 On 11:30 Thu 31 Jul , Nikesh Kumar Mahalka wrote:
  I deployed a single node devstack on Ubuntu 14.04.
  This devstack belongs to Juno.
 
  When i am running tempest api volume test, i am getting some tests
 failed.

 Hi Nikesh,

 To further figure out what's wrong, take a look at the c-vol, c-api and
 c-sch
 tabs in the stack screen session. If you're unsure where to go from there
 after
 looking at the output, set the `SCREEN_LOGDIR` setting in your local.conf
 [1]
 and copy the logs from those tabs to paste.openstack.org for us to see.

 [1] - http://devstack.org/configuration.html

 --
 Mike Perez




[openstack-dev] Cinder tempest api volume tests failed

2014-07-31 Thread Nikesh Kumar Mahalka
I deployed a single-node devstack on Ubuntu 14.04.
This devstack belongs to Juno.

When I run the tempest api volume tests, some tests fail.

*Below are the steps of my devstack deployment:*

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
#FLAT_INTERFACE = eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh

*Below is the step and a portion of the failed tests:*
cd /opt/stack/tempest
./run_tempest.sh tempest.api.volume

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_get.py", line 157, in test_volume_create_get_update_delete_as_clone
    origin = self.create_volume()
  File "/opt/stack/tempest/tempest/api/volume/base.py", line 103, in create_volume
    cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 162, in wait_for_volume_status
    raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume 4c195bdd-5fea-4da5-884e-69a2026d9ca0
failed to build and is in ERROR status


==
FAIL:
tempest.api.volume.test_volumes_get.VolumesV1GetTest.test_volume_create_get_update_delete_from_image[gate,image,smoke]
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-07-30 18:42:49,462 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 POST
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes
0.300s
2014-07-30 18:42:49,545 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 GET
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.082s
2014-07-30 18:42:50,626 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 GET
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.079s
2014-07-30 18:42:50,698 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:_run_cleanups): 202 DELETE
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.069s
2014-07-30 18:42:50,734 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:_run_cleanups): 404 GET
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.035s
}}}

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/test.py", line 128, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_get.py", line 153, in test_volume_create_get_update_delete_from_image
    self._volume_create_get_update_delete(imageRef=CONF.compute.image_ref)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_get.py", line 63, in _volume_create_get_update_delete
    self.client.wait_for_volume_status(volume['id'], 'available')
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 162, in wait_for_volume_status
    raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume 6e6585a9-6f7b-42c0-b099-ec72c13a4040
failed to build and is in ERROR status


==
FAIL: setUpClass
(tempest.api.volume.test_volumes_list.VolumesV1ListTestJSON)
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/test.py", line 76, in decorator
    f(cls)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_list.py", line 68, in setUpClass
    volume = cls.create_volume(metadata=cls.metadata)
  File "/opt/stack/tempest/tempest/api/volume/base.py", line 103, in create_volume
    cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 162, in wait_for_volume_status
    raise 

[openstack-dev] tempest api volume test failed

2014-07-30 Thread Nikesh Kumar Mahalka
I deployed a single-node devstack on Ubuntu 14.04.
This devstack belongs to Juno.

When I run the tempest api volume tests, some tests fail.

Below are the steps of my devstack deployment:
1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
#FLAT_INTERFACE = eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh


Now I am running the tempest test:
cd /opt/stack/tempest
./run_tempest.sh tempest.api.volume


Below is a portion of the failed tests:
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_get.py", line 157, in test_volume_create_get_update_delete_as_clone
    origin = self.create_volume()
  File "/opt/stack/tempest/tempest/api/volume/base.py", line 103, in create_volume
    cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 162, in wait_for_volume_status
    raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume 4c195bdd-5fea-4da5-884e-69a2026d9ca0
failed to build and is in ERROR status


==
FAIL: 
tempest.api.volume.test_volumes_get.VolumesV1GetTest.test_volume_create_get_update_delete_from_image[gate,image,smoke]
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-07-30 18:42:49,462 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 POST http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes
0.300s
2014-07-30 18:42:49,545 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 GET 
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.082s
2014-07-30 18:42:50,626 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:test_volume_create_get_update_delete_from_image):
200 GET 
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.079s
2014-07-30 18:42:50,698 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:_run_cleanups): 202 DELETE
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.069s
2014-07-30 18:42:50,734 16328 INFO [tempest.common.rest_client]
Request (VolumesV1GetTest:_run_cleanups): 404 GET
http://192.168.2.64:8776/v1/f0cd225e70f249b3a2e40daafb5e34bd/volumes/6e6585a9-6f7b-42c0-b099-ec72c13a4040
0.035s
}}}

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/test.py", line 128, in wrapper
    return f(self, *func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_get.py", line 153, in test_volume_create_get_update_delete_from_image
    self._volume_create_get_update_delete(imageRef=CONF.compute.image_ref)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_get.py", line 63, in _volume_create_get_update_delete
    self.client.wait_for_volume_status(volume['id'], 'available')
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 162, in wait_for_volume_status
    raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
VolumeBuildErrorException: Volume 6e6585a9-6f7b-42c0-b099-ec72c13a4040
failed to build and is in ERROR status


==
FAIL: setUpClass (tempest.api.volume.test_volumes_list.VolumesV1ListTestJSON)
--
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/test.py", line 76, in decorator
    f(cls)
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_list.py", line 68, in setUpClass
    volume = cls.create_volume(metadata=cls.metadata)
  File "/opt/stack/tempest/tempest/api/volume/base.py", line 103, in create_volume
    cls.volumes_client.wait_for_volume_status(volume['id'], 'available')
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 162, in wait_for_volume_status
    raise 

[openstack-dev] tempest api volume test failed

2014-07-29 Thread Nikesh Kumar Mahalka
I deployed a single-node devstack on Ubuntu 14.04.
This devstack belongs to Juno.

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
#FLAT_INTERFACE = eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh

5)
I am running the test below:
cd /opt/stack/tempest
./run_tempest.sh tempest.api.volume


But some tests fail, although manually I am able to perform all volume
operations.
Can anyone tell me where I am wrong?
Below is a portion of the failed tests:

==
FAIL: 
tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTestXML.test_volume_from_snapshot[gate]
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-07-28 12:01:41,514 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 POST
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots
0.117s
2014-07-28 12:01:41,569 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
0.054s
2014-07-28 12:01:43,621 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
0.049s
}}}

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py", line 181, in test_volume_from_snapshot
    snapshot = self.create_snapshot(self.volume_origin['id'])
  File "/opt/stack/tempest/tempest/api/volume/base.py", line 106, in create_snapshot
    'available')
  File "/opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py", line 136, in wait_for_snapshot_status
    value = self._get_snapshot_status(snapshot_id)
  File "/opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py", line 109, in _get_snapshot_status
    snapshot_id=snapshot_id)
SnapshotBuildErrorException: Snapshot
20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7 failed to build and is in ERROR
status


Ran 246 tests in 4149.523s

FAILED (failures=10)
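The SnapshotBuildErrorException above comes out of a status poll: tempest repeatedly fetches the snapshot status and aborts as soon as it sees ERROR. A loose sketch of that kind of wait loop (a hypothetical helper with a fake backend, not the actual tempest client code):

```python
import itertools
import time

def wait_for_status(get_status, resource_id, target, timeout=5, interval=0.01):
    """Poll get_status(resource_id) until it returns `target`, fail fast
    on 'error', or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status(resource_id)
        if status == target:
            return status
        if status == "error":
            raise RuntimeError("%s went to ERROR while waiting for %s"
                               % (resource_id, target))
        time.sleep(interval)
    raise RuntimeError("timed out waiting for %s" % resource_id)

# Fake backend: the snapshot goes creating -> error, as in the failure above.
states = itertools.chain(["creating", "creating"], itertools.repeat("error"))
try:
    wait_for_status(lambda _id: next(states), "snap-1", "available")
    result = "available"
except RuntimeError as exc:
    result = str(exc)
print(result)  # snap-1 went to ERROR while waiting for available
```

So the test itself is only the messenger; the snapshot went to ERROR on the backend, and the driver logs (c-vol) hold the real cause.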



[openstack-dev] volume driver submission deadline for juno

2014-07-28 Thread Nikesh Kumar Mahalka
I want to write a cinder volume driver for my client and contribute it to
the OpenStack Juno release. So far I have not submitted any blueprint for
this cinder volume driver. Is there a deadline for submitting a blueprint
for a cinder volume driver? Is there a separate deadline for submitting
code and tests after registering the blueprint?