Re: [openstack-dev] [Openstack] No one replying on tempest issue? Please share your experience

2014-09-29 Thread GHANSHYAM MANN
It's present, as you mentioned: look at screen-n-cpu.*.log. The log files
of all running services are under /opt/stack/logs/screen/, which you can
analyze to find where the issue is.
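
For example, a minimal helper for spotting which service logs contain
failures (the default directory is an assumption based on a standard
devstack layout; pass another directory to override it):

```shell
# find_errors: list the screen log files that contain ERROR or Traceback
# lines. The default directory assumes a standard devstack layout.
find_errors() {
  local dir="${1:-/opt/stack/logs/screen}"
  grep -El 'ERROR|Traceback' "$dir"/screen-*.log 2>/dev/null
}

# Usage:
#   find_errors                 # scan the devstack screen logs
#   find_errors /some/other/dir # scan a different log directory
```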

Such queries can be asked on IRC (https://wiki.openstack.org/wiki/IRC) for a
quick reply instead of waiting on the mailing list.

For further discussion on the list, please change the subject line now :).

On Mon, Sep 29, 2014 at 7:51 PM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> How do I get the nova-compute logs in a Juno devstack?
> Below are the nova services:
> vedams@vedams-compute-fc:/opt/stack/tempest$ ps -aef | grep nova
> vedams   15065 14812  0 10:56 pts/10   00:00:52 /usr/bin/python
> /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
> vedams   15077 14811  0 10:56 pts/9    00:02:06 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15086 14818  0 10:56 pts/12   00:00:09 /usr/bin/python
> /usr/local/bin/nova-cert --config-file /etc/nova/nova.conf
> vedams   15095 14836  0 10:56 pts/17   00:00:09 /usr/bin/python
> /usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
> vedams   15096 14821  0 10:56 pts/13   00:00:09 /usr/bin/python
> /usr/local/bin/nova-network --config-file /etc/nova/nova.conf
> vedams   15100 14844  0 10:56 pts/18   00:00:00 /usr/bin/python
> /usr/local/bin/nova-objectstore --config-file /etc/nova/nova.conf
> vedams   15101 14826  0 10:56 pts/15   00:00:05 /usr/bin/python
> /usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web
> /opt/stack/noVNC
> vedams   15103 14814  0 10:56 pts/11   00:02:02 /usr/bin/python
> /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
> vedams   15104 14823  0 10:56 pts/14   00:00:11 /usr/bin/python
> /usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
> vedams   15117 14831  0 10:56 pts/16   00:00:00 /usr/bin/python
> /usr/local/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf
> vedams   15195 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
> /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
> vedams   15196 15103  0 10:56 pts/11   00:00:25 /usr/bin/python
> /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
> vedams   15197 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
> /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
> vedams   15198 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
> /usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
> vedams   15208 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15209 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15238 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15239 15077  0 10:56 pts/9    00:00:01 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15240 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15241 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15248 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   15249 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
> /usr/local/bin/nova-api
> vedams   21850 14712  0 16:16 pts/0    00:00:00 grep --color=auto nova
>
>
> Below are the nova log files:
> vedams@vedams-compute-fc:/opt/stack/tempest$ ls
> /opt/stack/logs/screen/screen-n-
> screen-n-api.2014-09-28-101810.log    screen-n-api.log
> screen-n-cauth.2014-09-28-101810.log  screen-n-cauth.log
> screen-n-cond.2014-09-28-101810.log   screen-n-cond.log
> screen-n-cpu.2014-09-28-101810.log    screen-n-cpu.log
> screen-n-crt.2014-09-28-101810.log    screen-n-crt.log
> screen-n-net.2014-09-28-101810.log    screen-n-net.log
> screen-n-novnc.2014-09-28-101810.log  screen-n-novnc.log
> screen-n-obj.2014-09-28-101810.log    screen-n-obj.log
> screen-n-sch.2014-09-28-101810.log    screen-n-sch.log
> screen-n-xvnc.2014-09-28-101810.log   screen-n-xvnc.log
>
>
> Below are the nova screen sessions:
> 6-$(L) n-api  7$(L) n-cpu  8$(L) n-cond  9$(L) n-crt  10$(L) n-net  11$(L)
> n-sch  12$(L) n-novnc  13$(L) n-xvnc  14$(L) n-cauth  15$(L) n-obj
>
>
>
>
> Regards
> Nikesh
>
>
> On Tue, Sep 23, 2014 at 3:10 PM, Nikesh Kumar Mahalka <
> nikeshmaha...@vedams.com> wrote:
>
>> Hi,
>> I am able to do all volume operations through the dashboard and CLI commands.
>> But when I run the tempest tests, some tests fail.
>> To contribute a Cinder volume driver for my client, must all the
>> tempest tests pass?
>>
>> Ex:
>> 1)
>> ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2 tests
>> fail.
>>
>> But when I run the individual tests in "test_volumes_snapshots", all
>> tests pass.
>>
>> 2)
>> ./run_tempest.sh
>> tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
>> This one also fails.
>>
>>
>>
>> Regards
>> Nikesh
>>
>> On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi

Re: [openstack-dev] [Openstack] No one replying on tempest issue? Please share your experience

2014-09-29 Thread Nikesh Kumar Mahalka
How do I get the nova-compute logs in a Juno devstack?
Below are the nova services:
vedams@vedams-compute-fc:/opt/stack/tempest$ ps -aef | grep nova
vedams   15065 14812  0 10:56 pts/10   00:00:52 /usr/bin/python
/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf
vedams   15077 14811  0 10:56 pts/9    00:02:06 /usr/bin/python
/usr/local/bin/nova-api
vedams   15086 14818  0 10:56 pts/12   00:00:09 /usr/bin/python
/usr/local/bin/nova-cert --config-file /etc/nova/nova.conf
vedams   15095 14836  0 10:56 pts/17   00:00:09 /usr/bin/python
/usr/local/bin/nova-consoleauth --config-file /etc/nova/nova.conf
vedams   15096 14821  0 10:56 pts/13   00:00:09 /usr/bin/python
/usr/local/bin/nova-network --config-file /etc/nova/nova.conf
vedams   15100 14844  0 10:56 pts/18   00:00:00 /usr/bin/python
/usr/local/bin/nova-objectstore --config-file /etc/nova/nova.conf
vedams   15101 14826  0 10:56 pts/15   00:00:05 /usr/bin/python
/usr/local/bin/nova-novncproxy --config-file /etc/nova/nova.conf --web
/opt/stack/noVNC
vedams   15103 14814  0 10:56 pts/11   00:02:02 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15104 14823  0 10:56 pts/14   00:00:11 /usr/bin/python
/usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
vedams   15117 14831  0 10:56 pts/16   00:00:00 /usr/bin/python
/usr/local/bin/nova-xvpvncproxy --config-file /etc/nova/nova.conf
vedams   15195 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15196 15103  0 10:56 pts/11   00:00:25 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15197 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15198 15103  0 10:56 pts/11   00:00:24 /usr/bin/python
/usr/local/bin/nova-conductor --config-file /etc/nova/nova.conf
vedams   15208 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15209 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15238 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15239 15077  0 10:56 pts/9    00:00:01 /usr/bin/python
/usr/local/bin/nova-api
vedams   15240 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15241 15077  0 10:56 pts/9    00:00:03 /usr/bin/python
/usr/local/bin/nova-api
vedams   15248 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   15249 15077  0 10:56 pts/9    00:00:00 /usr/bin/python
/usr/local/bin/nova-api
vedams   21850 14712  0 16:16 pts/0    00:00:00 grep --color=auto nova


Below are the nova log files:
vedams@vedams-compute-fc:/opt/stack/tempest$ ls
/opt/stack/logs/screen/screen-n-
screen-n-api.2014-09-28-101810.log    screen-n-api.log
screen-n-cauth.2014-09-28-101810.log  screen-n-cauth.log
screen-n-cond.2014-09-28-101810.log   screen-n-cond.log
screen-n-cpu.2014-09-28-101810.log    screen-n-cpu.log
screen-n-crt.2014-09-28-101810.log    screen-n-crt.log
screen-n-net.2014-09-28-101810.log    screen-n-net.log
screen-n-novnc.2014-09-28-101810.log  screen-n-novnc.log
screen-n-obj.2014-09-28-101810.log    screen-n-obj.log
screen-n-sch.2014-09-28-101810.log    screen-n-sch.log
screen-n-xvnc.2014-09-28-101810.log   screen-n-xvnc.log


Below are the nova screen sessions:
6-$(L) n-api  7$(L) n-cpu  8$(L) n-cond  9$(L) n-crt  10$(L) n-net  11$(L)
n-sch  12$(L) n-novnc  13$(L) n-xvnc  14$(L) n-cauth  15$(L) n-obj




Regards
Nikesh


On Tue, Sep 23, 2014 at 3:10 PM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> Hi,
> I am able to do all volume operations through the dashboard and CLI commands.
> But when I run the tempest tests, some tests fail.
> To contribute a Cinder volume driver for my client, must all the
> tempest tests pass?
>
> Ex:
> 1)
> ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2 tests
> fail.
>
> But when I run the individual tests in "test_volumes_snapshots", all
> tests pass.
>
> 2)
> ./run_tempest.sh
> tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
> This one also fails.
>
>
>
> Regards
> Nikesh
>
> On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi 
> wrote:
>
>> Hi Nikesh,
>>
>> > -Original Message-
>> > From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
>> > Sent: Saturday, September 20, 2014 9:49 PM
>> > To: openst...@lists.openstack.org; OpenStack Development Mailing List
>> (not for usage questions)
>> > Subject: Re: [Openstack] No one replying on tempest issue?Please share
>> your experience
>> >
>> > Still I did not get any reply.
>>
>> Jay has already replied to this mail; please check the nova-compute
>> and cinder-volume logs as he said [1].
>>
>> [1]:
>> http://lists.openstack.org/pipermail/openstack-dev/2014-September

Re: [openstack-dev] [Openstack] No one replying on tempest issue? Please share your experience

2014-09-23 Thread Denis Makogon
On Tue, Sep 23, 2014 at 12:40 PM, Nikesh Kumar Mahalka <
nikeshmaha...@vedams.com> wrote:

> Hi,
> I am able to do all volume operations through the dashboard and CLI commands.
> But when I run the tempest tests, some tests fail.
> To contribute a Cinder volume driver for my client, must all the
> tempest tests pass?
>
> Ex:
> 1)
> ./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2 tests
> fail.
>
>
Just as Jay said, to find out what's going wrong you have to tail the
cinder/nova logs while running the tests.
There are two things you should keep an eye on:
1. Behaviour (to be clear, whether the test scenario corresponds to what is
actually happening).
2. Resource utilization (it may cause errors when something goes wrong,
which takes you back to log analysis).
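
As a sketch of that log analysis, a small helper that prints context around
each ERROR line can make it easier to match a failing test to the
service-side failure (plain shell, nothing OpenStack-specific assumed):

```shell
# show_context: print each ERROR line from a log file together with a few
# lines of surrounding context and line numbers, so a failing tempest test
# can be matched against what the service was doing at that moment.
show_context() {
  local logfile="$1" n="${2:-3}"
  grep -n -B "$n" -A "$n" 'ERROR' "$logfile"
}

# Example (path assumes a devstack layout):
#   show_context /opt/stack/logs/screen/screen-c-vol.log 5
```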


> But when I run the individual tests in "test_volumes_snapshots", all
> tests pass.
>
> 2)
> ./run_tempest.sh
> tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
> This one also fails.
>
>
>
> Regards
> Nikesh
>
> On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi 
> wrote:
>
>> Hi Nikesh,
>>
>> > -Original Message-
>> > From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
>> > Sent: Saturday, September 20, 2014 9:49 PM
>> > To: openst...@lists.openstack.org; OpenStack Development Mailing List
>> (not for usage questions)
>> > Subject: Re: [Openstack] No one replying on tempest issue?Please share
>> your experience
>> >
>> > Still I did not get any reply.
>>
>> Jay has already replied to this mail; please check the nova-compute
>> and cinder-volume logs as he said [1].
>>
>> [1]:
>> http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html
>>
>> > Now i ran below command:
>> > ./run_tempest.sh
>> tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot
>> >
>> > and i am getting test failed.
>> >
>> >
>> > Actually, after analyzing tempest.log, I found that:
>> > during the creation of a volume from a snapshot, tearDownClass is called
>> and it is deleting the snapshot before the volume is created,
>> > and my test fails.
>>
>> I guess the failure you mentioned at the above is:
>>
>> 2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client
>> [req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request
>> (VolumesSnapshotTest:tearDownClass): 404 GET
>>
>> http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6
>> 0.029s
>>
>> and
>>
>> 2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client
>> [req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request
>> (VolumesSnapshotTest:tearDownClass): 404 GET
>>
>> http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b
>> 0.034s
>>
>> right?
>> If so, that is not a problem.
>> VolumesSnapshotTest creates two volumes, and the tearDownClass verifies
>> their deletion by polling the volume status until it returns
>> 404 (NotFound) [2].
>>
>> [2]:
>> https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128
>>
>> > I deployed a juno devstack setup for a cinder volume driver.
>> > I changed cinder.conf file and tempest.conf file for single backend and
>> restarted cinder services.
>> > Now i ran tempest test as below:
>> > /opt/stack/tempest/run_tempest.sh
>> tempest.api.volume.test_volumes_snapshots
>> >
>> > I am getting below output:
>> >  Traceback (most recent call last):
>> >   File
>> "/opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py", line
>> 176, in test_volume_from_snapshot
>> > snapshot = self.create_snapshot(self.volume_origin['id'])
>> >   File "/opt/stack/tempest/tempest/api/volume/base.py", line 112, in
>> create_snapshot
>> > 'available')
>> >   File
>> "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py", line
>> 126, in wait_for_snapshot_status
>> > value = self._get_snapshot_status(snapshot_id)
>> >   File
>> "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py", line
>> 99, in _get_snapshot_status
>> > snapshot_id=snapshot_id)
>> > SnapshotBuildErrorException: Snapshot
>> 6b1eb319-33ef-4357-987a-58eb15549520 failed to build and is in
>> > ERROR status
>>
>> What happens if you run the same operations as Tempest by hand in your
>> environment, like the following?
>>
>> [1] $ cinder create 1
>> [2] $ cinder snapshot-create 
>>
> [2b] $ cinder snapshot-show  (check that the snapshot was created properly)

> [3] $ cinder create --snapshot-id  1
>> [4] $ cinder show 
>>
>> Please check whether the status of created volume at [3] is "available"
>> or not.
>>
>> Thanks
>> Ken'ichi Ohmichi
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openst...@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>
>
> _
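
The status polling that both tempest and the manual check rely on can be
sketched like this: a simplified stand-in for tempest's
wait_for_snapshot_status, where the command you pass in is assumed to print
only the status string (e.g. the cinder show output piped through awk):

```shell
# wait_for_status: poll a status command until it reports the wanted state,
# roughly what tempest's wait_for_snapshot_status does. The command is
# assumed to print only the status string ("available", "error", ...).
wait_for_status() {
  local cmd="$1" want="$2" tries="${3:-30}" status i
  for i in $(seq "$tries"); do
    status=$(eval "$cmd")
    [ "$status" = "$want" ] && return 0
    [ "$status" = "error" ] && return 1   # fail fast, as tempest does
    sleep 1
  done
  return 1
}

# Example (hypothetical volume ID):
#   wait_for_status "cinder show <volume-id> | awk '/ status /{print \$4}'" available
```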

Re: [openstack-dev] [Openstack] No one replying on tempest issue? Please share your experience

2014-09-23 Thread Nikesh Kumar Mahalka
Hi,
I am able to do all volume operations through the dashboard and CLI commands.
But when I run the tempest tests, some tests fail.
To contribute a Cinder volume driver for my client, must all the
tempest tests pass?

Ex:
1)
./run_tempest.sh tempest.api.volume.test_volumes_snapshots : 1 or 2 tests
fail.

But when I run the individual tests in "test_volumes_snapshots", all
tests pass.

2)
./run_tempest.sh
tempest.api.volume.test_volumes_actions.VolumesV2ActionsTest.test_volume_upload:
This one also fails.



Regards
Nikesh

On Mon, Sep 22, 2014 at 4:12 PM, Ken'ichi Ohmichi 
wrote:

> Hi Nikesh,
>
> > -Original Message-
> > From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
> > Sent: Saturday, September 20, 2014 9:49 PM
> > To: openst...@lists.openstack.org; OpenStack Development Mailing List
> (not for usage questions)
> > Subject: Re: [Openstack] No one replying on tempest issue?Please share
> your experience
> >
> > Still I did not get any reply.
>
> Jay has already replied to this mail; please check the nova-compute
> and cinder-volume logs as he said [1].
>
> [1]:
> http://lists.openstack.org/pipermail/openstack-dev/2014-September/046147.html
>
> > Now i ran below command:
> > ./run_tempest.sh
> tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTest.test_volume_from_snapshot
> >
> > and i am getting test failed.
> >
> >
> > Actually, after analyzing tempest.log, I found that:
> > during the creation of a volume from a snapshot, tearDownClass is called and
> it is deleting the snapshot before the volume is created,
> > and my test fails.
>
> I guess the failure you mentioned at the above is:
>
> 2014-09-20 00:42:12.519 10684 INFO tempest.common.rest_client
> [req-d4dccdcd-bbfa-4ddf-acd8-5a7dcd5b15db None] Request
> (VolumesSnapshotTest:tearDownClass): 404 GET
>
> http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/snapshots/71d3cad4-440d-4fbb-8758-76da17b6ace6
> 0.029s
>
> and
>
> 2014-09-20 00:42:22.511 10684 INFO tempest.common.rest_client
> [req-520a54ad-7e0a-44ba-95c0-17f4657bc3b0 None] Request
> (VolumesSnapshotTest:tearDownClass): 404 GET
>
> http://192.168.2.153:8776/v1/ff110b66c98d455092c6f2a2577b4c80/volumes/7469271a-d2a7-4ee6-b54a-cd0bf767be6b
> 0.034s
>
> right?
> If so, that is not a problem.
> VolumesSnapshotTest creates two volumes, and the tearDownClass verifies their
> deletion by polling the volume status until it returns 404 (NotFound) [2].
>
> [2]:
> https://github.com/openstack/tempest/blob/master/tempest/api/volume/base.py#L128
>
> > I deployed a juno devstack setup for a cinder volume driver.
> > I changed cinder.conf file and tempest.conf file for single backend and
> restarted cinder services.
> > Now i ran tempest test as below:
> > /opt/stack/tempest/run_tempest.sh
> tempest.api.volume.test_volumes_snapshots
> >
> > I am getting below output:
> >  Traceback (most recent call last):
> >   File
> "/opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py", line
> 176, in test_volume_from_snapshot
> > snapshot = self.create_snapshot(self.volume_origin['id'])
> >   File "/opt/stack/tempest/tempest/api/volume/base.py", line 112, in
> create_snapshot
> > 'available')
> >   File
> "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py", line
> 126, in wait_for_snapshot_status
> > value = self._get_snapshot_status(snapshot_id)
> >   File
> "/opt/stack/tempest/tempest/services/volume/json/snapshots_client.py", line
> 99, in _get_snapshot_status
> > snapshot_id=snapshot_id)
> > SnapshotBuildErrorException: Snapshot
> 6b1eb319-33ef-4357-987a-58eb15549520 failed to build and is in
> > ERROR status
>
> What happens if you run the same operations as Tempest by hand in your
> environment, like the following?
>
> [1] $ cinder create 1
> [2] $ cinder snapshot-create 
> [3] $ cinder create --snapshot-id  1
> [4] $ cinder show 
>
> Please check whether the status of created volume at [3] is "available" or
> not.
>
> Thanks
> Ken'ichi Ohmichi
>
> ___
> Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev