Re: [openstack-dev] [craton] Nomination of Thomas Maddox as Craton core

2017-03-23 Thread git harry
+1


From: Jim Baker 
Sent: 21 March 2017 20:41
To: OpenStack Development Mailing List
Subject: [openstack-dev] [craton] Nomination of Thomas Maddox as Craton core

I nominate Thomas Maddox as a core reviewer for the Craton project.

Thomas has shown extensive knowledge of Craton, working across a range of 
issues in the core service (down to the database modeling), in the client, and 
in the corresponding bugs, blueprints, and specs. Perhaps most notably, he has 
contributed a number of end-to-end patches, such as his work on project 
support.
https://review.openstack.org/#/q/owner:thomas.maddox

He has also provided expert help across a range of reviews, while always 
remaining amazingly positive with other team members and potential contributors:
https://review.openstack.org/#/q/reviewer:thomas.maddox

Further details on his contributions can be found here:
http://stackalytics.com/report/users/thomas-maddox

In my opinion, Thomas has proven that he will make a fantastic addition to the 
core review team. In particular, I'm confident that as a core reviewer he will 
help further improve the velocity of our project as a whole. I hope others 
concur with this assessment!

- Jim



Re: [openstack-dev] [Cinder] multiple backend issue

2014-07-19 Thread git harry
Ah, okay, I misunderstood. It looks like you've used the same config file on 
both the controller and compute nodes; notice how the output of cinder-manage 
host list gives you hosts corresponding to both backends on each of your two 
nodes.

 controller@lvmdriver-2 nova
 controller@lvmdriver-1 nova
 Compute@lvmdriver-1 nova
 Compute@lvmdriver-2 nova

Each cinder-volume service you are running has tried to set up both backends, 
even though only one of the volume groups is available to it. enabled_backends 
should list only the backends that particular cinder-volume service is 
responsible for, and you only need to define the configuration groups for 
those backends.

controller (/etc/cinder/cinder.conf):

[DEFAULT]
enabled_backends=lvmdriver-1

[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

compute (/etc/cinder/cinder.conf):

[DEFAULT]
enabled_backends=lvmdriver-2

[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b
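
After restarting the cinder-volume services, I'd expect "cinder-manage host 
list" to show only one backend host per volume node, i.e. something like:

controller@lvmdriver-1 nova
Compute@lvmdriver-2 nova

Your existing volume types already map to the two backends via 
volume_backend_name, so you can target a backend explicitly when creating a 
volume, for example (the display name here is just an example):

cinder create --volume-type lvm_compute --display-name test-vol 1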


> From: johnson.ch...@qsantechnology.com
> To: openstack-dev@lists.openstack.org
> Date: Fri, 18 Jul 2014 16:33:10 +
> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>
> Dear git-harry,
>
> My confusion is: why can I successfully create volumes on both the controller 
> node and the compute node, yet there is still an error message in 
> cinder-volume.log?
>
> Below is my environment:
> Controller node:
> Install cinder-api, cinder-schedule, cinder-volume
> Create cinder-volume-1 volume group
> Compute node:
> Install cinder-volume
> Create cinder-volume-2 volume group
>
> The below is the output of "cinder extra-specs-list",
> +--++--+
> | ID | Name | extra_specs |
> +--++--+
> | 30faffa9-7955-484f-9c96-3f40507aa62e | lvm_compute | 
> {u'volume_backend_name': u'LVM_iSCSI_b'} |
> | c2341962-b15e-4003-882f-08a8a36d3a0f | lvm_controller | 
> {u'volume_backend_name': u'LVM_iSCSI'} |
> +--++--+
>
> The below is the output of " cinder-manage host list"
> host zone
> controller nova
> Compute nova
> controller@lvmdriver-2 nova
> controller@lvmdriver-1 nova
> Compute@lvmdriver-1 nova
> Compute@lvmdriver-2 nova
>
> So I just want to make sure that everything is right in my environment.
>
> Regards,
> Johnson
>
>
> -Original Message-
> From: git harry [mailto:git-ha...@live.co.uk]
> Sent: Friday, July 18, 2014 4:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>
> I'm not sure what side effects you mean beyond the fact that it (lvmdriver-2) 
> doesn't work; if that is not a problem for you, there is nothing else to worry 
> about. You will, however, continue to get entries in the log informing you the 
> driver is uninitialised.
>
> The volume group needs to be on the same host as the cinder-volume service - 
> so it sounds like the service is running on your controller only. If you want 
> to locate volumes on the compute host you will need to install the service 
> there.
>
>
> 
>> From: johnson.ch...@qsantechnology.com
>> To: openstack-dev@lists.openstack.org
>> Date: Thu, 17 Jul 2014 15:39:40 +
>> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>>
>> Dear git-harry,
>>
>> I have created a volume group "cinder-volume-1" on my controller node, and 
>> another volume group "cinder-volume-2" on my compute node.
>>
>> I can create volumes successfully on the dedicated backend.
>> Of course I can ignore the error message, but I need to know whether there 
>> are any side effects.
>>
>> Regards,
>> Johnson
>>
>> -Original Message-
>> From: git harry [mailto:git-ha...@live.co.uk]
>> Sent: Thursday, July 17, 2014 7:32 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>>
>> You are using multibackend but it appears you haven't created both volume 
>> groups:
>>
>> Stderr: ' Volume group "cinder-volumes-2" not found\n'
>>
>> If you can create volumes, it suggests the other backend is correctly 
>> configured. So you can ignore the error if you want, but you will not be able 
>> to use the second backend you have attempted to set up.
>>

Re: [openstack-dev] [Cinder] multiple backend issue

2014-07-18 Thread git harry
I'm not sure what side effects you mean beyond the fact that it (lvmdriver-2) 
doesn't work; if that is not a problem for you, there is nothing else to worry 
about. You will, however, continue to get entries in the log informing you the 
driver is uninitialised.

The volume group needs to be on the same host as the cinder-volume service - so 
it sounds like the service is running on your controller only. If you want to 
locate volumes on the compute host you will need to install the service there.
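
As a quick sanity check, the command cinder runs at startup should succeed 
locally on each node; for example, on the compute node I'd expect something 
like:

sudo vgs --noheadings -o name
  cinder-volumes-2

(The output will also include any other volume groups on that box.)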



> From: johnson.ch...@qsantechnology.com
> To: openstack-dev@lists.openstack.org
> Date: Thu, 17 Jul 2014 15:39:40 +
> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>
> Dear git-harry,
>
> I have created a volume group "cinder-volume-1" on my controller node, and 
> another volume group "cinder-volume-2" on my compute node.
>
> I can create volumes successfully on the dedicated backend.
> Of course I can ignore the error message, but I need to know whether there 
> are any side effects.
>
> Regards,
> Johnson
>
> -Original Message-
> From: git harry [mailto:git-ha...@live.co.uk]
> Sent: Thursday, July 17, 2014 7:32 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Cinder] multiple backend issue
>
> You are using multibackend but it appears you haven't created both volume 
> groups:
>
> Stderr: ' Volume group "cinder-volumes-2" not found\n'
>
> If you can create volumes, it suggests the other backend is correctly 
> configured. So you can ignore the error if you want, but you will not be able 
> to use the second backend you have attempted to set up.
>
> 
>> From: johnson.ch...@qsantechnology.com
>> To: openstack-dev@lists.openstack.org
>> Date: Thu, 17 Jul 2014 11:03:41 +
>> Subject: [openstack-dev] [Cinder] multiple backend issue
>>
>>
>> Dear All,
>>
>>
>>
>> I have two machines as below,
>>
>> Machine1 (192.168.106.20): controller node (cinder node and volume node)
>> Machine2 (192.168.106.30): compute node (volume node)
>>
>> I can successfully create a cinder volume, but there is an error in
>> cinder-volume.log.
>>
>> 2014-07-17 18:49:01.105 5765 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
>> 2014-07-17 18:49:01.113 5765 INFO cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)
>> 2014-07-17 18:49:01.114 5764 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1)
>> 2014-07-17 18:49:01.124 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0)
>> 2014-07-17 18:49:01.965 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Error encountered during initialization of driver: LVMISCSIDriver
>> 2014-07-17 18:49:01.971 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Unexpected error while running command.
>> Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2
>> Exit code: 5
>> Stdout: ''
>> Stderr: ' Volume group "cinder-volumes-2" not found\n'
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Traceback (most recent call last):
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 243, in init_host
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.driver.check_for_setup_error()
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 83, in check_for_setup_error
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager executor=self._execute)
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py", line 81, in __init__
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager if self._vg_exists() is False:
>> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File

Re: [openstack-dev] [Cinder] multiple backend issue

2014-07-17 Thread git harry
You are using multibackend but it appears you haven't created both volume 
groups:

Stderr: ' Volume group "cinder-volumes-2" not found\n'

If you can create volumes, it suggests the other backend is correctly 
configured. So you can ignore the error if you want, but you will not be able 
to use the second backend you have attempted to set up.
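
If you do want the second backend, create the missing volume group on 
whichever host runs that cinder-volume instance and restart the service; 
something like the following, where the device path is hypothetical:

sudo pvcreate /dev/sdb
sudo vgcreate cinder-volumes-2 /dev/sdb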


> From: johnson.ch...@qsantechnology.com 
> To: openstack-dev@lists.openstack.org 
> Date: Thu, 17 Jul 2014 11:03:41 + 
> Subject: [openstack-dev] [Cinder] multiple backend issue 
> 
> 
> Dear All, 
> 
> 
> 
> I have two machines as below, 
> 
> Machine1 (192.168.106.20): controller node (cinder node and volume node) 
> Machine2 (192.168.106.30): compute node (volume node) 
> 
> I can successfully create a cinder volume, but there is an error in 
> cinder-volume.log. 
> 
> 2014-07-17 18:49:01.105 5765 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1) 
> 2014-07-17 18:49:01.113 5765 INFO cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0) 
> 2014-07-17 18:49:01.114 5764 AUDIT cinder.service [-] Starting cinder-volume node (version 2014.1) 
> 2014-07-17 18:49:01.124 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Starting volume driver LVMISCSIDriver (2.0.0) 
> 2014-07-17 18:49:01.965 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Error encountered during initialization of driver: LVMISCSIDriver 
> 2014-07-17 18:49:01.971 5765 ERROR cinder.volume.manager [req-82bf4ed2-0076-4f75-9d5b-9e9945cd6be2 - - - - -] Unexpected error while running command. 
> Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2 
> Exit code: 5 
> Stdout: '' 
> Stderr: ' Volume group "cinder-volumes-2" not found\n' 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Traceback (most recent call last): 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 243, in init_host 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.driver.check_for_setup_error() 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py", line 83, in check_for_setup_error 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager executor=self._execute) 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py", line 81, in __init__ 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager if self._vg_exists() is False: 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/brick/local_dev/lvm.py", line 106, in _vg_exists 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager self.vg_name, root_helper=self._root_helper, run_as_root=True) 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 136, in execute 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager return processutils.execute(*cmd, **kwargs) 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py", line 173, in execute 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager cmd=' '.join(cmd)) 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager ProcessExecutionError: Unexpected error while running command. 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o name cinder-volumes-2 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Exit code: 5 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stdout: '' 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager Stderr: ' Volume group "cinder-volumes-2" not found\n' 
> 2014-07-17 18:49:01.971 5765 TRACE cinder.volume.manager 
> 2014-07-17 18:49:03.236 5765 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on controller:5672 
> 2014-07-17 18:49:03.890 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 5811b9af-b24a-44fe-a424-a61f011f7a4c: skipping export 
> 2014-07-17 18:49:03.891 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] volume 8266e05b-6c87-421a-a625-f5d6e94f2c9f: skipping export 
> 2014-07-17 18:49:03.892 5764 INFO cinder.volume.manager [req-cf7cf804-8c47-455a-b725-3c2154b60812 - - - - -] Updating v

[openstack-dev] [cinder][oslo] Serialising exceptions

2014-04-14 Thread git harry
A bug has been submitted, https://bugs.launchpad.net/cinder/+bug/1301249, which 
shows a failure when oslo.messaging tries to serialise an exception with 
jsonutils: a ValueError is raised. I've searched through the code and I'm 
pretty sure there are 50+ cases where this will happen:

cinder/backup/drivers/swift.py raise exception.SwiftConnectionFailed(reason=err)
cinder/backup/drivers/swift.py raise exception.SwiftConnectionFailed(reason=err)
cinder/backup/drivers/swift.py raise exception.SwiftConnectionFailed(reason=err)
cinder/backup/drivers/swift.py raise exception.SwiftConnectionFailed(reason=err)
cinder/backup/drivers/swift.py raise exception.SwiftConnectionFailed(reason=err)
cinder/backup/drivers/swift.py raise exception.SwiftConnectionFailed(reason=err)
cinder/volume/driver.py raise exception.ExportFailure(reason=ex)
cinder/volume/drivers/coraid.py raise exception.CoraidESMNotAvailable(reason=e)
cinder/volume/drivers/netapp/api.py raise NaApiError('Unexpected error', e)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.InvalidInput(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_3par_common.py raise exception.CinderException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_cliq_proxy.py raise exception.SnapshotIsBusy(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.DriverNotInitialized(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.SnapshotIsBusy(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py raise exception.VolumeBackendAPIException(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/san/hp/hp_msa_common.py raise exception.Invalid(ex)
cinder/volume/drivers/vmware/vim.py raise error_util.VimFaultException(fault_list, excep)
cinder/volume/flows/manager/create_volume.py raise exception.MetadataCopyFailure(reason=ex)
cinder/volume/flows/manager/create_volume.py raise exception.MetadataUpdateFailure(reason=ex)
cinder/volume/flows/manager/create_volume.py raise exception.MetadataUpdateFailure(reason=ex)
cinder/volume/flows/manager/create_volume.py raise exception.ImageUnacceptable(ex)

There seem to me to be three ways to fix this:
1. wrap every argument that is an exception in six.text_type(), although this 
doesn't stop the same thing happening again
2. modify CinderException so that if the message is an exception, or args or 
kwargs contain one, it gets converted to a string
3. modify jsonutils.py in oslo-incubator to automatically convert exceptions to 
strings.

Does anyone have any thoughts on this? I lean towards adding a check to 
to_primitive in jsonutils that converts exceptions, but I don't know if there 
is a reason why that isn't already done.
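
To make option 3 concrete, the check I have in mind is roughly the following. 
This is a minimal standalone sketch only, not the real to_primitive, which 
handles many more types, recursion depth, and so on:

import six

def to_primitive(value):
    # Values json can already handle are passed straight through.
    if value is None or isinstance(value,
                                   (six.string_types, six.integer_types,
                                    float, bool)):
        return value
    # The proposed addition: stringify exceptions rather than letting
    # serialisation fail further down the line.
    if isinstance(value, Exception):
        return six.text_type(value)
    # Recurse into containers so nested exceptions are caught too.
    if isinstance(value, dict):
        return dict((k, to_primitive(v)) for k, v in value.items())
    if isinstance(value, (list, tuple)):
        return [to_primitive(v) for v in value]
    return six.text_type(value)

With something like that in place, a kwarg such as reason=err in the cases 
above would serialise to its message string instead of raising.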

Thanks,
git-harry 


[openstack-dev] [cinder] Blueprint cinder-rbd-driver-qos

2014-03-05 Thread git harry
Hi,

https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-driver-qos

I've been looking at this blueprint with a view to contributing to it, assuming 
I can take it, but I am unclear whether it is still valid. It was registered 
around a year ago, and it appears the functionality is essentially already 
supported by using multiple backends.

Looking at the existing drivers that have QoS support, it appears IOPS etc. are 
available for control/customisation. As I understand it, Ceph has no built-in 
QoS-type control, and creating pools on different hardware is as granular as it 
gets. The two don't quite seem comparable to me, so I was hoping to get some 
feedback on whether or not this is still useful/appropriate before attempting 
any work.
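
For concreteness, the multi-backend arrangement I mean would look something 
like this in cinder.conf (the pool and backend names are made up for 
illustration):

[DEFAULT]
enabled_backends=rbd-fast,rbd-slow

[rbd-fast]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes-ssd
volume_backend_name=RBD_FAST

[rbd-slow]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes-hdd
volume_backend_name=RBD_SLOW

with a volume type per backend keyed on volume_backend_name. That gives 
pool-level differentiation, but nothing like the per-volume IOPS limits the 
other drivers expose, hence my question.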

Thanks,
git-harry 