Re: [openstack-dev] [cinder] about block device driver

2018-08-01 Thread John Griffith
On Fri, Jul 27, 2018 at 8:44 AM Matt Riedemann  wrote:

> On 7/16/2018 4:20 AM, Gorka Eguileor wrote:
> > If I remember correctly the driver was deprecated because it had no
> > maintainer or CI.  In Cinder we require our drivers to have both,
> > otherwise we can't guarantee that they actually work or that anyone will
> > fix it if it gets broken.
>
> Would this really require 3rd party CI if it's just local block storage
> on the compute node (in devstack)? We could do that with an upstream CI
> job right? We already have upstream CI jobs for things like rbd and nfs.
> The 3rd party CI requirements generally are for proprietary storage
> backends.
>
> I'm only asking about the CI side of this, the other notes from Sean
> about tweaking the LVM volume backend and feature parity are good
> reasons for removal of the unmaintained driver.
>
> Another option is using the nova + libvirt + lvm image backend for local
> (to the VM) ephemeral disk:
>
>
> https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
>
> --
>
> Thanks,
>
> Matt
>


We've had this conversation multiple times; here are the results from past
conversations and the reasons we deprecated the driver:
1. The driver was not being tested at all (no CI, no upstream tests, etc.)
2. We sent out numerous requests trying to determine whether anybody was using
the driver, and didn't receive much feedback
3. The driver didn't work for an entire release, which indicated that perhaps
it wasn't that valuable
4. The driver is unable to implement a number of the required features for a
Cinder block device
5. Digging deeper into the performance tests, most comparisons were doing
things like the following (a like-for-like benchmark, sketched below, helps
rule these out):
    a. Using the single shared NIC that's used for all of the cluster
communications (i.e. DB, APIs, Rabbit, etc.)
    b. Misconfigured deployments, i.e. using a 1 Gig NIC for iSCSI connections
(also see above)
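
A like-for-like run would mean the exact same fio job inside the guest against
both backends, with iSCSI traffic on a dedicated storage network. A minimal
sketch (the device path, job sizes, and runtime here are only illustrative,
not taken from the original tests):

    # run inside the guest against the attached data disk (e.g. /dev/vdb)
    fio --name=randwrite --filename=/dev/vdb \
        --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --numjobs=4 \
        --size=10g --runtime=60 --time_based --group_reporting

Run the identical job against a locally backed volume and compare IOPS and
latency, not just a single throughput number.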

The decision was that raw block was not, by definition, a "Cinder device", and
given that it wasn't really tested or maintained, it should be removed.  LVM is
actually quite good; we did some pretty extensive testing and even presented it
in a session in Barcelona that showed perf within approximately 10%.  I'm
skeptical any time I see dramatic comparisons showing 1/2 the performance, but
I could be completely wrong.

I would be much more interested in putting effort towards figuring out why you
have such a large perf delta, and seeing if we can address that, as opposed to
trying to bring back and maintain a driver that only half works.

Or as Jay Pipes mentioned, don't use Cinder in your case.

Thanks,
John


Re: [openstack-dev] [cinder] about block device driver

2018-07-27 Thread Matt Riedemann

On 7/16/2018 4:20 AM, Gorka Eguileor wrote:

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.


Would this really require 3rd party CI if it's just local block storage 
on the compute node (in devstack)? We could do that with an upstream CI 
job right? We already have upstream CI jobs for things like rbd and nfs. 
The 3rd party CI requirements generally are for proprietary storage 
backends.


I'm only asking about the CI side of this, the other notes from Sean 
about tweaking the LVM volume backend and feature parity are good 
reasons for removal of the unmaintained driver.


Another option is using the nova + libvirt + lvm image backend for local 
(to the VM) ephemeral disk:


https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/virt/libvirt/imagebackend.py#L653
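
For completeness, selecting that image backend is a nova.conf setting on the
compute node; roughly something like the following (the volume group name is
only an example):

    [libvirt]
    images_type = lvm
    # pre-created LVM volume group on the compute node that local
    # root/ephemeral disks are carved out of
    images_volume_group = nova-local

With that, root and ephemeral disks are logical volumes local to the compute
host and Cinder is not involved at all.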

--

Thanks,

Matt



Re: [openstack-dev] [cinder] about block device driver

2018-07-24 Thread Sean McGinnis
On Tue, Jul 24, 2018 at 06:07:24PM +0800, Rambo wrote:
> Hi,all
> 
> 
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver was
> deprecated and eventually removed with the Queens release.
> 
> 
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>  
> 
> 
>  However, I want to use it out of tree, but I don't know how to do that.
> Can you share a doc with me? Thank you very much!
> 

I don't think we have any community documentation on how to use out of tree
drivers, but it's fairly straightforward.

You can just drop that block_device.py file into the cinder/volume/drivers
directory and configure its use in cinder.conf using the same volume_driver
setting as before.
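
As a rough sketch, the cinder.conf side would look something like the
following (the backend name and device list are just examples, and the
available_devices option comes from the old in-tree driver, so double-check it
against the copy of the driver you carry):

    [DEFAULT]
    enabled_backends = blockdev

    [blockdev]
    volume_backend_name = blockdev
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    # raw local devices the driver is allowed to hand out to instances
    available_devices = /dev/sdb,/dev/sdc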

I'm not sure if anything has changed since Ocata that would require updates to
the driver, but I would expect most base functionality to still work. Just a
word of warning, though, that the driver may need some updates if you find
issues with it.

Sean




[openstack-dev] [cinder] about block device driver

2018-07-24 Thread Rambo
Hi,all


 In the Cinder repository, I noticed that the BlockDeviceDriver driver was
deprecated and eventually removed with the Queens release.


https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
 


 However, I want to use it out of tree, but I don't know how to do that. Can
you share a doc with me? Thank you very much!

Best Regards
Rambo


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
yes
 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 05:00 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver

 
Do you use the volumes on the same nodes where instances are located?

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 11:52 AM, Rambo  wrote:
Yes, my Cinder driver is LVM+LIO. I have uploaded the test results in the
attachment. Can you show me your test results? Thank you!



 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 04:09 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver



 
Rambo,

Did you try the LVM+LIO target driver? It shows pretty good performance
compared to BlockDeviceDriver.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
Oh, the instances using Cinder perform intense I/O, so iSCSI or LVM is not a
viable option - I benchmarked them several times, with unsatisfactory results.
Sometimes the IOPS is twice as bad. Could you show me your test data? Thank
you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> > solution for data processing scenarios. Will the community agree to merge
> > the BlockDeviceDriver into the Cinder repository again if our company
> > provides the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Re-adding the block device driver is not likely an option.



Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Ivan Kolodyazhny
Do you use the volumes on the same nodes where instances are located?

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 17, 2018 at 11:52 AM, Rambo  wrote:

> Yes, my Cinder driver is LVM+LIO. I have uploaded the test results in the
> attachment. Can you show me your test results? Thank you!
>
>
>
> -- Original --
> *From: * "Ivan Kolodyazhny";
> *Date: * Tue, Jul 17, 2018 04:09 PM
> *To: * "OpenStack Developmen";
> *Subject: * Re: [openstack-dev] [cinder] about block device driver
>
> Rambo,
>
> Did you try the LVM+LIO target driver? It shows pretty good performance
> compared to BlockDeviceDriver.
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
>
>> Oh, the instances using Cinder perform intense I/O, so iSCSI or LVM is not
>> a viable option - I benchmarked them several times, with unsatisfactory
>> results. Sometimes the IOPS is twice as bad. Could you show me your test
>> data? Thank you!
>>
>>
>>
>> Cheers,
>> Rambo
>>
>>
>> ---------- Original ------
>> *From:* "Sean McGinnis";
>> *Date:* Monday, July 16, 2018, 9:32 PM
>> *To:* "OpenStack Developmen";
>> *Subject:* Re: [openstack-dev] [cinder] about block device driver
>>
>> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
>> > On 16/07, Rambo wrote:
>> > > Well, in my opinion, the BlockDeviceDriver is more suitable than any
>> other solution for data processing scenarios. Will the community agree
>> to merge the BlockDeviceDriver into the Cinder repository again if our
>> company provides the maintainer and CI?
>> > >
>> >
>> > Hi,
>> >
>> > I'm sure the community will be happy to merge the driver back into the
>> > repository.
>> >
>>
>> The other reason for its removal was its inability to meet the minimum
>> feature
>> set required for Cinder drivers along with benchmarks showing the LVM and
>> iSCSI
>> driver could be tweaked to have similar or better performance.
>>
>> The other option would be to not use Cinder volumes so you just use local
>> storage on your compute nodes.
>>
>> Re-adding the block device driver is not likely an option.
>>
>
>




Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
Yes, my Cinder driver is LVM+LIO. I have uploaded the test results in the
attachment. Can you show me your test results? Thank you!



 
 
-- Original --
From:  "Ivan Kolodyazhny";
Date:  Tue, Jul 17, 2018 04:09 PM
To:  "OpenStack Developmen"; 

Subject:  Re: [openstack-dev] [cinder] about block device driver

 
Rambo,

Did you try the LVM+LIO target driver? It shows pretty good performance
compared to BlockDeviceDriver.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/



 
On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:
Oh, the instances using Cinder perform intense I/O, so iSCSI or LVM is not a
viable option - I benchmarked them several times, with unsatisfactory results.
Sometimes the IOPS is twice as bad. Could you show me your test data? Thank
you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> > solution for data processing scenarios. Will the community agree to merge
> > the BlockDeviceDriver into the Cinder repository again if our company
> > provides the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Re-adding the block device driver is not likely an option.



Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Ivan Kolodyazhny
Rambo,

Did you try the LVM+LIO target driver? It shows pretty good performance
compared to BlockDeviceDriver.
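
If it helps, the relevant cinder.conf bits look roughly like this (the backend
name and volume group are only examples; older releases use iscsi_helper
instead of target_helper):

    [lvm-lio]
    volume_backend_name = lvm-lio
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    # use the LIO kernel target instead of tgtd
    target_helper = lioadm
    target_protocol = iscsi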

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Jul 17, 2018 at 10:24 AM, Rambo  wrote:

> Oh, the instances using Cinder perform intense I/O, so iSCSI or LVM is not a
> viable option - I benchmarked them several times, with unsatisfactory
> results. Sometimes the IOPS is twice as bad. Could you show me your test
> data? Thank you!
>
>
>
> Cheers,
> Rambo
>
>
> -- Original --
> *From:* "Sean McGinnis";
> *Date:* Monday, July 16, 2018, 9:32 PM
> *To:* "OpenStack Developmen";
> *Subject:* Re: [openstack-dev] [cinder] about block device driver
>
> On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> > On 16/07, Rambo wrote:
> > > Well, in my opinion, the BlockDeviceDriver is more suitable than any
> other solution for data processing scenarios. Will the community agree
> to merge the BlockDeviceDriver into the Cinder repository again if our
> company provides the maintainer and CI?
> > >
> >
> > Hi,
> >
> > I'm sure the community will be happy to merge the driver back into the
> > repository.
> >
>
> The other reason for its removal was its inability to meet the minimum
> feature
> set required for Cinder drivers along with benchmarks showing the LVM and
> iSCSI
> driver could be tweaked to have similar or better performance.
>
> The other option would be to not use Cinder volumes so you just use local
> storage on your compute nodes.
>
> Re-adding the block device driver is not likely an option.
>
>
>


Re: [openstack-dev] [cinder] about block device driver

2018-07-17 Thread Rambo
Oh, the instances using Cinder perform intense I/O, so iSCSI or LVM is not a
viable option - I benchmarked them several times, with unsatisfactory results.
Sometimes the IOPS is twice as bad. Could you show me your test data? Thank
you!





Cheers,
Rambo
 
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well,in my opinion,the BlockDeviceDriver is more suitable than any other 
> > solution for data processing scenarios.Does the community will agree to 
> > merge the BlockDeviceDriver to the Cinder repository again if our company 
> > hold the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Re-adding the block device driver is not likely an option.



Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Rambo
But I want to create a volume-backed server for data processing scenarios;
maybe the BlockDeviceDriver is more suitable.
 
-- Original --
From: "Sean McGinnis"; 
Date: Monday, July 16, 2018, 9:32 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> > solution for data processing scenarios. Will the community agree to merge
> > the BlockDeviceDriver into the Cinder repository again if our company
> > provides the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Re-adding the block device driver is not likely an option.



Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Jay Pipes

On 07/16/2018 10:15 AM, arkady.kanev...@dell.com wrote:

Is this for ephemeral storage handling?


For both ephemeral as well as root disk.

In other words, just act like Cinder isn't there and attach a big local 
root disk to the instance.
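
As a rough example, that just means sizing the flavor's local disks instead of
attaching volumes (the names and sizes below are illustrative):

    openstack flavor create data.processing \
        --vcpus 16 --ram 65536 \
        --disk 100 --ephemeral 1000

The root and ephemeral disks then live on the compute node's local storage,
with no Cinder volume in the data path.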


Best,
-jay


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, July 16, 2018 8:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] about block device driver

On 07/16/2018 09:32 AM, Sean McGinnis wrote:

The other option would be to not use Cinder volumes so you just use
local storage on your compute nodes.


^^ yes, this.

-jay






Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Arkady.Kanevsky
Is this for ephemeral storage handling?

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Monday, July 16, 2018 8:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [cinder] about block device driver

On 07/16/2018 09:32 AM, Sean McGinnis wrote:
> The other option would be to not use Cinder volumes so you just use 
> local storage on your compute nodes.

^^ yes, this.

-jay



Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Jay Pipes

On 07/16/2018 09:32 AM, Sean McGinnis wrote:

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.


^^ yes, this.

-jay



Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Sean McGinnis
On Mon, Jul 16, 2018 at 01:32:26PM +0200, Gorka Eguileor wrote:
> On 16/07, Rambo wrote:
> > Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> > solution for data processing scenarios. Will the community agree to merge
> > the BlockDeviceDriver into the Cinder repository again if our company
> > provides the maintainer and CI?
> >
> 
> Hi,
> 
> I'm sure the community will be happy to merge the driver back into the
> repository.
> 

The other reason for its removal was its inability to meet the minimum feature
set required for Cinder drivers along with benchmarks showing the LVM and iSCSI
driver could be tweaked to have similar or better performance.

The other option would be to not use Cinder volumes so you just use local
storage on your compute nodes.

Re-adding the block device driver is not likely an option.



Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Gorka Eguileor
On 16/07, Rambo wrote:
> Well, in my opinion, the BlockDeviceDriver is more suitable than any other
> solution for data processing scenarios. Will the community agree to merge
> the BlockDeviceDriver into the Cinder repository again if our company
> provides the maintainer and CI?
>

Hi,

I'm sure the community will be happy to merge the driver back into the
repository.

Still, I would recommend looking at the "How To Contribute a driver to
Cinder" guide [1] and the "Third Party CI Requirement Policy"
documentation [2], then adding this topic to Wednesday's meeting agenda [3]
and attending the meeting to ensure that everybody is on board with it.

Best regards,
Gorka.


[1]: https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
[2]: https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[3]: https://etherpad.openstack.org/p/cinder-rocky-meeting-agendas

>
> -- Original --
> From: "Gorka Eguileor";
> Date: Monday, July 16, 2018, 5:20 PM
> To: "OpenStack Developmen";
> Subject: Re: [openstack-dev] [cinder] about block device driver
>
>
> On 16/07, Rambo wrote:
> > Hi,all
> >
> >
> >  In the Cinder repository, I noticed that the BlockDeviceDriver driver
> > was deprecated and eventually removed with the Queens release.
> >
> >
> > https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
> >
> >
> > In my use case, the instances using Cinder perform intense I/O, so iSCSI
> > or LVM is not a viable option - I have benchmarked them several times
> > since Juno, with unsatisfactory results. For data processing scenarios it
> > is always better to use local storage than any SAN/NAS solution.
> >
> >
> > So I feel a great need to know why it was deprecated. Is there anything
> > better to replace it? What do you suggest using once BlockDeviceDriver is
> > removed? Can you tell me about this? Thank you very much!
> >
> > Best Regards
> > Rambo
>
> Hi,
>
> If I remember correctly the driver was deprecated because it had no
> maintainer or CI.  In Cinder we require our drivers to have both,
> otherwise we can't guarantee that they actually work or that anyone will
> fix it if it gets broken.
>
> Cheers,
> Gorka.
>




Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Rambo
Well, in my opinion, the BlockDeviceDriver is more suitable than any other
solution for data processing scenarios. Will the community agree to merge the
BlockDeviceDriver into the Cinder repository again if our company provides the
maintainer and CI?
 
 
-- Original --
From: "Gorka Eguileor"; 
Date: Monday, July 16, 2018, 5:20 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [cinder] about block device driver

 
On 16/07, Rambo wrote:
> Hi,all
>
>
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver was
> deprecated and eventually removed with the Queens release.
>
>
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>
>
> In my use case, the instances using Cinder perform intense I/O, so iSCSI or
> LVM is not a viable option - I have benchmarked them several times since
> Juno, with unsatisfactory results. For data processing scenarios it is always
> better to use local storage than any SAN/NAS solution.
>
>
> So I feel a great need to know why it was deprecated. Is there anything
> better to replace it? What do you suggest using once BlockDeviceDriver is
> removed? Can you tell me about this? Thank you very much!
>
> Best Regards
> Rambo

Hi,

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.

Cheers,
Gorka.



Re: [openstack-dev] [cinder] about block device driver

2018-07-16 Thread Gorka Eguileor
On 16/07, Rambo wrote:
> Hi,all
>
>
>  In the Cinder repository, I noticed that the BlockDeviceDriver driver was
> deprecated and eventually removed with the Queens release.
>
>
> https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
>
>
> In my use case, the instances using Cinder perform intense I/O, so iSCSI or
> LVM is not a viable option - I have benchmarked them several times since
> Juno, with unsatisfactory results. For data processing scenarios it is always
> better to use local storage than any SAN/NAS solution.
>
>
> So I feel a great need to know why it was deprecated. Is there anything
> better to replace it? What do you suggest using once BlockDeviceDriver is
> removed? Can you tell me about this? Thank you very much!
>
> Best Regards
> Rambo

Hi,

If I remember correctly the driver was deprecated because it had no
maintainer or CI.  In Cinder we require our drivers to have both,
otherwise we can't guarantee that they actually work or that anyone will
fix it if it gets broken.

Cheers,
Gorka.



[openstack-dev] [cinder] about block device driver

2018-07-16 Thread Rambo
Hi,all


 In the Cinder repository, I noticed that the BlockDeviceDriver driver was
deprecated and eventually removed with the Queens release.


https://github.com/openstack/cinder/blob/stable/ocata/cinder/volume/drivers/block_device.py
 


In my use case, the instances using Cinder perform intense I/O, so iSCSI or
LVM is not a viable option - I have benchmarked them several times since Juno,
with unsatisfactory results. For data processing scenarios it is always better
to use local storage than any SAN/NAS solution.


So I feel a great need to know why it was deprecated. Is there anything better
to replace it? What do you suggest using once BlockDeviceDriver is removed?
Can you tell me about this? Thank you very much!

Best Regards
Rambo