Hi,

Thanks for the reply. Approach one is actually not what we are looking
for; our requirement is to attach real physical volumes on the compute
node to the VMs, which is how we can achieve the performance we need for
use cases such as big data. This can be done in Cinder using the
BlockDeviceDriver, and it is quite different from the approach one you
mentioned. The only problem now is that we cannot practically ensure the
compute resource is located on the same host as the volume; as Matt
mentioned above, we currently have to arrange a 1:1 AZ mapping between
Cinder and Nova to do this, which is not practical in commercial
deployments.
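
To be concrete, today that arrangement means something like the
following on every compute node (the AZ, device and host names below are
just placeholders, not a recommendation):

    # cinder.conf on the compute node (cinder-volume co-located with
    # nova-compute)
    [DEFAULT]
    volume_driver = cinder.volume.drivers.block_device.BlockDeviceDriver
    available_devices = /dev/sdb,/dev/sdc
    storage_availability_zone = az-compute1

    # plus one Nova host aggregate / AZ per compute node
    openstack aggregate create --zone az-compute1 agg-compute1
    openstack aggregate add host agg-compute1 compute1

Repeating that, and keeping it in sync, for every host is exactly the
operational burden we would like to avoid.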

Thanks.

On Mon, Sep 26, 2016 at 9:48 PM, Erlon Cruz <sombra...@gmail.com> wrote:

>
>
> On Fri, Sep 23, 2016 at 10:19 PM, Zhenyu Zheng <zhengzhenyul...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Thanks all for the information. As for the filter Erlon mentioned
>> (InstanceLocalityFilter), it only solves part of the problem: we can
>> create new volumes for existing instances using this filter and then
>> attach them, but the root volume still cannot be guaranteed to be on
>> the same host as the compute resource, right?
>>
>>
> You have two options for using a disk on the same node as the
> instance (short examples of both follow below).
> 1 - The easiest: just don't use Cinder volumes. When you create an
> instance from an image, the default behavior in Nova is to create the
> root disk on the local host (/var/lib/nova/instances). This has the
> advantage that Nova will cache the image locally and avoids having to
> copy the image over the wire (or to configure image caching in Cinder).
>
> 2 - Use Cinder volumes as the root disk. Nova would somehow have to
> pass the hints to the scheduler so that it can properly use the
> InstanceLocalityFilter. If you place this in Nova and make sure that
> all requests carry the proper hint, then the volumes created will be
> scheduled on the same host as the instance.
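>
> To make both options concrete (image, flavor, size and UUIDs below are
> only placeholders):
>
>     # Option 1: boot from an image; the root disk is created under
>     # /var/lib/nova/instances/<instance-uuid>/ on the scheduled host
>     openstack server create --image ubuntu-16.04 --flavor m1.large vm1
>
>     # Option 2: create a 100G volume on the same host as an existing
>     # instance via the InstanceLocalityFilter scheduler hint, then
>     # attach it
>     cinder create --hint local_to_instance=<instance-uuid> 100
>     nova volume-attach <instance-uuid> <volume-uuid>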
>
> Is there any reason why you can't use the first approach?
>
>
>
>
>> The idea here is that all the volumes use local disks.
>> I was wondering whether we already have such a plan for after the
>> Resource Provider structure has been completed?
>>
>> Thanks
>>
>> On Sat, Sep 24, 2016 at 2:05 AM, Erlon Cruz <sombra...@gmail.com> wrote:
>>
>>> Not sure exactly what you mean, but in Cinder, using the
>>> InstanceLocalityFilter [1], you can schedule a volume to the same
>>> compute node where the instance is located. Is this what you need?
>>>
>>> [1] http://docs.openstack.org/developer/cinder/scheduler-filters.html#instancelocalityfilter
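>>>
>>> To enable it, the filter just needs to be appended to the Cinder
>>> scheduler's filter list (the other entries shown are the defaults):
>>>
>>>     # cinder.conf
>>>     [DEFAULT]
>>>     scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter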
>>>
>>> On Fri, Sep 23, 2016 at 12:19 PM, Jay S. Bryant <
>>> jsbry...@electronicjungle.net> wrote:
>>>
>>>> Kevin,
>>>>
>>>> This is functionality that has been requested in the past but has never
>>>> been implemented.
>>>>
>>>> The best way to proceed would likely be to propose a blueprint/spec
>>>> for this and start working it through that process.
>>>>
>>>> -Jay
>>>>
>>>>
>>>> On 09/23/2016 02:51 AM, Zhenyu Zheng wrote:
>>>>
>>>> Hi Novaers and Cinders:
>>>>
>>>> Quite often, application requirements demand using locally attached
>>>> disks (or direct-attached disks) for OpenStack compute instances. One
>>>> such example is running virtual Hadoop clusters via OpenStack.
>>>>
>>>> We can now achieve this by using the BlockDeviceDriver as the Cinder
>>>> driver and matching AZs between Nova and Cinder, as illustrated in
>>>> [1], but this is not very feasible in large-scale production
>>>> deployments.
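>>>>
>>>> Concretely, with that setup every volume and instance has to be pinned
>>>> to the matching per-host AZ at creation time, roughly like this (names
>>>> are placeholders; disabling cross-AZ attach in nova.conf is optional
>>>> but keeps the two sides consistent):
>>>>
>>>>     openstack volume create --availability-zone az-compute1 --size 100 data1
>>>>     openstack server create --availability-zone az-compute1 \
>>>>         --image ubuntu-16.04 --flavor m1.large vm1
>>>>     openstack server add volume vm1 data1
>>>>
>>>>     # nova.conf
>>>>     [cinder]
>>>>     cross_az_attach = False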
>>>>
>>>> Now that Nova is working on resource providers to build a generic
>>>> resource pool, would it be possible to perform "volume-based
>>>> scheduling", i.e. build instances according to where their volumes
>>>> are? That would make it much easier to build instances like the ones
>>>> mentioned above.
>>>>
>>>> Or do we have any other ways of doing this?
>>>>
>>>> References:
>>>> [1] http://cloudgeekz.com/71/how-to-setup-openstack-to-use-local-disks-for-instances.html
>>>>
>>>> Thanks,
>>>>
>>>> Kevin Zheng
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
