On Wed, Sep 21, 2016 at 12:57 AM, Michał Dulko <michal.du...@intel.com> wrote:
> On 09/20/2016 05:48 PM, John Griffith wrote:
> > On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas
> > <duncan.tho...@gmail.com> wrote:
> > On 20 September 2016 at 16:24, Nikita Konovalov
> > <nkonova...@mirantis.com> wrote:
> > Hi,
> > For the Sahara (and Hadoop workloads in general) use case, the
> > reason we used BDD was the complete absence of any overhead on
> > compute resource utilization.
> > The results show that LVM plus a local target performs pretty
> > close to BDD in synthetic tests. That's a good sign for LVM: it
> > shows that most of the storage virtualization overhead is caused
> > not by the LVM partitions and drivers themselves but by the
> > iSCSI daemons.
> > So I would still like to have the ability to attach partitions
> > locally, bypassing iSCSI, to guarantee two things:
> > * Make sure that LIO processes do not compete for CPU and RAM
> > with VMs running on the same host.
> > * Make sure that CPU-intensive VMs (or whatever else is running
> > nearby) are not blocking the storage.
> > Unless we see the effects via benchmarks, these are completely
> > meaningless requirements. Ivan's initial benchmarks suggest that
> > LVM+LIO is pretty much close enough to BDD even with iSCSI
> > involved. If you're aware of a case where it isn't, the first
> > thing to do is to provide proof via a reproducible benchmark.
> > Otherwise we are likely to proceed, as John suggests, on the
> > assumption that a local target does not provide much benefit.
> > I have a few benchmarks myself that I suspect will find areas
> > where getting rid of iSCSI is a benefit; however, if you have
> > any, you really need to step up and provide the evidence.
> > Relying on vague claims of overhead has now been proven not to
> > be a good idea.
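> > A reproducible run doesn't need to be fancy. A single fio job
> > against the attached volume, once over iSCSI and once attached
> > locally, would be enough to start with; something like the
> > following (assuming the volume shows up as /dev/vdb in the
> > guest, so adjust the device path to taste):
> >
> >   # 4k random read/write, direct I/O, 60 seconds per run
> >   fio --name=randrw --filename=/dev/vdb --ioengine=libaio \
> >       --direct=1 --rw=randrw --bs=4k --iodepth=32 --numjobs=4 \
> >       --runtime=60 --time_based --group_reporting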
> > Honestly, we can have both. I'll work up a bp to resurrect the
> > idea of a "smart" scheduling feature that lets you request that
> > the volume land on the same node as the instance and be used
> > directly; if it does NOT, it will attach a target and be used
> > that way (in other words, you run a stripped-down c-vol service
> > on each compute node).
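> > As a sketch of what the per-compute piece might look like, the
> > c-vol on each node could just run a minimal LVM backend; option
> > names below are from the current LVM driver, so treat this as
> > illustrative rather than final:
> >
> >   [lvm-local]
> >   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> >   volume_group = cinder-volumes
> >   volume_backend_name = lvm-local
> >   iscsi_helper = lioadm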
> Don't we have at least the scheduling problem solved already?
Yes, that is a sizeable chunk of the solution. The remaining pieces
are how to coordinate with Nova (the compute nodes), and figuring out
whether we just use c-vol as-is or come up with some form of
pared-down agent. Just using c-vol as a start might be the best way
to go.
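For reference, the scheduling piece that already exists is the
InstanceLocalityFilter. Roughly (and note it relies on c-vol and the
compute service reporting the same host name, so take this as a
sketch of the moving parts rather than a complete answer), you enable
it in cinder.conf:

  [DEFAULT]
  scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

and then pass a hint at create time:

  cinder create --hint local_to_instance=<instance-uuid> 10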
> > Sahara keeps insisting on being a snowflake with Cinder volumes
> > and the block driver; it's really not necessary. I think we can
> > compromise a little both ways: give you standard Cinder
> > semantics for volumes, but allow direct access to them if/when
> > requested, while keeping those volumes flexible enough that
> > targets *can* be attached, so they meet all of the required
> > functionality and API implementations. This also means that we
> > don't have to keep a *special* driver in Cinder that frankly
> > only works for one specific use case and deployment.
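> > On the attach side I'd expect this to look much like os-brick's
> > existing LOCAL connector; a minimal sketch in Python (the device
> > path is made up, and the exact factory signature may shift):
> >
> >   from os_brick.initiator import connector
> >
> >   # Build a connector for locally visible block devices.
> >   conn = connector.InitiatorConnector.factory('LOCAL', None)
> >   # The driver would return the LV path in the connection info.
> >   device = conn.connect_volume({'device_path': '/dev/vg/volume-foo'})
> >   # device['path'] is what then gets handed to the hypervisor.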
> > I've pointed to this a number of times, but it never seems to
> > resonate... I never learn, though, so I'll try it once again.
> > Note that it was written before the name "brick" was hijacked
> > and came to mean something completely different:
> > https://wiki.openstack.org/wiki/CinderBrick
> > Thanks,
> > John