On 06/23/2016 Daniel Berrange wrote (lost attribution in thread):
> Our long term goal is that 100% of all network storage will be connected
> to directly by QEMU. We already have the ability to partially do this with
> iSCSI, but it is lacking support for multipath. As & when that gap is
Daniel, thanks. I am looking for a sense of direction.
Clearly there is some range of opinion, as Walter indicates. :)
I am not sure you can get to 100% direct connection to QEMU. When there is
dedicated hardware to do off-board processing of the connection to storage,
you might(?) be stuck routing
Comments inline.
On Thu, Jun 16, 2016 at 10:13 AM, Matt Riedemann
<mrie...@linux.vnet.ibm.com> wrote:
> On 6/16/2016 6:12 AM, Preston L. Bannister wrote:
I am hoping support for instance quiesce in the Nova API makes it into
OpenStack. To my understanding, this is existing function in Nova, just
not yet exposed in the public API. (I believe Cinder uses this via a
private Nova API.)
Much of the discussion is around disaster recovery (DR) and NFV -
QEMU has the ability to directly connect to iSCSI volumes. Running the
iSCSI connections through the nova-compute host *seems* somewhat
inefficient.
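A direct QEMU connection of this sort is usually expressed in the libvirt domain XML as a network disk. A minimal sketch, in which the target IQN, LUN path, and portal address are all made up for illustration:

```xml
<!-- Sketch only: IQN, LUN, and portal address are hypothetical. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- QEMU's built-in iSCSI initiator opens the target itself,
       instead of the host attaching it via open-iscsi first. -->
  <source protocol='iscsi' name='iqn.2016-06.com.example:volume-1/0'>
    <host name='192.0.2.10' port='3260'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```

With this shape the nova-compute host never sees a block device for the volume; the tradeoff is that host-side multipath cannot be used, which is the gap noted above.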
There is a spec/blueprint and implementation that landed in Kilo:
>> ack clouds on Ubuntu might be a much better bet.
>>
>> Are those claimed improvements in QEMU and the Linux kernel going to make a
>> difference in my measured result? I do not know. Still reading, building
>> tests, and collecting measures...
n <chris.frie...@windriver.com> wrote:
> On 03/03/2016 01:13 PM, Preston L. Bannister wrote:
>> Scanning the same volume from within the instance still gets the same
>> ~450MB/s that I saw before.
> Hmmm, with iSCSI in between that could be
Note that my end goal is to benchmark an application that runs in an
instance and does primarily large sequential full-volume reads.
On this path I ran into unexpectedly poor performance within the instance.
If this is a common characteristic of OpenStack, then this becomes a
question of concern
The in-instance "dd" CPU use is ~12%. (Not very interesting.)
Not sure from where the (apparent) latency comes. The host iSCSI target?
The QEMU iSCSI initiator? Onwards...
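For reference, the kind of in-instance measurement being discussed can be sketched as a small shell helper; the /dev/vdb device name mentioned in the comment is an assumption for a typical attached volume:

```shell
# bench_read: time a sequential read of a device or file and report
# dd's throughput summary line. Inside a guest, the target would
# typically be the attached volume, e.g. /dev/vdb (an assumed name).
bench_read() {
    target="$1"
    # bs=1M approximates large sequential I/O; output goes to
    # /dev/null so only the read path is measured.
    dd if="$target" of=/dev/null bs=1M 2>&1 | tail -n 1
}
```

On a second pass the guest page cache will inflate the number; adding iflag=direct (where the device supports it) measures the uncached path instead.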
On Tue, Mar 1, 2016 at 5:13 PM, Rick Jones <rick.jon...@hpe.com> wrote:
> On 03/01
I need to benchmark volume-read performance of an application running
in an instance, assuming extremely fast storage.
To simulate fast storage, I have an all-in-one (AIO) install of OpenStack,
with local flash disks. Cinder LVM volumes are striped across three flash
drives (what I have in the present
On Wed, Feb 3, 2016 at 6:32 AM, Sam Yaple wrote:
> [snip]
>
> Full backups are costly in terms of IO, storage, bandwidth and time. A full
> backup being required in a backup plan is a big problem for backups when we
> talk about volumes that are terabytes large.
As an
On a side note, of the folk with interest in this thread, how many are
going to the Austin OpenStack conference? Would you be interested in
presenting as a panel?
I submitted for a presentation on "State of the Art for in-Cloud backup of
high-value Applications". The notion is to give context for the
To be clear, I work for EMC, and we are building a backup product for
OpenStack (which at this point is very far along). The primary gap is a
good means to efficiently extract changed-block information from OpenStack.
About a year ago I worked through the entire Nova/Cinder/libvirt/QEMU
stack, to
reading the QEMU mailing list and
source code to figure out which bits were real. :)
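For what it is worth, QEMU itself grew a changed-block mechanism in 2.4 (dirty bitmaps plus incremental drive-backup), though the OpenStack layers above did not expose it at the time. The QMP exchange looks roughly like this; the node and bitmap names and the target path are hypothetical:

```json
{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive-virtio-disk0", "name": "bitmap0" } }

{ "execute": "drive-backup",
  "arguments": { "device": "drive-virtio-disk0",
                 "sync": "incremental",
                 "bitmap": "bitmap0",
                 "target": "/backups/incr-0.qcow2",
                 "format": "qcow2" } }
```

The bitmap accumulates writes since the last backup, so the second command copies only changed blocks; the missing piece was plumbing this up through libvirt and Nova/Cinder.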
On Tue, Feb 2, 2016 at 4:04 AM, Preston L. Bannister <pres...@bannister.us>
wrote:
> To be clear, I work for EMC, and we are building a backup product for
> OpenStack (which at this point is ve
> at's the best for a single company. If this vision is not shared, then,
> unfortunately, good luck competing, while if the vision is shared... let's
> do together unprecedented things.
>
> Many thanks,
> Fausto
>
>
> On Sun, Jan 31, 2016 at 1:01 AM, Preston L. Bannister
Seems to me there are three threads here.
The Freezer folk were given a task, and did the best possible to support
backup given what OpenStack allowed. To date, OpenStack is simply not very
good at supporting backup as a service. (Apologies to the Freezer folk if I
misinterpreted.)
The patches
In the implementation of an instance backup service for OpenStack, on
restore I need to (re)create the restored instance in the original tenant.
Restores can be fired off by an administrator (not the original user), so
at instance-create time I have two main choices:
1. Create the instance as
John,
As a (new) OpenStack developer, I just discovered the
CINDER_SECURE_DELETE option.
As an *implicit* default, I entirely approve. Production OpenStack
installations should *absolutely* ensure there is no information leakage
from one instance to the next.
As an *explicit* default, I am not
On Thu, Oct 23, 2014 at 7:51 AM, John Griffith <john.griffi...@gmail.com>
wrote:
On Thu, Oct 23, 2014 at 8:50 AM, John Griffith <john.griffi...@gmail.com>
wrote:
On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister
<pres...@bannister.us> wrote:
John,
As a (new) OpenStack developer, I just
On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister
<pres...@bannister.us> wrote:
Yes, that is pretty much the key.
Does LVM let you read physical blocks that have never been written? Or
zero out virgin segments on read? If not, then a dd of zeroes is a way of
doing the right thing (if *very
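The dd-of-zeroes wipe discussed here can be sketched as follows; the Cinder LVM volume path in the comment is illustrative, and the size must be passed explicitly so a file target is not grown without bound:

```shell
# wipe_zero: overwrite the first SIZE_MB MiB of a target with zeros,
# in the spirit of Cinder's secure-delete of LVM volumes. A real
# target would look like /dev/cinder-volumes/volume-<uuid>.
wipe_zero() {
    target="$1"
    size_mb="$2"
    # conv=notrunc keeps a file target at its original length;
    # fsync ensures the zeros actually reach the device.
    dd if=/dev/zero of="$target" bs=1M count="$size_mb" \
       conv=notrunc,fsync 2>/dev/null
}
```

This is also why the wipe is so slow: every logical block is physically written, whether or not it was ever used.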
As a side-note, the new AWS flavors seem to indicate that the Amazon
infrastructure is moving to all EBS volumes (and all-flash, possibly), both
ephemeral and not. This makes sense, as fewer code paths and less
interoperability complexity is a good thing.
That the same balance of concerns should
provide some of the same benefits.
On 10/21/2014 02:54 PM, Preston L. Bannister wrote:
OK, I am fairly new here (to OpenStack). Maybe I am missing something. Or
not.
I have a DevStack, running in a VM (VirtualBox), backed by a single flash
drive (on my current-generation MacBook). It could be I have something off
in my setup.
Testing nova backup - first the existing implementation,
rather than copying full
volumes/images, disabling wipe on delete, etc.
Thanks,
Avishay
On Sun, Oct 19, 2014 at 1:41 PM, Preston L. Bannister
<pres...@bannister.us> wrote:
in here. Some comments inline, but tl;dr
my answer is yes, we need to be doing a much better job thinking about how
I/O-intensive operations affect other things running on providers of
compute and block-storage resources.
On 10/19/2014 06:41 AM, Preston L. Bannister wrote:
OK, I am fairly new here
Too-short token expiration times are one of my concerns, in my current
exercise.
I am working on a replacement for Nova backup: basically creating backup
jobs, writing the jobs into a queue, with a background worker that reads
jobs from the queue. Tokens could expire while the jobs are in the queue
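The queue-and-worker shape described above can be sketched with the standard library. The point is that credentials are acquired when the job runs, not when it is enqueued; fresh_token here is a stand-in for real re-authentication (e.g. via a Keystone trust):

```python
import queue
import threading
import time

jobs = queue.Queue()

def fresh_token():
    # Placeholder for re-authentication; a real worker would obtain a
    # new Keystone token here rather than reuse one from enqueue time.
    return f"token-{time.time()}"

def worker(results):
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut down the worker
            break
        token = fresh_token()    # credentials acquired at run time,
                                 # so queue latency cannot expire them
        results.append((job, token))
        jobs.task_done()

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
for name in ["backup-vol-1", "backup-vol-2"]:
    jobs.put(name)
jobs.put(None)
t.join()
```

With this structure the only credential that must outlive the queue wait is whatever lets the worker re-authenticate, which is exactly the problem trusts were designed for.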
Sorry, I am jumping into this without enough context, but ...
On Wed, Sep 24, 2014 at 8:37 PM, Qiming Teng <teng...@linux.vnet.ibm.com>
wrote:
mysql> select count(*) from metadata_text;
+----------+
| count(*) |
+----------+
| 25249913 |
+----------+
1 row in set (3.83 sec)
There are
This is great. On the point of:
If an Incomplete bug has no response after 30 days it's fair game to
close (Invalid, Opinion, Won't Fix).
How about Stale ... since that is where it is. (How hard is it to add a state?)
On Fri, Sep 19, 2014 at 6:13 AM, Sean Dague <s...@dague.net> wrote:
I've spent
I tend to say 2) is the best option. There is much open-source and
commercial backup software, for both VMs and volumes.
If we do option 1), it would mean implementing something similar to
VMware's method, and it would make Nova really heavy.
On Sun, Aug 31, 2014 at 4:04 AM, Preston L. Bannister
into the real backup solution.
On Sat, Aug 30, 2014 at 1:14 PM, Preston L. Bannister
pres...@bannister.us wrote:
The current backup APIs in OpenStack do not really make sense (and
apparently do not work ... which perhaps says something about usage and
usability). So in that sense, they could be removed
Looking to put a proper implementation of instance backup into OpenStack.
Started by writing a simple set of baseline tests and running against the
stable/icehouse branch. They failed!
https://github.com/dreadedhill-work/openstack-backup-scripts
Scripts and configuration are in the above. Simple
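For context, the server action those baseline tests exercise is Nova's createBackup. A sketch of just the request-body construction; the endpoint, auth handling, and exact field validation are assumptions against the Nova v2 API of the time:

```python
import json

def backup_request_body(name, backup_type="daily", rotation=3):
    """Build the JSON body for POST /servers/{server_id}/action.

    backup_type is "daily" or "weekly"; rotation is how many backups
    of that type Nova retains before deleting the oldest.
    """
    return json.dumps({
        "createBackup": {
            "name": name,
            "backup_type": backup_type,
            "rotation": str(rotation),
        }
    })
```

Each call produces a new full image snapshot; there is no incremental option in this API, which is much of the complaint in this thread.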
needs an API.
On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes <jaypi...@gmail.com> wrote:
On 08/29/2014 02:48 AM, Preston L. Bannister wrote:
Looking to put a proper implementation of instance backup into
OpenStack. Started by writing a simple set of baseline tests and running
against the stable
Did this ever go anywhere?
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024315.html
Looking at what is needed to get backup working in OpenStack, and this
seems the most recent reference.
___
OpenStack-dev mailing list