Hello-
I've recently upgraded from OEL 6.4 to OEL 6.8. Infiniband Hardware
installed on the server prevents me from upgrading the OS version any
higher.
Kernel Version:
Oracle Linux Server Red Hat Compatible Kernel (2.6.32-642.el6.x86_64)
Since the upgrade, there have been
On Wed, Mar 07, 2018 at 04:42:00PM -0600, Steve Keller Savvco wrote:
> Hello-
>
> I've recently upgraded from OEL 6.4 to OEL 6.8. Infiniband Hardware
> installed on the server prevents me from upgrading the OS version any
> higher.
[...]
> Running "libguestfs-test-tool" with Kernel
PROBLEMS:
- Target cluster defaults to "Default".
- Using Insecure = True, is that bad?
- -of qcow2 does not work, with multiple problems
- Need to attach disks to VMs somehow
This adds a new output mode to virt-v2v. virt-v2v -o rhv-upload
streams images directly to an oVirt or RHV >= 4 Data
Simple code motion.
---
v2v/input_libvirt_vddk.ml | 10 +-
v2v/utils.ml | 9 +
v2v/utils.mli | 4
3 files changed, 14 insertions(+), 9 deletions(-)
diff --git a/v2v/input_libvirt_vddk.ml b/v2v/input_libvirt_vddk.ml
index 36efdb260..1e1f5b6bd 100644
Mainly minor fixes and code cleanups over the v4 patch.
There are still several problems with this patch, but it is in a
reviewable state, especially the Python code.
Rich.
___
Libguestfs mailing list
Libguestfs@redhat.com
Currently unused, in a future commit this will allow you to pass in a
password to be used when connecting to the target hypervisor.
---
v2v/cmdline.ml | 18 ++
v2v/test-v2v-docs.sh | 2 +-
v2v/virt-v2v.pod | 7 +++
3 files changed, 26 insertions(+), 1 deletion(-)
Without this extra element, oVirt will crash with a Java
NullPointerException (see https://bugzilla.redhat.com/1550123).
Fixes commit dac5fc53acdd1e51be2957c67e1e063e2132e680.
---
v2v/create_ovf.ml | 6 ++
1 file changed, 6 insertions(+)
diff --git a/v2v/create_ovf.ml b/v2v/create_ovf.ml
Here you go, from the newest version:
*IMPORTANT NOTICE
*
* When reporting bugs, include the COMPLETE, UNEDITED
* output below in your bug report.
*
On Wed, Mar 7, 2018 at 12:18 AM Richard W.M. Jones
wrote:
> Previous versions:
> v3: https://www.redhat.com/archives/libguestfs/2018-March/msg0.html
> v2:
> https://www.redhat.com/archives/libguestfs/2018-February/msg00177.html
> v1:
>
On Thu, Mar 08, 2018 at 12:13:01PM +, Nir Soffer wrote:
> On Wed, Mar 7, 2018 at 12:18 AM Richard W.M. Jones
> wrote:
>
> > Previous versions:
> > v3: https://www.redhat.com/archives/libguestfs/2018-March/msg0.html
> > v2:
> >
On Thu, Mar 08, 2018 at 02:31:48PM +, Nir Soffer wrote:
> When you create a disk using qcow2 format, oVirt creates an empty qcow2 image
> with the specified virtual size, so it should work.
Ah, I didn't know that.  That nicely solves #2.
Rich.
--
Richard Jones, Virtualization Group, Red Hat
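To make the exchange above concrete: an "empty qcow2 image with the specified virtual size" is just a qcow2 header whose big-endian size field (at byte offset 24) records the virtual size, with no data clusters yet. A minimal Python sketch of building and reading back that field (illustrative only — the helper names are invented, and a real image also needs L1/refcount tables):

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"

def make_minimal_qcow2_header(virtual_size, cluster_bits=16, version=3):
    """Build the first 32 bytes of a qcow2 header (all fields big-endian):
    magic, version, backing_file_offset, backing_file_size, cluster_bits,
    virtual size."""
    return struct.pack(">4sIQIIQ",
                       QCOW2_MAGIC,   # magic "QFI\xfb"
                       version,       # qcow2 version
                       0,             # backing_file_offset (no backing file)
                       0,             # backing_file_size
                       cluster_bits,  # cluster size = 2**cluster_bits
                       virtual_size)  # virtual size in bytes, offset 24

def read_virtual_size(header):
    """Parse the virtual size field out of a qcow2 header."""
    magic, = struct.unpack_from(">4s", header, 0)
    if magic != QCOW2_MAGIC:
        raise ValueError("not a qcow2 image")
    size, = struct.unpack_from(">Q", header, 24)
    return size

hdr = make_minimal_qcow2_header(10 * 1024**3)   # 10 GiB virtual size
print(read_virtual_size(hdr))                   # → 10737418240
```

This is why streaming qcow2 data into a pre-created disk can work: the virtual size is fixed up front in the header, independent of how much data has been uploaded.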
On Thu, 8 Mar 2018 at 14:29, Richard W.M. Jones <
rjo...@redhat.com> wrote:
> On Thu, Mar 08, 2018 at 12:13:01PM +, Nir Soffer wrote:
> > On Wed, Mar 7, 2018 at 12:18 AM Richard W.M. Jones
> > wrote:
> >
> > > Previous versions:
> > > v3:
>
Thanks for the update. I'll give that a try and let you know.
-Original Message-
From: Richard W.M. Jones [mailto:rjo...@redhat.com]
Sent: Thursday, March 8, 2018 3:14 PM
To: Steve Keller Savvco
Cc: libguestfs@redhat.com
Subject: Re: [Libguestfs] febootstrap:
If the target server supports FUA, then we should pass the client
flag through rather than emulating things with a less-efficient
flush.
Signed-off-by: Eric Blake
---
v3: rebase to API changes
---
plugins/nbd/nbd.c | 42 +-
1 file
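The decision the commit message describes — pass the client's FUA flag through when the target server supports it, otherwise fall back to a write followed by a full flush — can be sketched as follows. This is an illustrative Python model, not the plugin's actual C code; the names `FLAG_FUA` and `pwrite` stand in for the real API:

```python
FLAG_FUA = 1 << 0  # illustrative stand-in for the NBD FUA request flag

def pwrite(server_supports_fua, flags, log):
    """Model of the nbd plugin's choice: forward FUA when the target
    server supports it; otherwise emulate persistence with a flush,
    which is less efficient because it waits for ALL outstanding writes."""
    if flags & FLAG_FUA:
        if server_supports_fua:
            log.append("WRITE+FUA")   # single request, only this write persisted
        else:
            log.append("WRITE")
            log.append("FLUSH")       # full flush of everything outstanding
    else:
        log.append("WRITE")

log = []
pwrite(True, FLAG_FUA, log)    # FUA passed through
pwrite(False, FLAG_FUA, log)   # FUA emulated via flush
print(log)                     # → ['WRITE+FUA', 'WRITE', 'FLUSH']
```

The efficiency argument is visible in the log: the pass-through path issues one request, while the emulation path turns every FUA write into a write plus a whole-device flush.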
After more than a month since v2 [1], I've finally got my FUA
support series polished. This is all of my outstanding patches,
even though some of them were originally posted in separate
threads from the original FUA post [2], [3]
[1]
We already added a .can_fua callback internally; it is now time to
expose it to the filters, and to update the particular filters that
can perform more efficient FUA when subdividing requests.
Note that this is a change to the filter API; but given that we've
already bumped the API in a previous
The NBD protocol supports Forced Unit Access (FUA) as a more efficient
way to wait for just one write to land in persistent storage, rather
than all outstanding writes at the time of a flush; modeled after
the kernel's block I/O flag of the same name. While we can emulate
the proper semantics
While our plugin code always advertises WRITE_ZEROES on writable
connections (because we can emulate .zero by calling .pwrite),
and FUA support when .flush exists (because we can emulate the
persistence by calling .flush), it is conceivable that a filter
may want to explicitly avoid advertising
The upstream NBD protocol recently clarified that servers can
advertise block size limitations to clients that ask with
NBD_OPT_GO (although we're still a ways off from implementing
that in nbdkit); and that in the absence of that, then clients
should agree on limits using out-of-band information
There is no need to have all of the .c files under src/ include
nbdkit-plugin.h, since internal.h already handles this. Also,
this will make it easier when an upcoming patch updates the
public header to respond to a user-defined selection between
API levels; we want all of the files in src/ to
It's time to expose the full semantics already in use by the backend
to our filters, by exposing flags and an explicit parameter for
tracking the error value to return to the client. This is an
incompatible API change, and all existing filters are updated to
match the new semantics.
Of note:
Sometimes, it's nice to see what a difference it makes in timing
based on whether Forced Unit Access or a full flush is used when
waiting for particular data to land in persistent storage. Add a
new filter that makes it trivial to switch between different FUA
policies (none: the client must use
Previously, we let a plugin set an error in either thread-local
storage (nbdkit_set_error()) or errno, then connections.c would
decode which error to use. But with filters in the mix, it is
very difficult for a filter to know what error was set by the
plugin (particularly since nbdkit_set_error()
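The thread-local mechanism this message describes can be modeled in a few lines of Python. This is a conceptual sketch, not nbdkit's implementation: `set_error`/`get_error` are invented names standing in for `nbdkit_set_error()` and the readback in connections.c, and the point is that each connection thread sees only the error it set itself:

```python
import threading
import errno

_error = threading.local()   # conceptual stand-in for nbdkit's thread-local slot

def set_error(errnum):
    """Model of nbdkit_set_error(): stash the errno to return to the client."""
    _error.value = errnum

def get_error(default):
    """Model of the readback: use the stashed error, or a default (e.g. EIO)."""
    return getattr(_error, "value", default)

results = {}

def worker(name, err):
    set_error(err)                        # each thread records its own error...
    results[name] = get_error(errno.EIO)  # ...and reads back only its own

t1 = threading.Thread(target=worker, args=("a", errno.ENOSPC))
t2 = threading.Thread(target=worker, args=("b", errno.EROFS))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)
```

Because the slot is per-thread, a filter running in the same request thread as the plugin can observe or override the error without racing other connections — which is exactly what is hard when the error might be in either of two places (TLS or errno).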
Recent patches clarified documentation to point out that within
the life of a single connection, the .can_FOO helpers should
return consistent results, and that callers may cache those
results. But at least in the case of .can_fua, we aren't really
caching things; depending on the overhead
While our plugin code always advertises WRITE_ZEROES on writable
connections (because we can emulate .zero by calling .pwrite),
it is conceivable that a filter may want to explicitly avoid
advertising particular bits. More to the point, an upcoming
patch will add a 'nozero' filter that hides
If we bump NBDKIT_API_VERSION, we have forcefully made older
nbdkit reject all plugins that opt in to the newer API:
$ nbdkit ./plugins/nbd/.libs/nbdkit-nbd-plugin.so --dump-plugin
nbdkit: ./plugins/nbd/.libs/nbdkit-nbd-plugin.so: plugin is incompatible with
this version of nbdkit (_api_version =
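The load-time check producing that error can be modeled roughly as below. This is a hypothetical Python sketch of the gating logic, not nbdkit's actual C code; the set of accepted versions is an assumption for illustration:

```python
SUPPORTED_API_VERSIONS = {1}   # assumed: what this older nbdkit binary accepts

def check_plugin_api(api_version):
    """Model of nbdkit's load-time check: a plugin compiled against a newer
    NBDKIT_API_VERSION is refused by an older nbdkit binary."""
    if api_version not in SUPPORTED_API_VERSIONS:
        raise RuntimeError(
            "plugin is incompatible with this version of nbdkit "
            "(_api_version = %d)" % api_version)

check_plugin_api(1)   # accepted: plugin built against the API this binary knows
```

The consequence described above follows directly: once the header defines a higher `NBDKIT_API_VERSION`, every freshly compiled plugin records that higher number and trips this check on older binaries.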
Sometimes, it's nice to see what a difference it makes in timing
or in destination file size when sparse handling is enabled or
disabled. Add a new filter that makes it trivial to disable
write zero support, both at the client side (the feature is not
advertised, so the client must do fallbacks)
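The client-side fallback mentioned at the end — what a client must do when write-zero support is not advertised — amounts to sending ordinary writes of zero-filled buffers. A minimal sketch (illustrative names; `pwrite` here is any function writing a buffer at an offset):

```python
def zero_fallback(pwrite, offset, count, chunk=64 * 1024):
    """Fallback when WRITE_ZEROES is not advertised: cover the range with
    ordinary writes of zero-filled buffers, chunk by chunk."""
    buf = b"\x00" * chunk
    while count > 0:
        n = min(count, chunk)
        pwrite(buf[:n], offset)
        offset += n
        count -= n

# demo against an in-memory "disk"
disk = bytearray(b"x" * 1024)

def write_at(b, off):
    disk[off:off + len(b)] = b

zero_fallback(write_at, offset=100, count=300, chunk=128)
```

This also shows why such a filter is useful for measurement: with the feature hidden, every zeroed region costs real data transfer and (on the server side) real allocation, so timing and destination file size differ visibly from the sparse-aware path.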
'nbdkit -v' is quite verbose, and traces everything. Well, actually
it only traces if you also use -f; because if we daemonize, stderr
is redirected to /dev/null. Sometimes, we want to trace JUST the
client interactions, and it would be nice to trace this even when
run in the background. In
Since the null plugin already does nothing during writes, it can be
argued that those writes have already landed on persistent storage
(we know that we will read back zeros without any further delays).
Instead of advertising that the client cannot flush or FUA, we can
enable a few more callbacks
On Thu, Mar 08, 2018 at 09:41:38AM -0600, Eric Blake wrote:
> On 03/08/2018 06:29 AM, Richard W.M. Jones wrote:
>
> >NBD (the protocol) doesn't "know" about qcow2 files. You can serve
> >any file you want as a range of bytes, including qcow2, but that
> >requires whatever is consuming those
On 03/08/2018 06:29 AM, Richard W.M. Jones wrote:
NBD (the protocol) doesn't "know" about qcow2 files. You can serve
any file you want as a range of bytes, including qcow2, but that
requires whatever is consuming those bytes to then do the qcow2
en-/decoding. (Which means effectively the