This just about rewrites the entirety of the bitmaps.rst document to
make it consistent with the 4.0 release. I have added new features seen
in the 4.0 release, as well as tried to clarify some points that keep
coming up when discussing this feature both in-house and upstream.
Yes, it's a lot
When erasing the chip, use the typical time specified in the CFI table
rather than arbitrarily selecting 5 seconds.
Since the currently unconfigurable value set in the table is 12, this
means a chip erase takes 2^12 = 4096 ms, so this isn't a big change in
behavior.
Signed-off-by: Stephen Checkoway
---
During a sector erase (but not a chip erase), the embedded erase program
can be suspended. Once suspended, the sectors not selected for erasure
may be read and programmed. Autoselect mode is allowed during erase
suspend mode. Presumably, CFI queries are similarly allowed, so this
commit allows them.
Simplify and refactor for upcoming commits. In particular, pull out all
of the code to modify the status into simple helper functions. Status
handling becomes more complex once multiple chips are interleaved to
produce a single device.
No change in functionality is intended with this commit.
Most AMD commands only examine 11 bits of the address. This masks the
addresses used in the comparison to 11 bits. The exceptions are word or
sector addresses which use offset directly rather than the shifted
offset, boff.
Signed-off-by: Stephen Checkoway
---
hw/block/pflash_cfi02.c | 8
Some flash chips support sectors of different sizes. For example, the
AMD AM29LV160DT has 31 64 kB sectors, one 32 kB sector, two 8 kB
sectors, and a 16 kB sector, in that order. The AM29LV160DB has those in
the reverse order.
The `num-blocks` and `sector-length` properties work exactly as they
It's common for multiple narrow flash chips to be hooked up in parallel
to support wider buses. For example, four 8-bit wide flash chips (x8)
may be combined in parallel to produce a 32-bit wide device. Similarly,
two 16-bit wide chips (x16) may be combined.
This commit introduces `device-width`
After two unlock cycles and a sector erase command, the AMD flash chips
start a 50 us erase timeout. Any additional sector erase command adds a
sector to be erased and restarts the 50 us timeout. During the timeout,
status bit DQ3 is cleared. After the timeout, DQ3 is asserted during
erasure.
After a flash device enters CFI mode from autoselect mode, the reset
command returns the device to autoselect mode. An additional reset
command is necessary to return to read array mode.
Signed-off-by: Stephen Checkoway
---
hw/block/pflash_cfi02.c | 21 +
Test the AMD command set for parallel flash chips. This test uses an
ARM musicpal board with a pflash drive to test the following list of
currently-supported commands.
- Autoselect
- CFI
- Sector erase
- Chip erase
- Program
- Unlock bypass
- Reset
Signed-off-by: Stephen Checkoway
---
When the flash device is performing a chip erase, all commands are
ignored. When it is performing a sector erase, only the erase suspend
command is valid, and erase suspend is currently not supported.
In particular, the reset command should not cause the device to reset to
read array mode while programming
The goal of this patch series is to implement the following AMD
command-set parallel flash functionality:
- flash interleaving;
- nonuniform sector sizes;
- erase suspend/resume commands; and
- multi-sector erase.
During refactoring and implementation, I discovered several bugs that are
fixed here as
Signed-off-by: Maxim Levitsky
---
block/nvme.c | 69 +++-
block/trace-events | 1 +
include/block/nvme.h | 19 +++-
3 files changed, 87 insertions(+), 2 deletions(-)
diff --git a/block/nvme.c b/block/nvme.c
index 0b1da54574..35b925899f
Signed-off-by: Maxim Levitsky
---
block/nvme.c | 80 ++
block/trace-events | 2 ++
2 files changed, 82 insertions(+)
diff --git a/block/nvme.c b/block/nvme.c
index 35b925899f..b83912c627 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -110,6
Currently the driver hardcodes the sector size to 512,
and doesn't check the underlying device
Also fail if the underlying nvme device is formatted with metadata,
as this needs special support.
Signed-off-by: Maxim Levitsky
---
block/nvme.c | 40 +++-
1 file
Phase bits are only set by the hardware to indicate new completions
and not by the device driver.
Signed-off-by: Maxim Levitsky
---
block/nvme.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/block/nvme.c b/block/nvme.c
index 0684bbd077..2d208000df 100644
--- a/block/nvme.c
+++
Hi!
These are few assorted fixes and features for the userspace
nvme driver.
Tested this on my laptop with my Samsung X5 Thunderbolt drive, which
happens to have 4K sectors, covering support for discard and write
zeros.
Also a bunch of fixes sitting in my queue from the period when I
developed the
Fix the math involving a non-standard doorbell stride.
Signed-off-by: Maxim Levitsky
---
block/nvme.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/nvme.c b/block/nvme.c
index 2d208000df..208242cf1f 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -216,7 +216,7 @@ static
Command line help explicitly requested by the user should be printed
to stdout, not stderr. We do so elsewhere. Adjust -drive to match: use
qemu_printf() instead of error_printf(). Plain printf() would be
wrong because we need to print to the current monitor for "drive_add
... format=help".
Cc:
error_exit() uses low-level error_printf() to report errors.
Modernize it to use error_vreport().
Cc: Kevin Wolf
Cc: Max Reitz
Cc: qemu-block@nongnu.org
Signed-off-by: Markus Armbruster
Reviewed-by: Eric Blake
---
qemu-img.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff
Callbacks ssh_co_readv(), ssh_co_writev(), ssh_co_flush() report
errors to the user with error_printf(). They shouldn't; that's their
caller's job. Replace the calls with a suitable trace point. While
there, drop the unreachable !s->sftp case.
Perhaps we should convert this part of the block driver
Disk sizes close to INT64_MAX cause overflow, for some pretty
ridiculous output:
$ ./nbdkit -U - memory size=$((2**63 - 512)) --run 'qemu-img info $nbd'
image: nbd+unix://?socket=/tmp/nbdkitHSAzNz/socket
file format: raw
virtual size: -8388607T (9223372036854775296 bytes)
disk size:
When extracting a human-readable size formatter, we changed 'uint64_t
div' pre-patch to 'unsigned long div' post-patch, which breaks on
32-bit platforms, resulting in 'inf' instead of the intended values
larger than 999GB.
Fixes: 22951aaa
CC: qemu-sta...@nongnu.org
Reported-by: Max Reitz
since v2: Fix problems pointed out by Max:
vmdk (test 59) output had not actually been tested
32-bit builds of size_to_str() have been broken since 2.10
Eric Blake (2):
cutils: Fix size_to_str() on 32-bit platforms
qemu-img: Saner printing of large file sizes
block/qapi.c
On 16.04.19 10:13, Vladimir Sementsov-Ogievskiy wrote:
> 13.04.2019 19:53, Max Reitz wrote:
>> We do not support this combination (yet), so this should yield an error
>> message.
>>
>> Signed-off-by: Max Reitz
>
> Tested-by: Vladimir Sementsov-Ogievskiy
> [only -qcow2, as -nfs
On 16.04.19 10:02, Vladimir Sementsov-Ogievskiy wrote:
> 13.04.2019 19:53, Max Reitz wrote:
>> This test converts a simple image to another, but blkdebug injects
>> block_status and read faults at some offsets. The resulting image
>> should be the same as the input image, except that sectors that
On 16.04.19 09:18, Vladimir Sementsov-Ogievskiy wrote:
> 13.04.2019 19:53, Max Reitz wrote:
>> This new error option allows users of blkdebug to inject errors only on
>> certain kinds of I/O operations. Users usually want to make a very
>> specific operation fail, not just any; but right now they
On 16.04.19 12:02, Vladimir Sementsov-Ogievskiy wrote:
> 10.04.2019 23:20, Max Reitz wrote:
>> What bs->file and bs->backing mean depends on the node. For filter
>> nodes, both signify a node that will eventually receive all R/W
>> accesses. For format nodes, bs->file contains metadata and data,
Even for block nodes with bs->drv == NULL, we can't just ignore a
bdrv_set_aio_context() call. Leaving the node in its old context can
mean that it's still in an iothread context in bdrv_close_all() during
shutdown, resulting in an attempted unlock of the AioContext lock which
we don't hold.
This
182 fails if qemu has no support for hotplugging of a virtio-blk device.
Using an NBD server instead works just as well for the test, even on
qemus without hotplugging support.
Fixes: 6d0a4a0fb5c8f10c8eb68b52cfda0082b00ae963
Reported-by: Danilo C. L. de Paula
Signed-off-by: Max Reitz
---
To
On 4/17/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> This fixes at least one overflow in qcow2_process_discards.
It's worth calling out how long the problem of passing >2G discard
requests has been present (my reply to the cover letter tracked down
0b919fae as tracking a 64-bit discard
Linux places a limit of UIO_MAXIOV pages on SG_IO ioctls (and if the limit
is exceeded, a confusing ENOMEM error is returned[1]). Prevent the guest
from exceeding these limits, by capping the maximum transfer length to
that value in the block limits VPD page.
[1] Oh well, at least it was easier
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
tests/qemu-iotests/249 | 69 ++
tests/qemu-iotests/249.out | 30 +
tests/qemu-iotests/group | 1 +
3 files changed, 100 insertions(+)
create mode 100755 tests/qemu-iotests/249
create mode
This fixes at least one overflow in qcow2_process_discards.
Signed-off-by: Vladimir Sementsov-Ogievskiy
---
include/block/block.h | 4 ++--
block/io.c | 19 ++-
2 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/include/block/block.h
Hi all. We faced an interesting bug, which may be simply reproduced:
prepare image:
./qemu-img create -f qcow2 -o cluster_size=1M /ssd/test 2300M
./qemu-io -c 'write 100M 2000M' -c 'write 2100M 200M' -c 'write 0 100M'
/ssd/test
shrink:
./qemu-img resize --shrink /ssd/test 50M
bug:
./qemu-img
On Mon, Apr 15, 2019 at 10:04:52AM +0200, Kevin Wolf wrote:
>
> I think a potential actual use case could be persistent dirty bitmaps
> for incremental backup. Though maybe that would be better served by
> using the rbd image just as a raw external data file and keeping the
> qcow2 metadata on a