True, but that would happen only in case the host crashes. Even for
a QEMU crash the changes would be safe, I think. They would be
written back when the persistent dirty bitmap's mmap() area is
unmapped, during process exit.
I'd err on the side of caution, mark the persistent dirty
Hmm, right. But do we need the bitmap at all? We can just use
bdrv_is_allocated like bdrv_co_do_readv does.
If a write occurs, we read and back up that cluster immediately (out of order). So I am quite sure we need the bitmap.
This is the same as how copy-on-read happens during image
Copy-on-read modifies the topmost image even without changing the content,
just like copy-before-write modifies the backup image.
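A minimal sketch of the copy-before-write mechanism being discussed, assuming a hypothetical BackupJob with per-cluster bitmap helpers (none of these names come from the actual patches):

/* Copy-before-write: when the guest writes a cluster, save the old
 * contents to the backup target first. The bitmap guarantees each
 * cluster is backed up exactly once, even when writes arrive out of
 * order relative to the sequential backup scan. BackupJob,
 * bitmap_test/bitmap_set and backup_copy_cluster are hypothetical. */
static int coroutine_fn backup_before_write(BackupJob *job,
                                            int64_t cluster_num)
{
    if (bitmap_test(job->bitmap, cluster_num)) {
        return 0;   /* old contents already saved */
    }
    int ret = backup_copy_cluster(job, cluster_num);  /* read + archive */
    if (ret == 0) {
        bitmap_set(job->bitmap, cluster_num);
    }
    return ret;
}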
Ah, got it. But this only works if you use a BS as target. This will not work with my old backup driver patches (I am still not convinced about using a BS as backup
Hmm, right. But do we need the bitmap at all? We can just use
bdrv_is_allocated like bdrv_co_do_readv does.
Does that work with an NBD driver? Or does that add another RPC call (slowing things down)?
That way you can also implement async replication to a remote site (like MS
do).
Sounds like rsync.
Yes, but we need 'snapshots' and something more optimized (rsync compares whole files).
I think this can be implemented using the backup job with a specialized backup
driver.
That sounds like more work than a persistent dirty bitmap. The advantage is that while dirty bitmaps are consumed by a single user, the Merkle tree can be used to sync up any number of replicas.
I also consider it safer, because you make sure the data exists (using hash
keys like SHA1).
I am unsure how you can check whether a dirty bitmap contains errors, or is out of date.
Also, you can compare arbitrary Merkle trees, whereas a dirty bitmap is always related to a single
Maybe the best approach is to maintain a dirty bitmap while the guest is running, which is fairly cheap. Then you can use the dirty bitmap to only hash modified clusters when building the Merkle tree - this avoids reading the entire disk image.
Yes, this is a good optimization.
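A rough sketch of that optimization, assuming GLib's GChecksum for SHA-1 and an illustrative flat array of leaf hashes (not code from the series):

#include <glib.h>
#include <stdint.h>

/* Re-hash only clusters marked dirty; clean leaves keep their stored
 * hash, so rebuilding the Merkle tree avoids a full image scan. */
static void update_leaf_hash(guint8 *leaf_hashes, uint64_t cluster_num,
                             const guint8 *data, gsize len)
{
    gsize digest_len = g_checksum_type_get_length(G_CHECKSUM_SHA1);
    GChecksum *cs = g_checksum_new(G_CHECKSUM_SHA1);

    g_checksum_update(cs, data, len);
    g_checksum_get_digest(cs, leaf_hashes + cluster_num * digest_len,
                          &digest_len);
    g_checksum_free(cs);
    /* interior nodes above this leaf must be recomputed afterwards */
}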
Is there a git tag for 1.4.2?
Quite interesting. But would it be possible to use corosync for the cluster communication? The point is that we need corosync anyway for pacemaker; it is written in C (high performance) and seems to implement the features you need.
We use JGroups (Java library) for reliable multicast communication in
our cluster manager daemon.
I doubt that there is something like 'reliable multicast' - you will run into
many problems when you try to handle errors.
We don't worry about the performance much since the cluster manager
Another suggestion: use LVM instead of btrfs (to get better performance)
Anyways, I do not know JGroups - maybe that 'reliable multicast' solves all network problems somehow. Is there any documentation about how they do it?
OK, found the papers on their web site - quite interesting too.
Also, on _loaded_ systems, I noticed creating/removing logical volumes can take really long (several minutes), whereas allocating a file of a given size would just take a fraction of that.
Allocating a file takes much longer, unless you use a 'sparse' file.
- Dietmar
Do you support multiple guests accessing the same image?
A VM image can be attached to any VM, but only to one VM at a time; multiple running VMs cannot access the same VM image.
I guess this is a problem when you want to do live migrations?
- Dietmar
The formula to compute slice_quota was wrong.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
include/qemu/ratelimit.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/include/qemu/ratelimit.h b/include/qemu/ratelimit.h
index c6ac281..d1610f1 100644
--- a/include
The code to compute slice_quota seems buggy. The following fixes the issue:
--- new.orig/include/qemu/ratelimit.h 2012-10-22 07:06:31.0 +0200
+++ new/include/qemu/ratelimit.h2012-10-22 07:06:49.0 +0200
@@ -42,7 +42,7 @@
uint64_t
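Since the patch body is cut off above, here is a sketch of the intended semantics only - field names are assumed and this is not necessarily the exact upstream fix: the quota for one slice should be the configured speed (bytes/second) scaled by the slice length.

#include <stdint.h>

typedef struct {
    int64_t  next_slice_time;
    uint64_t slice_quota;   /* bytes allowed per slice */
    uint64_t slice_ns;      /* slice length in nanoseconds */
    uint64_t dispatched;
} RateLimit;

static inline void ratelimit_set_speed(RateLimit *limit, uint64_t speed,
                                       uint64_t slice_ns)
{
    limit->slice_ns = slice_ns;
    /* bytes per slice = (bytes per second) * (slice_ns / 1e9 seconds) */
    limit->slice_quota = ((double)speed * slice_ns) / 1000000000ULL;
}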
Is there already a plan to support differential or incremental backups in this Live backup feature?
No, it has been mentioned but no design or patches have been proposed.
It seems like a persistent dirty bitmap would need to be implemented.
Today QEMU has code for dirty bitmaps
Network performance with vhost=on is extremely bad if a guest uses multiple
cores:
HOST kernel: RHEL 2.6.32-279.11.1.el6
KVM 1.2.0
GuestOS: Debian Squeeze (amd64 or 686), CentOS 6.2
Test with something like (install Debian Squeeze first):
./x86_64-softmmu/qemu-system-x86_64 -smp
On Mon, Nov 5, 2012 at 12:51 PM, Dietmar Maurer
diet...@proxmox.com wrote:
Network performance with vhost=on is extremely bad if a guest uses
multiple cores:
HOST kernel: RHEL 2.6.32-279.11.1.el6
KVM 1.2.0
GuestOS: Debian Squeeze (amd64 or 686), CentOS 6.2
Test with something like
./x86_64-softmmu/qemu-system-x86_64 -smp sockets=1,cores=2 -m 512 -
hda
debian-squeeze-netinst.raw -netdev
type=tap,id=net0,ifname=tap111i0,vhost=on -device
virtio-net-pci,netdev=net0,id=net0
Downloading a larger file with wget inside the guest will show the problem.
Speed drops from
Is the network path you are downloading across reasonably idle so that you
get reproducible results between runs?
Also tested with netperf (to local host) now.
Results in short (performance in Mbit/sec):
vhost=off,cores=1: 3982
vhost=off,cores=2: 3930
vhost=off,cores=4: 3912
got incredibly bad performance - can't you reproduce the problem?
I have seen a similar problem, but it seems to occur only with extremely old guest kernels. With Ubuntu 12.04 it seems not to be there. Can you try a recent guest kernel?
I know that it works with newer kernels. But we need to
f417e63a684f3b92f5ff35d256962a2490890f00 M hw
This obviously breaks vhost when using multiple cores.
This obviously breaks vhost when using multiple cores.
With 'obviously' you mean you already have a clue why?
I'll try to reproduce.
No, sorry - just meant the performance regression is obvious (factor 20 to 40).
On 2012-11-06 10:46, Dietmar Maurer wrote:
This obviously breaks vhost when using multiple cores.
With 'obviously' you mean you already have a clue why?
I'll try to reproduce.
No, sorry - just meant the performance regression is obvious (factor 20 to 40).
OK. Did you try
Meanwhile I quickly tried to reproduce but didn't succeed so far (10GBit
between host and guest with vhost=on and 2 guest cores).
However, I finally realized that we are talking about a pretty special host
kernel which I don't have around. I guess this is better dealt with by Red Hat
folks.
Dietmar, how is the speed if you specify --machine pc,kernel_irqchip=off as
cmdline option to qemu-kvm-1.2.0?
I get full speed if I use that flag.
The bug seems to be only relevant when vhost-net is used. Dietmar, do you see implications with normal virtio?
no, only with vhost=on
Tried with upstream qemu on the rhel kernel and that's even a bit faster. So it's the ubuntu kernel. Vanilla 2.6.32 didn't have vhost at all, so maybe their vhost backport is broken in some way.
You can also reproduce the problem with RHEL6.2 as guest. But it seems RHEL 6.3 fixed it.
There seem to be a
You can also reproduce the problem with RHEL6.2 as guest. But it seems RHEL 6.3 fixed it.
RHEL6.2 on ubuntu host?
I only tested with RHEL6.3 kernel on host.
I only tested with RHEL6.3 kernel on host.
Can you check if there is a difference in interrupt delivery between those two? cat /proc/interrupts should be sufficient after some traffic has flowed.
While trying to reproduce the bug, we just detected that it depends on the
hardware
related code out of qemu.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
Makefile.objs|2 +-
blockdev.c | 263 ++
hmp-commands.hx | 31 +++
hmp.c| 63 +
hmp.h|3 +
monitor.c
more details.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
docs/backup-rfc.txt | 119 +++
1 files changed, 119 insertions(+), 0 deletions(-)
create mode 100644 docs/backup-rfc.txt
diff --git a/docs/backup-rfc.txt b/docs/backup-rfc.txt
new
size is hardcoded to 65536 bytes.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
Makefile.objs |1 +
backup.c | 296 +
block.c | 60 +++-
block.h |6 +
block_int.h | 28 ++
5 files changed, 388
Also added the ratelimit.h fix here, because it is still not upstream.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
include/qemu/ratelimit.h |2 +-
tests/Makefile | 11 +-
tests/backup-test.c | 505 ++
3 files changed
+Note: It turned out that taking a qcow2 snapshot can take a very long
+time on larger files.
Hm, really? What are larger files? It has always been relatively quick when I tested it, though internal snapshots are not my focus, so that need not mean much.
300GB or larger
If this is
Not saying that this is necessarily the best option, but I think reusing existing formats and implementations is always a good thing, so it's an idea to consider.
Yes, I would really like to reuse something. Our current backup software uses 'tar' files, but that is really inefficient. We
AFAIK a qcow2 file cannot store data out of order. In general, a backup fd is not seekable, and we only want to do sequential writes. Image formats always require seekable fds?
Ah, this is what you mean by out of order. Just out of curiosity, what are these non-seekable backup fds usually?
/dev/nst0 ;-)
Sure. :-)
But there are better examples. Usually you want to use some kind of
compression, and you do that with existing tools:
# backup to
+* 4 bytes cluster number
Is that sufficient, or is it possible to have an image larger than 64k*4G that
would overflow?
Well, that is 256TB per image. This is sufficient for us.
+* 1 byte not used (reserved)
+
+We only store non-zero blocks (each such block is 4096 bytes).
+
+Each
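An illustrative reading of the quoted fragment as a C struct - the field layout and the block mask are assumptions, not the actual VMA on-disk format:

#include <stdint.h>

#define VMA_CLUSTER_SIZE 65536
#define VMA_BLOCK_SIZE    4096   /* 16 blocks per cluster */

typedef struct VmaClusterInfo {
    uint32_t cluster_num;   /* 2^32 clusters x 64KiB = 256TB per image */
    uint16_t block_mask;    /* assumed: one bit per 4KiB block, set if
                               the block is non-zero and thus stored */
    uint8_t  reserved;      /* 1 byte not used (reserved) */
} VmaClusterInfo;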
+{ 'type': 'BackupStatus',
+ 'data': {'*status': 'str', '*errmsg': 'str', '*total': 'int',
+ '*transferred': 'int', '*zero-bytes': 'int',
+ '*start-time': 'int', '*end-time': 'int',
+ '*backupfile': 'str', '*uuid': 'str' } }
+
Missing documentation for
+{ 'command': 'backup_cancel' }
+
You are basically adding a new asynchronous job. Do you really need to add
a 'backup-cancel' command, or should we be reusing a generic framework
for canceling arbitrary jobs?
No, we basically add several asynchronous jobs - one for each blockdev. And we
Did you look at the VMDK Stream-Optimized Compressed subformat?
http://www.vmware.com/support/developer/vddk/vmdk_50_technote.pdf?
src=vmdk
It is a stream of compressed grains (data). They are out-of-order and each
grain comes with the virtual disk lba where the data should be visible to
Is there a 1:1 relationship between BackupInfo and BackupBlockJob? Then it would be nicer to move BackupInfo fields into BackupBlockJob (which is empty right now
No, a backup creates several block jobs - one for each blockdev you want to back up. Those jobs run in parallel.
Did you look at the VMDK Stream-Optimized Compressed subformat?
http://www.vmware.com/support/developer/vddk/vmdk_50_technote.pdf?
src=vmdk
Max file size 2TB?
It is a stream of compressed grains (data). They are out-of-order and each
grain comes with the virtual disk lba where the data should be visible to the
guest.
The stream also contains grain tables and grain directories. This
metadata makes random read access to the file possible once you
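From the description above, a compressed grain in the stream is essentially a disk LBA plus a compressed payload. A simplified sketch (derived from the quoted summary, not a complete definition of the format):

#include <stdint.h>

/* Simplified sketch of a stream-optimized grain marker: each grain
 * records where its data lands in the virtual disk, which is what
 * makes out-of-order writing possible. */
typedef struct __attribute__((packed)) GrainMarker {
    uint64_t lba;    /* sector offset in the virtual disk */
    uint32_t size;   /* byte length of the compressed data */
    /* followed by 'size' bytes of deflate-compressed grain data */
} GrainMarker;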
Did you look at the VMDK Stream-Optimized Compressed subformat?
http://www.vmware.com/support/developer/vddk/vmdk_50_technote.pdf?
src=vmdk
And is that covered by any patents?
I use the following qmp commands (file cmds.txt) for testing:
- cmds.txt start
{"execute":"qmp_capabilities", "arguments":{}}
{"execute":"migrate", "arguments":{ "uri": "exec:cat >/dev/null" }}
{"execute":"query-migrate", "arguments":{}}
- cmds.txt end
and the following command line to
The interesting thing is that it works perfectly without --enable-kvm.
Can you please try the following patch?
https://lists.gnu.org/archive/html/qemu-devel/2012-11/msg00174.html
I am unable to apply that patch:
patching file vl.c
Hunk #1 FAILED at 3551.
Hunk #2 succeeded at 3729 with
The interesting thing is that it works perfectly without --enable-kvm.
Can you please try the following patch?
https://lists.gnu.org/archive/html/qemu-devel/2012-11/msg00174.html
I don't think that's related. The only problem I see here is that qmp_capabilities can only be executed
It's a naughty thing to do but we could simply pick a new constant and
support LZO as an incompatible option. The file is then no longer compatible
with existing vmdk tools but at least we then have a choice of using
compatible deflate or the LZO extension.
To be 100% incompatible to
The document says: VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents.
I simply do not have the time to check all those things, which makes that format unusable for me.
I think Proxmox ships the QEMU vmdk functionality today? In that case
Ok, in any case Jan's patch was for a QEMU that didn't start at all with --enable-kvm -monitor stdio, i.e. really unrelated.
In any case, I cannot reproduce it here.
Any idea what can cause such behaviour?
Did you look at the VMDK Stream-Optimized Compressed subformat?
We've gone down several sub-threads discussing whether VMDK is suitable.
I want to summarize why this is a good approach:
The VMDK format already allows for out-of-order data and is supported by
existing tools - this is very
The VMDK format has strong disadvantages:
- unclear License (the spec links to patents)
- they use a very slow compression algorithm (deflate), which makes it
unusable for backup
Seems they do not support multiple configuration files. You can only store a single text block, and that needs to
QEMU's implementation has partial support for Stream-Optimized
Compressed images. If you complete the code for this subformat, not only
does this benefit the VM Backup feature, but it also makes qemu-img convert
more powerful for everyone. I hope we can kill two birds with one stone
The doc
The VMDK format already allows for out-of-order data and is supported by
existing tools - this is very important for backups where people are
(rightfully) paranoid about putting their backups in an obscure format. They
want to be able to access their data years later, whether your tool is
But keep in mind that any other company out there could have a patent on
out-of-order data in an image file or other aspects of what you're proposing.
Sorry, but the vmware docs explicitly include a pointer to those patents. So this is something completely different to me.
The VMDK format has strong disadvantages:
- unclear License (the spec links to patents)
I've already pointed out that you're taking an inconsistent position on this
point. It's FUD.
- they use a very slow compression algorithm (deflate), which makes it
unusable for backup
I've
To make progress here I'll review the RFC patches. VMDK or not isn't the
main thing, a backup feature like this looks interesting.
Yes, a 'review' would be great - thanks.
- Dietmar
We have more timers than the one based on vm_clock. Does this help?
Yes, that patch fixes the problem for me.
diff --git a/qemu-char.c b/qemu-char.c
index 88f4025..242b799 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -134,9 +134,9 @@ static void qemu_chr_fire_open_event(void *opaque)
qcow2 snapshot on newly created files are fast:
# qemu-img create -f qcow2 test.img 200G
# time qemu-img snapshot -c snap1 test.img
real 0m0.014s
but if metadata is allocated it gets very slow:
# qemu-img create -f qcow2 -o preallocation=metadata test.img 200G
# time qemu-img snapshot -c
In short, the idea is that you can stick filters on top of a BlockDriverState, so that any read/writes (and possibly more requests, if necessary) are routed through the filter before they are passed to the block driver of this BDS. Filters would be implemented as BlockDrivers, i.e. you could
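For illustration, a filter along those lines might look like this - backup_do_cow() is a hypothetical helper, and this is a sketch of the concept against the block layer of that era, not code from anyone's tree:

/* A filter BlockDriver: intercept guest writes, do the
 * copy-before-write work, then forward to the underlying image. */
static int coroutine_fn backup_filter_co_writev(BlockDriverState *bs,
                                                int64_t sector_num,
                                                int nb_sectors,
                                                QEMUIOVector *qiov)
{
    backup_do_cow(bs->file, sector_num, nb_sectors);   /* hypothetical */
    return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
}

static BlockDriver bdrv_backup_filter = {
    .format_name    = "backup-filter",
    .bdrv_co_writev = backup_filter_co_writev,
};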
Yup, it's already not too bad. I haven't looked into it in much
detail, but I'd like to reduce it even a bit more. In particular, the
backup_info field in the BlockDriverState feels wrong to me. In the
long term the generic block layer shouldn't know at all what a backup
is, and baking
Is there a 1:1 relationship between BackupInfo and BackupBlockJob? Then it would be nicer to move BackupInfo fields into BackupBlockJob (which is empty right now). Then you also don't need to add fields to BlockDriverState because you know that if your blockjob is running you can access it
My plan was to have something like bs->job->job_type->{before,after}_write.
int coroutine_fn (*before_write)(BlockDriverState *bs,
    int64_t sector_num, int nb_sectors, QEMUIOVector *qiov,
    void **cookie);
int coroutine_fn (*after_write)(BlockDriverState *bs,
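A sketch of how the generic write path might invoke such hooks; the cookie plumbing and the after_write signature are guesses, since the quote above is truncated:

static int coroutine_fn bdrv_co_do_writev(BlockDriverState *bs,
    int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
    void *cookie = NULL;
    int ret;

    /* give the job a chance to back up the old data first */
    if (bs->job && bs->job->job_type->before_write) {
        ret = bs->job->job_type->before_write(bs, sector_num, nb_sectors,
                                              qiov, &cookie);
        if (ret < 0) {
            return ret;
        }
    }

    ret = bs->drv->bdrv_co_writev(bs, sector_num, nb_sectors, qiov);

    /* let the job release whatever state it stashed in the cookie */
    if (bs->job && bs->job->job_type->after_write) {
        bs->job->job_type->after_write(bs, sector_num, nb_sectors, qiov,
                                       &cookie);
    }
    return ret;
}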
BTW, will such filters work with the new virtio-blk-data-plane?
No, virtio-blk-data-plane is a hack and will be slowly rewritten to support all fancy features.
Ah, good to know ;-) thanks.
Filters would be implemented as BlockDrivers, i.e. you could implement .bdrv_co_write() in a filter to intercept all writes to an image.
I am quite unsure if that makes things easier.
At least it would make for a much cleaner design compared to putting code for every feature you can
Actually this was plan B, as a poor man's implementation of the filter infrastructure. Plan A was that the block filters would materialize suddenly in someone's git tree.
OK, so let us summarize the options:
a.) wait until it materializes suddenly in someone's git tree.
b.) add BlockFilter
Simple regression tests using vma-reader and vma-writer.
Note: the call to g_thread_init() solves problems with g_slice_alloc() -
without that call we get arbitrary crashes.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
tests/Makefile | 11 +-
tests/backup-test.c | 517
: remove cancel_cb
* use enum for BackupFormat
* vma: use bdrv_open instead of bdrv_file_open
* vma: fix aio, use O_DIRECT
* backup one drive after another (try to avoid high load)
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
docs/backup-rfc.txt | 119
.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
backup.h | 12 ++
blockdev.c | 423 ++
hmp-commands.hx | 31
hmp.c| 63
hmp.h|3 +
monitor.c|7 +
qapi-schema.json
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
blockdev.c | 196 +-
hmp.c|3 +-
qapi-schema.json |6 +-
3 files changed, 200 insertions(+), 5 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index 50e150d
to serialize access.
Currently backup cluster size is hardcoded to 65536 bytes.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
Makefile.objs|1 +
backup.c | 338 ++
backup.h | 32 +
block.c
* Backup to a single archive file
* Backup contains all data needed to restore the VM (full backup)
* Do not depend on storage type or image format
* Avoid use of temporary storage
* Store sparse images efficiently
It is customary to send a 0/6 cover letter for details like this, rather than in the actual git history.
OK, I will send a cover letter next time.
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
docs/backup-rfc.txt | 119
+++
1 files changed, 119 insertions(+), 0 deletions(-) create mode
100644 docs/backup
+++ b/qapi-schema.json
@@ -425,6 +425,39 @@
{ 'type': 'EventInfo', 'data': {'name': 'str'} }
##
+# @BackupStatus:
+#
+# Detailed backup status.
+#
+# @status: #optional string describing the current backup status.
+# This can be 'active', 'done', 'error'. If this
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
blockdev.c | 196 +-
hmp.c|3 +-
qapi-schema.json |5 +-
3 files changed, 200 insertions(+), 4 deletions(-)
diff --git a/blockdev.c b/blockdev.c
index 1cfc780
Signed-off-by: Dietmar Maurer diet...@proxmox.com
---
docs/backup.txt | 116 +++
1 files changed, 116 insertions(+), 0 deletions(-)
create mode 100644 docs/backup.txt
diff --git a/docs/backup.txt b/docs/backup.txt
new file mode 100644
index
* removed comment about slow qcow2 snapshot bug
* fix spelling errors
* fixed copyright
* change 'backupfile' parameter name to 'backup-file'
* change 'config-filename' parameter name to 'config-file'
* add documentation for 'devlist' parameter.
* rename backup_cancel to backup-cancel
Dietmar Maurer
Interesting series, the backup block job makes sense to me. Regarding
efficiency, I think incremental backup is a must, otherwise regular backups
using this approach won't really be a win over backing files.
Incremental backup is something different, not touched by this code.
You can add that
Thanks for writing this up. I don't think docs/backup.txt should be committed as-is though, because it refers to you proposing this patch series. Once merged, some of this document will no longer be relevant.
Why will it no longer be relevant? It explains the basic idea.
You could include
+bit = cluster_num % BITS_PER_LONG;
+val = job->bitmap[idx];
+if (dirty) {
+    if (!(val & (1UL << bit))) {
+        val |= 1UL << bit;
+    }
+} else {
+    if (val & (1UL << bit)) {
+        val &= ~(1UL << bit);
+    }
+}
The set and
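For what it's worth, the inner checks are redundant: OR-ing in a bit that is already set (or clearing one that is already clear) is a no-op, so the update can be written unconditionally:

if (dirty) {
    val |= 1UL << bit;
} else {
    val &= ~(1UL << bit);
}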
Another option would be to simply dump devid,cluster_num,cluster_data to the output fh (pipe), and an external binary saves the data. That way we could move the whole archive format related code out of qemu.
That sounds like the NBD option - write the backup to an NBD disk image. The
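A minimal sketch of the 'dump to a pipe' idea - the record layout and names are invented for illustration:

#include <stdint.h>
#include <unistd.h>

/* one tiny header per cluster, followed by the raw cluster data */
typedef struct __attribute__((packed)) DumpRecord {
    uint8_t  dev_id;        /* which drive the cluster belongs to */
    uint32_t cluster_num;   /* cluster index within that drive */
} DumpRecord;

static int dump_cluster(int fd, uint8_t dev_id, uint32_t cluster_num,
                        const void *data, size_t cluster_size)
{
    DumpRecord rec = { .dev_id = dev_id, .cluster_num = cluster_num };

    /* sequential writes only - works on a pipe, no seeking needed */
    if (write(fd, &rec, sizeof(rec)) != sizeof(rec)) {
        return -1;
    }
    if (write(fd, data, cluster_size) != (ssize_t)cluster_size) {
        return -1;
    }
    return 0;
}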
Note: the call to g_thread_init() solves problems with g_slice_alloc() - without that call we get arbitrary crashes.
This should be a comment in the code. GLib needs to be running in
multithreaded mode in order for the GSlice allocator to be thread-safe.
This is why you see crashes
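Something like the following is presumably what is meant (g_thread_init() is required before GSlice is used from multiple threads in old GLib; it became a no-op in GLib >= 2.32):

#include <glib.h>

static void backup_test_init(void)
{
    /* GSlice is only thread-safe once GLib runs in multithreaded
     * mode; without this, concurrent g_slice_alloc() calls can
     * corrupt the allocator and crash at arbitrary points. */
    if (!g_thread_supported()) {
        g_thread_init(NULL);
    }
}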
This should call bdrv_is_allocated_above like the other block jobs do.
It would be needed later anyway to backup only the topmost image.
I do not need that information now, so why do you want me to add dead code?
Interesting series, the backup block job makes sense to me. Regarding
efficiency, I think incremental backup is a must,
One can easily implement incremental backup on top of these patches. That is why I introduced the BackupDriver abstraction.
otherwise regular backups using
this approach
+This format contains a header which includes the VM configuration as
+binary blobs, and a list of devices (dev_id, name).
Is there a magic number, for quickly identifying whether a file is likely to be a vma?
Yes ('VMA')
What endianness are multi-byte numbers interpreted with?
This should call bdrv_is_allocated_above like the other block jobs do.
It would be needed later anyway to backup only the topmost image.
I do not need that information now, so why do you want me to add dead code?
I think you do. You're wasting time reading unallocated clusters and
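A sketch of the suggested optimization - skip clusters the image never allocated instead of reading zeroes. bdrv_is_allocated() matches the block layer of that era; the loop and backup_cluster() are hypothetical:

for (sector = 0; sector < total_sectors; sector += sectors_per_cluster) {
    int n;

    /* unallocated ranges read back as zeroes - nothing to back up */
    if (!bdrv_is_allocated(bs, sector, sectors_per_cluster, &n)) {
        continue;
    }
    backup_cluster(bs, sector);   /* hypothetical helper */
}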