On Fri, Aug 14, 2015 at 08:14:46PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 10.06.2015 17:30, Stefan Hajnoczi wrote:
> >On Mon, Jun 08, 2015 at 06:21:20PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> >>+    ret = bdrv_pread(bs->file, bm->l1_table_offset, l1_table,
> >>+                     bm->l1_size * sizeof(uint64_t));
> >>+    if (ret < 0) {
> >>+        goto fail;
> >>+    }
> >>+
> >>+    buf = g_malloc0(bm->l1_size * s->cluster_size);
> >What is the maximum l1_size value?  cluster_size and l1_size are 32-bit
> >so with 64 KB cluster_size this overflows if l1_size > 65535.  Do you
> >want to cast to size_t?
> 
> Hmm. What is the maximum amount of RAM we want to spend on a dirty
> bitmap? I think 4 GB is too much.  So the limit should be not on
> l1_size itself but on the number of bytes needed to store the bitmap.
> What is the maximum disk size we are dealing with?
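
To illustrate the earlier point about the multiplication, here is a
minimal sketch of the widening (bm->l1_size and s->cluster_size are the
fields from the quoted hunk; only the cast is the point, this is not a
tested change):

    /* widen one operand so the product is computed in 64 bits instead
     * of wrapping in a 32-bit multiplication */
    uint64_t bitmap_bytes = (uint64_t)bm->l1_size * s->cluster_size;

The result then still needs a sanity limit before it is passed to
g_malloc0(), see the sketch further down.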

Modern file systems support file sizes up to the exabyte (XFS) or
zettabyte (ZFS) range.  If the disk image is that large, then the
cluster size will probably also be set larger than 64 KB (e.g. 1 MB).

Anyway, with a 64 KB cluster size & bitmap granularity, a 128 MB dirty
bitmap (2^30 bits, each covering one 64 KB cluster) covers a 64 TB disk
image.  So how about a 256 MB or 512 MB maximum dirty bitmap size?
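
For illustration, the check could look roughly like this (the constant
name and the exact value are placeholders, not something the patch
defines):

    /* refuse bitmaps that would need an unreasonable amount of memory;
     * 512 MB of bitmap covers a 256 TB image at 64 KB granularity */
    #define DIRTY_BITMAP_MAX_BYTES (512ULL * 1024 * 1024)  /* placeholder */

    uint64_t bitmap_bytes = (uint64_t)bm->l1_size * s->cluster_size;
    if (bitmap_bytes > DIRTY_BITMAP_MAX_BYTES) {
        ret = -EINVAL;
        goto fail;
    }

    buf = g_malloc0(bitmap_bytes);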

Stefan
