Re: [dm-devel] dm-thin: Why is DATA_DEV_BLOCK_SIZE_MIN_SECTORS set to 64k?

2018-06-12 Thread Joe Thornber
On Sat, Jun 09, 2018 at 07:31:54PM +, Eric Wheeler wrote:
> I understand the choice.  What I am asking is this: would it be safe to 
> let others make their own choice about block size provided they are warned 
> about the metadata-chunk-size/pool-size limit tradeoff?
> 
> If it is safe, can we relax the restriction?  For example, 16k chunks 
> still enables ~4TB pools, but with 1/4th of the CoW IO overhead on heavily 
> snapshotted environments.

Yes, it would be safe.  There are downsides, though: all IO gets split
on block-size boundaries, so dropping to 16k or smaller could
seriously increase CPU usage.  Smaller blocks also mean more
mappings, more metadata, and more kernel memory consumption.
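
To put rough numbers on that scaling (back-of-envelope only: the 64
bytes per mapping below is an assumed round figure for illustration,
not the exact on-disk btree cost):

    /* Rough scaling of bio splits and mapping counts with block size. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long provisioned = 1ULL << 40; /* 1 TiB mapped */
        const unsigned long long io = 1ULL << 20;          /* one 1 MiB write */
        const unsigned bytes_per_mapping = 64;             /* assumed figure */
        unsigned kib[] = { 64, 16 };

        for (int i = 0; i < 2; i++) {
            unsigned long long bs = (unsigned long long)kib[i] << 10;
            printf("%2uk blocks: a 1 MiB bio splits into %llu pieces; "
                   "%llu mappings (~%llu MiB) per TiB provisioned\n",
                   kib[i], io / bs, provisioned / bs,
                   provisioned / bs * bytes_per_mapping >> 20);
        }
        return 0;
    }

Dropping from 64k to 16k quadruples both the number of splits per bio
and the number of mappings the kernel has to track.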

- Joe

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] dm-thin: Why is DATA_DEV_BLOCK_SIZE_MIN_SECTORS set to 64k?

2018-06-09 Thread Zdenek Kabelac

On 9.6.2018 at 21:31, Eric Wheeler wrote:

> On Fri, 18 May 2018, Zdenek Kabelac wrote:
> 
> > On 18.5.2018 at 01:36, Eric Wheeler wrote:
> > 
> > > Hello all,
> > > 
> > > Is there a technical reason that DATA_DEV_BLOCK_SIZE_MIN_SECTORS is
> > > limited to 64k?
> > > 
> > > I realize that the metadata limits the maximum mappable pool size, so it
> > > needs to be bigger for big pools---but it is also the minimum COW size.
> > > 
> > > Looking at the code this is enforced in pool_ctr() but isn't used anywhere
> > > else in the code.  Is it strictly necessary to enforce this minimum?
> > 
> > Hi
> > 
> > 64k was chosen as a compromise between metadata space usage, locking
> > contention, kernel memory usage, and overall performance.
> 
> I understand the choice.  What I am asking is this: would it be safe to
> let others make their own choice about block size provided they are warned
> about the metadata-chunk-size/pool-size limit tradeoff?
> 
> If it is safe, can we relax the restriction?  For example, 16k chunks
> still enables ~4TB pools, but with 1/4th of the CoW IO overhead on heavily
> snapshotted environments.


Hi

I can't speak for the actual DM target developers, but in the real world,
when a user starts to update a block, the surrounding blocks are in most
cases modified as well.

So we would probably first need to see a real-world scenario that
demonstrates a major, measurable gain from smaller chunks.  Of course we
can construct a synthetic workload that writes every n-th sector, but it
would be more useful to see a real case showing a genuine need for smaller
chunks, since memory and locking resource usage would certainly scale up a
lot.  And there are users for whom the 'performance' loss of 64k chunks is
still too big, and who need to use even bigger chunks despite having
snapshots.
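
As a concrete sketch of such a synthetic worst case (the device path,
stride, and write size below are placeholders, not anything taken from
this thread): one small write into each pool block of a snapshotted thin
volume makes every write pay a full block copy.

    /* Hypothetical worst-case generator: one 4 KiB write per 64 KiB pool
     * block of a snapshotted thin volume, so each write triggers a full
     * 64 KiB copy-on-write (16x write amplification). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        const off_t stride = 64 * 1024;  /* assumed pool block size */
        const size_t io = 4096;          /* one small write per block */
        char *buf;
        int fd = open("/dev/vg/thinvol", O_WRONLY | O_DIRECT);

        if (fd < 0 || posix_memalign((void **)&buf, 4096, io))
            return 1;
        memset(buf, 0xab, io);

        /* Touch the first 4 KiB of each block across the first 1 GiB. */
        for (off_t off = 0; off < (off_t)1 << 30; off += stride)
            if (pwrite(fd, buf, io, off) != (ssize_t)io)
                break;

        close(fd);
        return 0;
    }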


Regards

Zdenek

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] dm-thin: Why is DATA_DEV_BLOCK_SIZE_MIN_SECTORS set to 64k?

2018-06-09 Thread Eric Wheeler
On Fri, 18 May 2018, Zdenek Kabelac wrote:

> On 18.5.2018 at 01:36, Eric Wheeler wrote:
> > Hello all,
> > 
> > Is there a technical reason that DATA_DEV_BLOCK_SIZE_MIN_SECTORS is
> > limited to 64k?
> > 
> > I realize that the metadata limits the maximum mappable pool size, so it
> > needs to be bigger for big pools---but it is also the minimum COW size.
> > 
> > Looking at the code this is enforced in pool_ctr() but isn't used anywhere
> > else in the code.  Is it strictly necessary to enforce this minimum?
> > 
> 
> 
> Hi
> 
> 64k was chosen as a compromise between metadata space usage, locking
> contention, kernel memory usage, and overall performance.

I understand the choice.  What I am asking is this: would it be safe to 
let others make their own choice about block size provided they are warned 
about the metadata-chunk-size/pool-size limit tradeoff?

If it is safe, can we relax the restriction?  For example, 16k chunks 
still enables ~4TB pools, but with 1/4th of the CoW IO overhead on heavily 
snapshotted environments.
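
For reference, the back-of-envelope arithmetic behind a figure like that
(treating both numbers as ballpark assumptions: the ~16 GiB ceiling on the
thin-pool metadata device, and the ~48 bytes of metadata per data block
suggested as a sizing guide in
Documentation/device-mapper/thin-provisioning.txt):

    16 GiB / 48 B per block  ~= 3.6e8 mappable blocks
    3.6e8 blocks * 16 KiB    ~= 5.3 TiB of data  (~= 21 TiB at 64 KiB blocks)

so a fully mapped pool in the low single-digit TiB range is roughly what a
16 KiB block size supports before the metadata device fills up.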

--
Eric Wheeler



> 
> 
> Regards
> 
> Zdenek
> 
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
> 

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


Re: [dm-devel] dm-thin: Why is DATA_DEV_BLOCK_SIZE_MIN_SECTORS set to 64k?

2018-05-18 Thread Zdenek Kabelac

On 18.5.2018 at 01:36, Eric Wheeler wrote:

> Hello all,
> 
> Is there a technical reason that DATA_DEV_BLOCK_SIZE_MIN_SECTORS is
> limited to 64k?
> 
> I realize that the metadata limits the maximum mappable pool size, so it
> needs to be bigger for big pools---but it is also the minimum COW size.
> 
> Looking at the code this is enforced in pool_ctr() but isn't used anywhere
> else in the code.  Is it strictly necessary to enforce this minimum?




Hi

64k was chosen as a compromise between metadata space usage, locking
contention, kernel memory usage, and overall performance.


If there is a case where using 4K chunks for snapshots gives a major
advantage, the old snapshot target is still available.
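
(For instance, a classic COW snapshot with 4K chunks can be set up either
through LVM or directly with dmsetup; the device and size names below are
placeholders:

    lvcreate -s -c 4k -L 10G -n snap vg/origin
    dmsetup create snap --table "0 <origin_sectors> snapshot <origin_dev> <cow_dev> P 8"

where 8 is the chunk size in 512-byte sectors, i.e. 4 KiB.)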



Regards

Zdenek

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


[dm-devel] dm-thin: Why is DATA_DEV_BLOCK_SIZE_MIN_SECTORS set to 64k?

2018-05-17 Thread Eric Wheeler
Hello all,

Is there a technical reason that DATA_DEV_BLOCK_SIZE_MIN_SECTORS is 
limited to 64k?  

I realize that the metadata limits the maximum mappable pool size, so it 
needs to be bigger for big pools---but it is also the minimum COW size.  

Looking at the code this is enforced in pool_ctr() but isn't used anywhere 
else in the code.  Is it strictly necessary to enforce this minimum?
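
For reference, the check being discussed looks roughly like this
(paraphrased from pool_ctr() in drivers/md/dm-thin.c, not quoted verbatim;
64 KiB is 128 sectors and the upper bound is 1 GiB):

    if (kstrtoul(argv[2], 10, &block_size) || !block_size ||
        block_size < DATA_DEV_BLOCK_SIZE_MIN_SECTORS ||
        block_size > DATA_DEV_BLOCK_SIZE_MAX_SECTORS ||
        block_size & (DATA_DEV_BLOCK_SIZE_MIN_SECTORS - 1)) {
            ti->error = "Invalid block size";
            r = -EINVAL;
            goto out;
    }

so the same constant also forces the block size to be a multiple of 64k,
but nothing outside this constructor-time validation appears to depend
on it.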

Thanks for your help!

--
Eric Wheeler

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel