-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24692/#review50562
-----------------------------------------------------------

Ship it!


Ship It!

- Joris van Lieshout


On Aug. 14, 2014, 7:51 a.m., Brenn Oosterbaan wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24692/
> -----------------------------------------------------------
> 
> (Updated Aug. 14, 2014, 7:51 a.m.)
> 
> 
> Review request for cloudstack, daan Hoogland, Joris van Lieshout, and Sanjay 
> Tripathi.
> 
> 
> Bugs: CLOUDSTACK-7345
>     https://issues.apache.org/jira/browse/CLOUDSTACK-7345
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> -------
> 
> CLOUDSTACK-7345: patch to change the dd block size from 4M to 128k when using NFS
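> 
> For reference, the change amounts to lowering the block size passed to dd when 
> copying snapshot data to NFS secondary storage. A minimal before/after sketch 
> (paths and device names are placeholders, not lines from the actual 
> vmopsSnapshot script) looks like:
> 
>     # before: 4M blocks, which the NFS client splits into many 64k wire ops
>     dd if=/dev/<primary-vhd-device> of=/mnt/secondary/<snapshot>.vhd bs=4M
> 
>     # after: 128k blocks, far fewer NFS operations per dd block (see Testing)
>     dd if=/dev/<primary-vhd-device> of=/mnt/secondary/<snapshot>.vhd bs=128k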
> 
> 
> Diffs
> -----
> 
>   scripts/vm/hypervisor/xenserver/vmopsSnapshot 85444dc 
> 
> Diff: https://reviews.apache.org/r/24692/diff/
> 
> 
> Testing
> -------
> 
> Starting multiple dd commands with bs=4M on a single hypervisor causes 'nfs 
> server timed out' messages in /var/log/kern.log and causes the dd processes to 
> crash.
> With bs=4M, NFS debugging shows 64 separate 64k reads are done, after which 
> the inode is updated 64 times (access time) and 32 128k writes are done to 
> secondary storage.
> NFS debugging showed the 'nfs server timed out' messages usually occurred 
> during the 64 inode updates (of the same file) from the read process.
> 
> Running multiple dd commands with bs=128k on a single hypervisor does not 
> cause 'nfs server timed out' messages in /var/log/kern.log.
> With bs=128k, NFS debugging shows 2 separate 64k reads are done (and inode 
> updates) and 1 128k write is done.
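> 
> As a rough illustration of the test above (the commands are indicative only, 
> not the exact procedure used; the real test copied snapshot VHDs rather than 
> /dev/zero), the behaviour can be observed by starting several concurrent dd 
> copies to an NFS-mounted secondary store while watching kern.log:
> 
>     # enable NFS client debug logging (rpcdebug is part of nfs-utils)
>     rpcdebug -m nfs -s all
> 
>     # start several concurrent copies with the old block size
>     for i in 1 2 3 4; do
>         dd if=/dev/zero of=/mnt/secondary/test$i bs=4M count=256 &
>     done
> 
>     # watch for 'nfs: server ... not responding, timed out' messages
>     tail -f /var/log/kern.log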
> 
> We are running this change in production. The 'nfs server timed out' messages 
> are gone and not a single snapshot process has failed since (previously this 
> happened every night).
> 
> 
> Thanks,
> 
> Brenn Oosterbaan
> 
>
