Reviewed:  https://review.openstack.org/616580
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=14d98ef1b48ca7b2ea468a8f1ec967b954955a63
Submitter: Zuul
Branch:    master

commit 14d98ef1b48ca7b2ea468a8f1ec967b954955a63
Author: Jens Harbott <j.harb...@x-ion.de>
Date:   Thu Nov 8 15:06:26 2018 +0000

    Make supports_direct_io work on 4096b sector size
    
    The current check uses an alignment of 512 bytes and will fail when the
    underlying device has sectors of size 4096 bytes, as is common e.g. for
    NVMe disks. So use an alignment of 4096 bytes, which is a multiple of
    512 bytes and thus will cover both cases.
    
    Change-Id: I5151ae01e90506747860d9780547b0d4ce91d8bc
    Closes-Bug: 1801702
    Co-Authored-By: Alexandre Arents <alexandre.are...@corp.ovh.com>


** Changed in: nova
       Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1801702

Title:
  Spawn may fail when cache=none on block device with logical block size
  > 512

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Triaged
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  Description
  ===========
  When we spawn instances without caching (cache='none') on a file system,
  there is a check in the nova code that tests whether the file system supports
  direct I/O:
  https://github.com/openstack/nova/blob/master/nova/privsep/utils.py#L34
  Because this test uses a 512-byte alignment size, it seems to fail on newer
  block devices that have a logical block size > 512 bytes, like NVMe:

  parted /dev/nvme0n1 print | grep "Sector size"
  Sector size (logical/physical): 4096B/4096B
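
  As a cross-check, the logical block size can also be read from sysfs. A
  minimal Python sketch (the device name 'nvme0n1' and the helper name are
  only illustrative, not part of nova):

    import os

    def logical_block_size(dev):
        # Standard sysfs attribute exposing a block device's logical sector size.
        path = os.path.join('/sys/block', dev, 'queue', 'logical_block_size')
        with open(path) as f:
            return int(f.read().strip())

    print(logical_block_size('nvme0n1'))  # 4096 on the NVMe disk above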

  The reason seems to be that the alignment size for direct I/O must be a
  multiple of the logical block size of the underlying device (not of the file
  system block size), as explained here:

  http://man7.org/linux/man-pages/man2/open.2.html
   O_DIRECT
         ...
         Under Linux 2.4, transfer sizes, and the alignment of the user buffer
         and the file offset must all be multiples of the logical block size
         of the filesystem.  Since Linux 2.6.0, alignment to the logical block
         size of the underlying storage (typically 512 bytes) suffices
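
  For illustration, a probe along these lines can look like the sketch below.
  This is a simplified stand-in, not nova's exact supports_direct_io code; the
  name probe_direct_io and the align_size parameter are made up for this
  example. On a 4096-byte-sector device an O_DIRECT write of 512 bytes is
  rejected with EINVAL, while a 4096-byte write succeeds:

    import errno
    import mmap
    import os

    def probe_direct_io(dirpath, align_size):
        # O_DIRECT needs the buffer address, file offset and transfer size all
        # aligned to the logical block size of the underlying device.
        if not hasattr(os, 'O_DIRECT'):
            return False
        testfile = os.path.join(dirpath, '.directio.test')
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            # Anonymous mmap memory is page-aligned (>= 4096 bytes), so the
            # buffer address is fine; the transfer size is what varies here.
            buf = mmap.mmap(-1, align_size)
            buf.write(b'\x00' * align_size)
            os.write(fd, buf)
            return True
        except OSError as exc:
            if exc.errno == errno.EINVAL:
                return False  # kernel rejected the misaligned direct write
            raise
        finally:
            if fd is not None:
                os.close(fd)
            try:
                os.unlink(testfile)
            except OSError:
                pass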

  Because this check fails, nova falls back to cache="writethrough", which has
  the following consequences:
  1) qemu runs without direct I/O even though the device/fs does support it
  (just with a larger block size)
  2) qemu fails to start, because cache=writethrough may conflict with other
  device parameters like "io=native", with the following message:

  2018-08-22 20:50:41.226 80512 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1065, in createWithFlags
  2018-08-22 20:50:41.226 80512 ERROR oslo_messaging.rpc.server     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
  2018-08-22 20:50:41.226 80512 ERROR oslo_messaging.rpc.server libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads

  
  Steps to reproduce
  ==================
  To reproduce the spawn issue:
  have instances on a file system backed by a block device with a logical block
  size > 512 bytes (typically NVMe with a 4096 or 8192 byte sector size), and
  nova.conf with:
  images_type=raw
  preallocate_images=space

  Solution
  ========
  Can we consider increasing align_size from 512b to 8192b, as that would work
  in most cases?
  Is there any other reason to keep 512b?

  Setting it to 4096 or 8192 fixes the issue in my environment.
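
  With a probe like the sketch above, the difference is easy to demonstrate on
  such a device (hypothetical usage; the instances path is just an example):

    # directory backed by the 4096-byte-sector NVMe device
    probe_direct_io('/var/lib/nova/instances', 512)    # False (EINVAL)
    probe_direct_io('/var/lib/nova/instances', 4096)   # True
    probe_direct_io('/var/lib/nova/instances', 8192)   # True, also a multiple of 4096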

  Environment
  ===========
  I hit the issue on newton, but the same check with 512b exists on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1801702/+subscriptions
