Re: [Openstack] [Sheepdog][Libvirt][Qemu]Add a new block storage driver by Libvirt/Qemu way for Openstack

2013-01-25 Thread MORITA Kazutaka
At Fri, 25 Jan 2013 19:05:06 +0800,
harryxiyou wrote:
 
 On Sat, Jan 19, 2013 at 10:04 PM, MORITA Kazutaka
 morita.kazut...@gmail.com wrote:
 [...]
  If you do the above work, I think you can use your file system with
  OpenStack.
 
  But I suggest doing them step by step.  If your file system is not
  supported in QEMU, I think libvirt won't support it.  If libvirt
  doesn't support it, OpenStack shouldn't support it too.
 
 
 Hi Morita,
 
 If I just want to test the Sheepdog driver in libvirt separately (without
 QEMU and OpenStack), how should I do that? Suppose I want to verify that
 the Sheepdog driver you added works correctly in libvirt. Could you please
 give me some suggestions? Thanks in advance ;-)

Libvirt documentation contains an example XML format for Sheepdog.
  http://libvirt.org/formatdomain.html#elementsDisks
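For illustration, a minimal disk element of the form that page documents
might look like the following (the volume name "myvolume" and the sheep
daemon address are placeholders; port 7000 is Sheepdog's default):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='sheepdog' name='myvolume'>
      <host name='localhost' port='7000'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>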

You're CCing too many lists, which is not appropriate for your
question.  Please ask Sheepdog questions on the Sheepdog users mailing
list.
  sheepdog-us...@lists.wpkg.org

Thanks,

Kazutaka



Re: [Openstack] [Openstack-dev][Sheepdog]Add a new driver for Openstack Cinder like Sheepdog volumes

2013-01-19 Thread MORITA Kazutaka
At Sat, 19 Jan 2013 13:14:42 +0800,
harryxiyou wrote:
 
 On Sat, Jan 19, 2013 at 12:24 PM, MORITA Kazutaka
 morita.kazut...@gmail.com wrote:
  At Fri, 18 Jan 2013 22:56:38 +0800,
 [...]
 
  The answer depends on the protocol between QEMU and HLFS.  What is
  used for accessing HLFS volumes from QEMU?  Is it iSCSI, NFS, or
  something else?
 
 
 Actually, we just implement the block driver interfaces that QEMU
 provides. You can see our patch at
 http://code.google.com/p/cloudxy/source/browse/trunk/hlfs/patches/hlfs_driver_for_qemu.patch
 
 And what about Sheepdog? What is used for accessing Sheepdog volumes
 from QEMU? Is it iSCSI, NFS, or something else?

Sheepdog uses its own protocol, and I think your file system is similar.

IIUC, what you need to do is:
 1. modify libvirt so that you can specify your file system as a QEMU
disk
 2. add a volume driver to Cinder to handle your file system

You don't need to modify Nova.  You can use
nova.virt.libvirt.volume.LibvirtNetVolumeDriver as the libvirt volume
driver, just as sheepdog and rbd do.

Thanks,

Kazutaka
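As a rough sketch of step 2 on the Cinder side, modeled on the existing
Sheepdog driver: the class name, the 'hlfs' protocol string, and the file
system itself are hypothetical, and this assumes QEMU and libvirt have
already been taught the new protocol.

from cinder.volume import driver


class HLFSDriver(driver.VolumeDriver):
    """Hypothetical Cinder volume driver for an HLFS-like file system."""

    def initialize_connection(self, volume, connector):
        # Nova's LibvirtNetVolumeDriver turns this dictionary into a
        # <disk type='network'> element, as it does for sheepdog and rbd.
        return {
            'driver_volume_type': 'hlfs',  # hypothetical protocol name
            'data': {'name': volume['name']},
        }

    def terminate_connection(self, volume, connector, **kwargs):
        pass

With a driver shaped like this, Nova itself should need no new code, since
LibvirtNetVolumeDriver already handles network-protocol disks generically
(possibly requiring only a configuration mapping for the new protocol).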



Re: [Openstack] [Sheepdog][Libvirt][Qemu]Add a new block storage driver by Libvirt/Qemu way for Openstack

2013-01-19 Thread MORITA Kazutaka
At Sat, 19 Jan 2013 16:47:37 +0800,
harryxiyou wrote:
 
 Hi all,
 
 I want to add a new block storage driver for OpenStack through the
 libvirt/QEMU path, in the same way as the Sheepdog driver. My
 understanding of the flow is as follows.
 
 1. In the OpenStack Nova tree, the OpenStack driver calls the libvirt
 client and passes parameters to it. (For this step, I should modify the
 Nova source code:
 a. nova/nova/virt/libvirt/driver.py: add the new driver path
 b. /OpenStack/nova/nova/tests/test_libvirt.py: add a test for the new
 driver)
 
 2. Using its own protocol, the libvirt client in the Nova tree sends the
 parameters to the libvirt server. (For this step, I should modify the
 libvirt library so that it supports the new driver, as it does Sheepdog.)
 
 3. The libvirt server calls QEMU interfaces to pass the parameters to
 QEMU. (For this step, I should modify the QEMU source code so that QEMU
 supports the new driver, as it does Sheepdog.)
 
 4. In the OpenStack Cinder tree, the OpenStack driver uses QEMU commands
 to create the new volumes. (For this step, I should modify the Cinder
 source code:
 a. add a new driver file
 /OpenStack/cinder/cinder/volume/drivers/new_driver.py, like sheepdog.py
 b. change /OpenStack/cinder/cinder/tests/test_drivers_compatibility.py
 to test the new driver.)
 
 5. Finally, I should also modify
 /OpenStack/manuals/doc/src/docbkx/openstack-compute-admin/tables/hypervisors-nova-conf.xml
 to document the configuration of the new driver.
 
 Is this understanding correct? Is there anything else I should do? Could
 anyone offer further suggestions?

If you do the above work, I think you can use your file system with
OpenStack.

But I suggest doing it step by step.  If your file system is not
supported in QEMU, I think libvirt won't support it; and if libvirt
doesn't support it, OpenStack shouldn't support it either.

Thanks,

Kazutaka
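Continuing the hypothetical HLFSDriver sketched in the earlier thread,
step 4a might create volumes by shelling out to qemu-img, following the
pattern in the existing sheepdog.py (the 'hlfs:' prefix is a placeholder
for whatever protocol scheme the new QEMU block driver from step 3 would
register):

from cinder.volume import driver


class HLFSDriver(driver.VolumeDriver):

    def create_volume(self, volume):
        # Placeholder 'hlfs:' prefix, analogous to 'sheepdog:%s' in the
        # existing sheepdog.py; _try_execute and _sizestr are helpers
        # inherited from the VolumeDriver base class.
        self._try_execute('qemu-img', 'create',
                          'hlfs:%s' % volume['name'],
                          self._sizestr(volume['size']))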



Re: [Openstack] [Openstack-dev][Sheepdog]Add a new driver for Openstack Cinder like Sheepdog volumes

2013-01-18 Thread MORITA Kazutaka
At Fri, 18 Jan 2013 22:56:38 +0800,
harryxiyou wrote:
 
 Hi Morita and other developers,
 
 If I add a QEMU/libvirt driver (like the Sheepdog volume driver in the
 OpenStack Cinder tree) so that OpenStack Cinder supports a new
 block-level storage system, I should change the following, right?
 
 1. Add a driver file to this directory in the Cinder tree (like
 sheepdog.py):
   https://github.com/openstack/cinder/blob/master/cinder/volume/drivers
 2. Change this file in the Nova tree (to let libvirt attach HLFS volumes
 to QEMU, as it does for Sheepdog):
   https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
 
 Should I change or add any other files for OpenStack? Could anyone give
 me some suggestions? Thanks in advance ;-)

The answer depends on the protocol between QEMU and HLFS.  What is
used for accessing HLFS volumes from QEMU?  Is it iSCSI, NFS, or
something else?

Thanks,

Kazutaka



Re: [Openstack] [Swift] rsyslog daemon reloading causes swift related services hangs and CPU reach to 100%

2012-06-21 Thread MORITA Kazutaka
At Fri, 22 Jun 2012 00:00:26 +0800,
Kuo Hugo wrote:
 
 Hi folks,
 
 We're facing an issue related to the bug below:
 
 /dev/log rotations can cause object-server failures
 https://bugs.launchpad.net/swift/+bug/780025
 
 My Swift version: 1.4.9
 
 I found that it affects not only the object-server but all Swift-related
 workers that log through rsyslog. There's an easy way to reproduce it:
 1. Run swift-bench.
 2. Restart or stop rsyslog while swift-bench is running.
 
 You can see that CPU usage reaches 100%.
 
 Should this be filed as a separate bug? If so, I can file it.
 
 Is there any way to improve this behavior? I expect all Swift workers to
 keep working even if rsyslog dies or restarts.

I've faced the same problem and found that it was a bug in the Python
logging module.  I think the following patch against the module would
solve the problem.

diff --git a/logging/handlers.py b/logging/handlers.py
index 756baf0..d2a042a 100644
--- a/logging/handlers.py
+++ b/logging/handlers.py
@@ -727,7 +727,11 @@ class SysLogHandler(logging.Handler):
         except socket.error:
             self.socket.close()
             self.socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
-            self.socket.connect(address)
+            try:
+                self.socket.connect(address)
+            except socket.error:
+                self.socket.close()
+                raise
 
     # curious: when talking to the unix-domain '/dev/log' socket, a
     #   zero-terminator seems to be required.  this string is placed
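For reference, a minimal way to exercise the same code path outside Swift
(a hypothetical repro sketch; run it while stopping or restarting the
local rsyslog):

import logging
import logging.handlers

# Log continuously to the /dev/log UNIX socket. When the syslog daemon
# is restarted mid-run, emit() hits the socket.error path shown in the
# patch above. Unpatched, the failed STREAM reconnect leaves the dead
# socket open and, in Swift's workers, was reported to spin the CPU to
# 100%; patched, the socket is closed and the error propagates cleanly.
logger = logging.getLogger('syslog-repro')
logger.addHandler(logging.handlers.SysLogHandler(address='/dev/log'))

while True:
    logger.error('still logging')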



Re: [Openstack] lunr reference iSCSI target driver

2011-05-03 Thread MORITA Kazutaka
At Tue, 03 May 2011 12:19:50 -0700,
Josh Durgin wrote:
 
 On 05/02/2011 01:46 PM, Chuck Thier wrote:
  This leads to another interesting question.  While our reference
  implementation may not directly expose snapshot functionality, I imagine
  other storage implementations could want to. I'm interested to hear what use
  cases others would be interested in with snapshots.  The obvious ones are
  things like creating a volume based on a snapshot, or rolling a volume back
  to a previous snapshot.  I would like others' input here to shape what the
  snapshot API might look like.
 
 For RBD we only need the obvious ones:
 - create/list/remove snapshots
 - create volume from a snapshot
 - rollback to a snapshot

These are the same for Sheepdog too.

Thanks,

Kazutaka
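A hypothetical sketch of what that minimal snapshot interface could look
like as a backend driver contract; the class and method names below are
illustrative only, not an existing OpenStack API:

class SnapshotOperations(object):
    """Illustrative contract covering the cases listed above."""

    def create_snapshot(self, volume_id, snapshot_name):
        raise NotImplementedError

    def list_snapshots(self, volume_id):
        raise NotImplementedError

    def delete_snapshot(self, volume_id, snapshot_name):
        raise NotImplementedError

    def create_volume_from_snapshot(self, snapshot_name, new_volume_id):
        raise NotImplementedError

    def rollback_to_snapshot(self, volume_id, snapshot_name):
        raise NotImplementedError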
