Thanks for your thorough answer and explanation. I have gone with the direct LUN to start with. I bought new SATA disks and added them to the storage chassis for the purpose of file sharing.
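For anyone following the thread later: once a direct LUN shows up inside the guest as a VirtIO-SCSI disk, preparing it is ordinary block-device work. A minimal sketch; the device name /dev/sdb and mount point /srv/share are assumptions, so check lsblk first:

```shell
# Identify the VirtIO-SCSI disk first; /dev/sdb below is an assumption.
lsblk -o NAME,SIZE,TYPE,MODEL

# Partition and format the LUN (destructive - double-check the device!)
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary xfs 0% 100%
mkfs.xfs /dev/sdb1

# Mount it where the file-sharing services (NFS/SMB) will export from
mkdir -p /srv/share
mount /dev/sdb1 /srv/share
```

From there the LUN behaves like any local SCSI disk, so the guest's file-sharing services can export /srv/share as usual.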
Regards, Niklas

> On 17 jul 2014, at 14:33, "Daniel Helgenberger"
> <daniel.helgenber...@m-box.de> wrote:
>
> Hello,
>
> for some reason the message was flagged as spam, so there was a delay.
>
> oVirt supports direct LUNs. These LUNs are often already partitions of
> some RAID enclosure.
> AFAIK the MSA60 is a JBOD. You can use the P411 controller to create
> your partitions / LUNs.
>
> The Virtio-SCSI paravirt driver supports a wide range of (= most) SCSI
> commands. This way clients can access them as 'real' SCSI devices.
>
> If you partitioned the LUN with parted, the client(s) will also see
> these partitions, along with the file systems on them.
>
> As you might know, you cannot have a 'normal' file system mounted rw on
> several machines at once; you need a cluster file system, see the examples
> in [1]. There are several open source cluster file systems around.
>
> Also, nothing stops you from mounting a file system read-only on several
> hosts.
>
> One note on the subject, though:
> I consider shared-disk file systems an 'old' approach. I support them in
> our setup for historical reasons.
> In a new deployment today I would tend to use more 'modern' scale-out
> file systems like GlusterFS (support in oVirt is quite good) or
> Ceph/RADOS as an object store. Using the native clients, you
> basically get a shared-disk file system with fewer bottlenecks (MDCs in
> a shared-disk fs). Also, both examples have APIs - an application using
> them can benefit greatly in performance. Again, I consider using APIs
> for file storage the approach of the future.
>
> If you need to attach NFS / CIFS clients, you can always reshare these
> file systems and use (p)NFS or CTDB [2] if you want to cluster this.
>
> Note, with one host and one JBOD this makes little sense to me.
>
> [1] http://en.wikipedia.org/wiki/Clustered_file_system
> [2] https://ctdb.samba.org/
>
>> On Mon, 2014-07-14 at 08:10 +0000, Niklas Fondberg wrote:
>> Thanks, after reading it makes sense.
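To make the GlusterFS suggestion above concrete, mounting a volume with the native FUSE client looks roughly like this. The server name gluster1 and volume name vol0 are placeholders, and the package name is from the CentOS 6 era:

```shell
# Install the native (FUSE) client; package name as on CentOS 6
# (an assumption for other distros).
yum install -y glusterfs-fuse

# Mount the volume read-write on one client...
mkdir -p /mnt/shared
mount -t glusterfs gluster1:/vol0 /mnt/shared

# ...and read-only on any number of additional clients
mount -t glusterfs -o ro gluster1:/vol0 /mnt/shared
```

Unlike a shared-disk file system, every client talks to the Gluster bricks directly, so there is no single metadata controller to bottleneck on.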
>> I suppose I need to drop my hope of
>> having a sharing possibility with the host.
>> I have two questions that you might be able to answer:
>> 1. Does direct LUN support partitions, or only whole devices?
>> 2. Do you know of any open source way of making them shareable to oVirt?
>>
>> My setup is simple:
>> - HP DL380 with dual Xeon and lots of RAM. Boots from a separate disk (USB)
>> - MSA60 with P411 attached to the HP DL380
>>
>> When we grow oVirt we will grow with DL360s and use all shared storage
>> from the guest file server on the first host.
>>
>> On 14/07/14 08:57, "Daniel Helgenberger" <daniel.helgenber...@m-box.de>
>> wrote:
>>
>>> Hello,
>>>
>>> just to add my 2ct: I did a lot of benchmarking for our SAN (FC LUNs). I
>>> also need file servers for our SMB clients.
>>>
>>> I recommend using direct-attached LUNs for your purpose and attaching them
>>> to the VMs as VirtIO-SCSI disks. You can even add them as shareable in
>>> oVirt if you deploy some kind of SAN file system (we use Quantum's
>>> StorNext).
>>>
>>> Bottom line, the implementation of VirtIO-SCSI is so well done, and
>>> support in oVirt is great. I could not see any bottlenecks in the
>>> virtualization. For the foreseeable future I will not deploy bare-metal
>>> file servers again.
>>>
>>> HTH,
>>>
>>>> On Sun, 2014-07-13 at 15:47 +0000, Niklas Fondberg wrote:
>>>>
>>>> From: Karli Sjöberg <karli.sjob...@slu.se>
>>>> Date: Sunday 13 July 2014 14:51
>>>> To: Niklas Fondberg <nik...@vireone.com>
>>>> Cc: "users@ovirt.org" <users@ovirt.org>, Karli Sjöberg <karli.sjob...@slu.se>
>>>> Subject: Re: [ovirt-users] fileserver as a guest oVirt
>>>>
>>>> On 12 jul 2014 at 22:49, Niklas Fondberg <nik...@vireone.com> wrote:
>>>>>
>>>>>> On 12 jul 2014, at 16:57, "Karli Sjöberg" <karli.sjob...@slu.se> wrote:
>>>>>
>>>>>> On 12 jul 2014 at 15:45, Niklas Fondberg <nik...@vireone.com> wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm new to oVirt, but I must say I am impressed!
>>>>>>> I am running it on an HP DL380 with an external SAS chassis.
>>>>>>> The Linux dist is CentOS 6.5 and oVirt is 3.4, running all-in-one (for
>>>>>>> now, until we need to have a second host).
>>>>>>>
>>>>>>> Our company (www.vireone.com) deals with system architecture for
>>>>>>> many telco and media operators and is now setting up a small
>>>>>>> datacenter of our own for internal tests as well as our IT
>>>>>>> infrastructure.
>>>>>>> We are in the process of installing Zentyal for SMB purposes
>>>>>>> on a guest, and it would be great to have that guest also serve a
>>>>>>> directory path over NFS + SMB (which is semi-crippled on the host
>>>>>>> after the oVirt installation, with SMB version 3 etc.).
>>>>>>>
>>>>>>> Does anyone have an idea of how I can, through oVirt (I have seen
>>>>>>> several solutions using virsh and kvm), let my Zentyal Ubuntu guest
>>>>>>> have access to a host mount point or, if necessary (second best), a
>>>>>>> separate partition?
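The virtio-9p approach Niklas brings up further down the thread would, on the guest side, look roughly like this. The mount tag hostshare is hypothetical and would have to be configured on the hypervisor side; oVirt itself does not expose 9p configuration, so that part would need a VDSM hook or custom qemu arguments:

```shell
# Guest side only; assumes 9p support in the guest kernel and a mount tag
# "hostshare" configured on the host (hypothetical - oVirt does not set
# this up for you).
modprobe 9pnet_virtio
mkdir -p /mnt/host
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host
```

This gives the guest direct access to a host directory without a virtual disk in between, at the cost of stepping outside what oVirt manages for you.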
>>>>>>>
>>>>>>> Best regards
>>>>>>> Niklas
>>>>>>
>>>>>> Why not just give the guest a thin-provisioned virtual hard drive and
>>>>>> expand it on demand?
>>>>>>
>>>>>> /K
>>>>>
>>>>> Thanks for the advice, but this would not suit us I'm afraid. It
>>>>> would be difficult wrt incremental backups as well as host machine
>>>>> file routines.
>>>>
>>>> Well, going by Occam's razor, the simplest answer is usually correct.
>>>> I can't really tell what you mean by file routines, but backups would be
>>>> well served by snapshots (you can't get more incremental than that), and
>>>> disaster recovery could be as easy as an rsync from inside the guest to a
>>>> remote machine.
>>>>
>>>> The biggest pro here is the ease of being able to set up an export
>>>> domain, attach it, export the VM, detach the domain, and then attach and
>>>> import into a "real" setup when the AIO starts feeling crowded later on.
>>>> Thinking ahead is never a bad thing, no?
>>>>
>>>> /K
>>>>
>>>> Thanks for your suggestions!
>>>> The thing is also that performance will be very bad if we share the
>>>> 25TB SAS array for our purposes (lots of media streaming) using a
>>>> virtual disk.
>>>> What I am after (after more reading) is support for virtio-9p-pci
>>>> (http://www.linux-kvm.org/page/9p_virtio) in oVirt. An alternative is
>>>> the direct LUN hook (http://www.ovirt.org/VDSM-Hooks/directlun, if I can
>>>> figure out how to work with hooks...).
>>>> Any chance anybody has an answer to these questions?
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> --
>>>
>>> Daniel Helgenberger
>>> m box bewegtbild GmbH
>>>
>>> P: +49/30/2408781-22
>>> F: +49/30/2408781-10
>>>
>>> ACKERSTR.
>>> 19
>>> D-10115 BERLIN
>>>
>>> www.m-box.de www.monkeymen.tv
>>>
>>> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
>>> Handelsregister: Amtsgericht Charlottenburg / HRB 112767