Hi,
Is it really identical, though?
In the places where we use sync=disabled (e.g. analysis scratch areas),
we're perfectly happy to lose the last X seconds/minutes of writes,
and our understanding is that on-disk consistency is not affected.
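
To be clear, it's a per-dataset switch for us, roughly like this (tank/scratch
is just an example dataset name):

  # relax sync semantics on a scratch dataset only
  zfs set sync=disabled tank/scratch
  # verify
  zfs get sync tank/scratch
  # back to the default later if needed
  zfs set sync=standard tank/scratch
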
Cheers,
Dan

On Mon, Nov 12, 2018 at 3:16 PM Kevin Olbrich <[email protected]> wrote:
>
> Hi Dan,
>
> ZFS with sync disabled would be very much like ext2, ext4 without a journal,
> or XFS with barriers disabled.
> The ARC cache in ZFS is awesome, but disabling sync on ZFS is very risky
> (using ext4 with the KVM cache mode "unsafe" would be similar, I think).
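> The KVM analogue, just as an illustrative sketch (the pool/image name below
> is a placeholder), would be running the guest with flushes ignored:
>
>   # cache=unsafe drops guest flush requests, much like sync=disabled does for ZFS
>   # rbd/vm-disk-1 is a placeholder image name; other VM options omitted
>   qemu-system-x86_64 -drive file=rbd:rbd/vm-disk-1,format=raw,cache=unsafe ...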
>
> Also, ZFS only works as expected with the I/O scheduler set to noop, as it is 
> optimized to consume whole, non-shared devices.
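> On a dedicated device that usually just means something like this (sdX is a
> placeholder for whatever device backs the pool):
>
>   cat /sys/block/sdX/queue/scheduler         # show the active scheduler
>   echo noop > /sys/block/sdX/queue/scheduler # as root; use "none" on blk-mq kernels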
>
> Just my 2 cents ;-)
>
> Kevin
>
>
> On Mon, Nov 12, 2018 at 3:08 PM Dan van der Ster 
> <[email protected]> wrote:
>>
>> We've done ZFS on RBD in a VM, exported via NFS, for a couple of years.
>> It's very stable, and if your use-case permits, you can set zfs
>> sync=disabled to get very fast write performance that's tough to beat.
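>> Inside the VM it's roughly nothing more than this (device and dataset names
>> are placeholders):
>>
>>   # build a pool on the RBD-backed virtual disk and share a dataset over NFS
>>   zpool create tank /dev/vdb
>>   zfs create tank/data
>>   zfs set sharenfs=on tank/data
>>   # optional, only if losing the last few seconds of writes is acceptable
>>   zfs set sync=disabled tank/data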
>>
>> But if you're building something new today and have *only* the NAS
>> use-case, then it would make more sense to try CephFS first and see
>> if it works for you.
>>
>> -- Dan
>>
>> On Mon, Nov 12, 2018 at 3:01 PM Kevin Olbrich <[email protected]> wrote:
>> >
>> > Hi!
>> >
>> > ZFS won't play nicely on Ceph. The best option would be to mount CephFS 
>> > directly with the ceph-fuse driver on the endpoint.
>> > If you definitely want to put a storage gateway between the data and the 
>> > compute nodes, then go with nfs-ganesha, which can export CephFS directly 
>> > without a local ("proxy") mount.
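>> > A direct client mount is just something like this (the monitor name is a
>> > placeholder):
>> >
>> >   ceph-fuse -m ceph-mon1:6789 /mnt/cephfs
>> >
>> > and an nfs-ganesha export of CephFS looks roughly like the following minimal
>> > sketch (exact options depend on your ganesha version):
>> >
>> >   EXPORT {
>> >       Export_Id = 1;
>> >       Path = "/";
>> >       Pseudo = "/cephfs";
>> >       Access_Type = RW;
>> >       FSAL {
>> >           Name = CEPH;
>> >       }
>> >   }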
>> >
>> > I had such a setup with NFS and switched to mounting CephFS directly. If 
>> > you serve the same data via NFS, you must make sure your HA works well to 
>> > avoid data corruption.
>> > With ceph-fuse you connect directly to the cluster, so that's one less 
>> > component that can break.
>> >
>> > Kevin
>> >
>> > On Mon, Nov 12, 2018 at 12:44 PM Premysl Kouril 
>> > <[email protected]> wrote:
>> >>
>> >> Hi,
>> >>
>> >>
>> >> We are planning to build a NAS solution which will be used primarily via 
>> >> NFS and CIFS, with workloads ranging from various archival applications to 
>> >> more “real-time” processing. The NAS will not be used as block storage 
>> >> for virtual machines, so access will always be file-oriented.
>> >>
>> >>
>> >> We are primarily considering two designs, and I’d like to kindly ask for 
>> >> any thoughts, views, insights, or experiences.
>> >>
>> >>
>> >> Both designs use distributed storage software at some level. Both 
>> >> would be built from commodity servers and should scale as we 
>> >> grow. Both involve virtualization for instantiating "access 
>> >> virtual machines" which will serve the NFS and CIFS protocols, so in 
>> >> this sense the access layer is decoupled from the data layer itself.
>> >>
>> >>
>> >> The first design is based on a distributed filesystem like Gluster or CephFS. 
>> >> We would deploy this software on the commodity servers, mount the 
>> >> resulting filesystem on the “access virtual machines”, and serve it from 
>> >> there via NFS/CIFS.
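>> >> Concretely, each access VM would do something along these lines (host names,
>> >> paths and networks below are just placeholders):
>> >>
>> >>   # mount the distributed filesystem (CephFS in this example), then re-export it
>> >>   mount -t ceph ceph-mon1:6789:/ /srv/nas -o name=nas,secretfile=/etc/ceph/nas.secret
>> >>   echo "/srv/nas 10.0.0.0/24(rw,no_root_squash)" >> /etc/exports
>> >>   exportfs -ra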
>> >>
>> >>
>> >> The second design is based on distributed block storage using Ceph. We 
>> >> would build the distributed block storage on the commodity servers, and 
>> >> then, via virtualization (e.g. OpenStack Cinder), we would attach 
>> >> block volumes to the access VM. Inside the access VM we would deploy 
>> >> ZFS, which would aggregate the block devices into a single filesystem, and 
>> >> this filesystem would be served via NFS/CIFS from the very same VM.
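>> >> For example, inside the access VM (device and dataset names below are
>> >> placeholders):
>> >>
>> >>   # aggregate the attached Cinder volumes into a single pool, export a dataset
>> >>   zpool create nas /dev/vdb /dev/vdc /dev/vdd
>> >>   zfs create -o sharenfs=on nas/projects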
>> >>
>> >>
>> >> Any advice and insights would be highly appreciated.
>> >>
>> >>
>> >> Cheers,
>> >>
>> >> Prema
>> >>
>> >
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
