Re: [Gluster-users] SMB copies failing with GlusterFS 3.10

2017-11-28 Thread Alastair Neil
What is the volume configuration: is it replicated, distributed,
distribute-replicate, or disperse?

Have you tried setting performance.strict-write-ordering to on?
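
If it helps, a rough sketch of what I'd run first (the volume name "myvol"
is just a placeholder):

  # show the volume type, brick layout and reconfigured options
  gluster volume info myvol
  # hedged suggestion only - test on a non-production volume first
  gluster volume set myvol performance.strict-write-ordering on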

On 14 November 2017 at 06:24, Brett Randall  wrote:

> Hi all
>
> We've got a brand new 6-node GlusterFS 3.10 deployment (the previous
> 20-node cluster was on GlusterFS 3.6). It runs on CentOS 7 using the
> official repos, so glusterfs-3.10.7-1.el7.x86_64 is the base.
>
> Our issue is that when we create a file on the share from a client, e.g. a
> Mac or Windows machine, it works fine. However, if we copy a file from a
> Mac or Windows machine to the Samba share, it fails with a complaint that
> the file already exists, even though it doesn't (or didn't). It appears as
> though the client first tests whether the file exists (it doesn't), then
> tries to create it, but at that point it DOES exist, maybe something to do
> with the previous stat? Anyway, it is repeatable and killing us! NFS
> clients work fine on any platform, but SMB does not. There aren't many
> client-side options for SMB mounts, so the solution has to be server-side.
>
> Here is a pcap of the copy attempt from one computer:
>
> https://www.dropbox.com/s/yhn3s1qbxtdvnoh/sambacap.pcapng?dl=0
>
> You'll see the request to look for the file, which results in a
> STATUS_OBJECT_NAME_NOT_FOUND (good), followed by a STATUS_SHARING_VIOLATION
> (???), followed by a STATUS_OBJECT_NAME_COLLISION (bad).
>
> Here are the options from the volume:
>
> Options Reconfigured:
> nfs.acl: off
> features.cache-invalidation: on
> storage.batch-fsync-delay-usec: 0
> transport.address-family: inet6
> nfs.disable: on
> performance.stat-prefetch: off
> server.allow-insecure: on
> ganesha.enable: ganesha-nfs
> user.smb: enable
>
> Any thoughts on why samba isn't enjoying our copies?
>
> Thanks!
>
> Brett.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] slow performance with export storage on glusterfs

2017-11-28 Thread Dmitri Chebotarov
Hello

If you use Gluster as a FUSE mount, it's always slower than you expect it
to be.
If you want to get better performance out of your oVirt/Gluster storage,
try the following:

- Create a Linux VM in your oVirt environment and assign it 4/8/12 virtual
disks (the virtual disks are located on your Gluster storage volume).
- Boot and configure the VM, then use LVM to create a VG/LV with 4 stripes
(lvcreate -i 4), using all 4/8/12 virtual disks as PVs.
- Then install an NFS server and export the LV you created in the previous
step; use the NFS export as the export domain in oVirt/RHEV. (A rough
command sketch is below.)

You should get wire speed when you use multiple stripes on Gluster storage;
the FUSE mount on the oVirt host will fan out requests to all 4 servers.
Gluster is very good at distributed/parallel workloads, but when you use a
direct Gluster FUSE mount for the export domain you only have one data
stream, which is fragmented even further by the multiple writes/reads that
Gluster needs to do to save your data on all member servers.
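
A rough sketch of the in-VM steps, assuming 4 virtual disks that show up as
/dev/vdb-/dev/vde (device names, VG/LV names, stripe size and NFS export
options are placeholders, not a tested recipe):

  # use the 4 virtual disks as PVs in one VG
  pvcreate /dev/vdb /dev/vdc /dev/vdd /dev/vde
  vgcreate exportvg /dev/vdb /dev/vdc /dev/vdd /dev/vde
  # stripe the LV across all 4 PVs
  lvcreate -i 4 -I 256k -l 100%FREE -n exportlv exportvg
  mkfs.xfs /dev/exportvg/exportlv
  mkdir -p /export && mount /dev/exportvg/exportlv /export
  # oVirt expects storage domains to be owned by vdsm:kvm (36:36)
  chown 36:36 /export
  # export it over NFS and point the oVirt/RHEV export domain at it
  echo '/export *(rw,sync,no_root_squash)' >> /etc/exports
  systemctl enable --now nfs-server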



On Mon, Nov 27, 2017 at 8:41 PM, Donny Davis  wrote:

> What about mounting over NFS instead of the FUSE client? Or maybe
> libgfapi? Is that available for export domains?
>
> On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka  wrote:
>
>> On 11/24/2017 06:41 AM, Sahina Bose wrote:
>> >
>> >
>> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka wrote:
>> >
>> > Hi,
>> >
>> > On 11/22/2017 07:30 PM, Nir Soffer wrote:
>> > > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka wrote:
>> > >
>> > > Hi,
>> > >
>> > > I am trying to figure out why exporting a VM to export storage on
>> > > glusterfs is so slow.
>> > >
>> > > I am using oVirt and RHV, both installations on version 4.1.7.
>> > >
>> > > Hosts have dedicated NICs for the rhevm network (1 Gbps); the data
>> > > storage itself is on FC.
>> > >
>> > > The GlusterFS cluster lives separately on 4 dedicated hosts. It has
>> > > slow disks, but I can achieve about 200-400 Mbit/s throughput in other
>> > > applications (we are using it for "cold" data, mostly backups).
>> > >
>> > > I am using this glusterfs cluster as the backend for export
>> > > storage. When I export a VM, I see only about 60-80 Mbit/s throughput.
>> > >
>> > > What could be the bottleneck here?
>> > >
>> > > Could it be qemu-img utility?
>> > >
>> > > vdsm  97739  0.3  0.0 354212 29148 ?S>  0:06
>> > > /usr/bin/qemu-img convert -p -t none -T none -f raw
>> > > /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > -O raw
>> > > /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > >
>> > > Any idea how to make it work faster, or what throughput I should
>> > > expect?
>> > >
>> > >
>> > > Gluster storage operations go through the fuse mount, so every write must:
>> > > - travel to the kernel
>> > > - travel back to the gluster fuse helper process
>> > > - travel to all 3 replicas (replication is done on the client side)
>> > > - return to the kernel when all writes have succeeded
>> > > - return to the caller
>> > >
>> > > So gluster will never set any speed record.
>> > >
>> > > Additionally, you are copying from a raw LV on FC, so qemu-img cannot
>> > > do anything smart to avoid copying unused clusters. Instead it copies
>> > > gigabytes of zeros from FC.
>> >
>> > ok, it does make sense
>> >
>> > > However 7.5-10 MiB/s sounds too slow.
>> > >
>> > > I would try to test with dd - how long does it take to copy
>> > > the same image from FC to your gluster storage?
>> > >
>> > > dd
>> > > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
>> > > bs=8M oflag=direct status=progress
>> >
>> > unfortunately dd performs the same
>> >
>> > 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
>> >
>> >
>> > > If dd can do this faster, please ask on qemu-discuss mailing list:
>> > > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
>> > 
>> > >
>> > > If both give similar results, I think asking 

[Gluster-users] move brick to new location

2017-11-28 Thread Bernhard Dübi
Hello everybody,

We have a number of "replica 3 arbiter 1" (i.e. 2 + 1) volumes.
Because we're running out of space on some volumes, I need to optimize
the usage of the physical disks; that means I want to consolidate
volumes with low usage onto the same physical disk. I can do it with
"replace-brick commit force", but that looks a bit drastic to me
because it immediately drops the current brick and rebuilds the new
one from the remaining bricks. Is there a way to build the new brick
in the background and switch the config only once it's fully in sync?
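
For reference, the drastic variant I mean is something like this (volume
name and brick paths are placeholders):

  gluster volume replace-brick myvol old-host:/bricks/myvol/brick \
    new-host:/bricks/myvol/brick commit force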

I was thinking about the following sequence (a rough command sketch is below):
- dropping the arbiter brick => replica 2
- adding a new brick => replica 3
- dropping the old brick => replica 2
- re-adding the arbiter brick => replica 3 arbiter 1
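
A rough sketch of that sequence with the gluster CLI; hostnames, volume
name and brick paths are placeholders, and I'd try it on a scratch volume
first:

  # 1. drop the arbiter, down to plain replica 2
  gluster volume remove-brick myvol replica 2 arb:/bricks/myvol/arbiter force
  # 2. add the new data brick, up to replica 3
  gluster volume add-brick myvol replica 3 new:/bricks/myvol/brick
  # wait for self-heal to finish before dropping anything else
  gluster volume heal myvol info
  # 3. drop the old data brick, back to replica 2
  gluster volume remove-brick myvol replica 2 old:/bricks/myvol/brick force
  # 4. re-add the arbiter (the old arbiter path usually has to be wiped, or
  #    its volume-id xattr cleared, before gluster will accept it again)
  gluster volume add-brick myvol replica 3 arbiter 1 arb:/bricks/myvol/arbiter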

About 20 years ago, I was managing Veritas Volume Manager. To move a
sub-disk (similar to a brick), VVM temporarily upgraded the sub-disk to
a mirrored volume, synced both sides of the mirror, and then downgraded
the construct to the new sub-disk. It was impressive and scary at the
same time, but we never had an outage.

BTW: I'm running Gluster 3.8.15
BTW: new storage is ordered, but the reseller messed up the order and now
we have to wait two months for the delivery

Kind Regards
Bernhard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Upgrading from 3.6.9 to 3.12.x -- intermediate steps required?

2017-11-28 Thread Marius Bergmann
Hi,

I'm running a cluster on 3.6.9 and am planning to upgrade to the latest
3.12.x version. I am able to schedule a downtime for the upgrade.

Do I need to take intermediate steps, i.e. upgrade 3.6 -> 3.8 -> 3.12,
or is upgrading from 3.6.9 -> 3.12.x fine (given the cluster is offline)?
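
Not an answer to the version-jump question itself, but whichever path you
take, remember the cluster op-version bump once every node is upgraded; a
hedged sketch (these options only exist on the newer CLI, so run it after
the upgrade):

  gluster volume get all cluster.max-op-version
  gluster volume set all cluster.op-version <value reported above>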

--
Marius



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users