[Gluster-users] Release 3.12: RC0 build is available for testing!

2017-08-09 Thread Shyam Ranganathan

Hi,

3.12 release has been tagged RC0 and the builds are available here [1] 
(signed with [2]).


3.12 comes with a set of new features as listed in the release notes [3].

We welcome any testing feedback on the release.

If you find bugs, please file a bug report for them at [4]. If a bug is
deemed a blocker, add it to the release tracker (or just drop a note
on the bug itself) [5].


Thanks,
Jiffin and Shyam.

[1] Builds available at: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.12.0rc0/


[2] Signing key for the builds: 
https://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.12.0rc0/rsa.pub


[3] Release notes for 3.12: 
https://github.com/gluster/glusterfs/blob/release-3.12/doc/release-notes/3.12.0.md


[4] File a bug on 3.12: 
https://bugzilla.redhat.com/enter_bug.cgi?version=3.12&product=GlusterFS


[5] Mark a bug a blocker for 3.12: 
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.12.0


"Releases are made better together"
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume hacked

2017-08-09 Thread Arman Khalatyan
Check the syslog/iptables logs for IP addresses that accessed the machines
during that time.
In the future you may want to move to centralised logging that is independent
of the VM infrastructure.
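
As a rough sketch of both points (the port and the remote host below are only
examples; 24007 is the glusterd management port, and central-syslog.example.com
stands in for your own log server):

  # log new connections to the gluster management port via iptables
  iptables -I INPUT -p tcp --dport 24007 -m state --state NEW -j LOG --log-prefix "gluster-conn: "
  # forward all syslog messages to a central host, independent of the VMs
  echo '*.* @@central-syslog.example.com:514' > /etc/rsyslog.d/forward.conf
  systemctl restart rsyslog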

On 07.08.2017 at 2:20 PM,  wrote:

> > It really depends on the application whether locks are used. Most (Linux)
> > applications will use advisory locks. This means that locking is only
> > effective when all participating applications use and honour the locks.
> > If one application uses (advisory) locks and another application does
> > not, well, then all bets are off.
> >
> > It is also possible to delete files that are in active use. The contents
> > will still be served by the filesystem, but there is no accessible
> > filename anymore. If the VMs using those files are still running, there
> > might be a way to create a new filename for the data. If the VMs have
> > been stopped, and the file descriptor has been closed, the data will be
> > gone :-/
> >
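
To make the advisory-lock point concrete, a small shell sketch (the file name
is just an example; it is created first so flock has something to open):

  touch /tmp/demo
  # terminal 1: take an advisory lock and hold it
  flock /tmp/demo -c 'sleep 60'
  # terminal 2: a cooperating process that also calls flock blocks or fails
  flock -n /tmp/demo -c 'echo got the lock'   # fails while terminal 1 holds the lock
  # terminal 2: a process that never asks for the lock simply writes anyway
  echo 'ignored the lock' >> /tmp/demo        # succeeds regardless

And a sketch of the "create a new filename for the data" idea, assuming the
process still has the deleted file open (PID and FD below are placeholders):

  ls -l /proc/<PID>/fd | grep '(deleted)'          # find deleted-but-open files
  cp /proc/<PID>/fd/<FD> /var/tmp/recovered.img    # copy the contents to a new name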
>
> Oh, the data was gone long before I stopped the VM: every binary was
> giving I/O errors when accessed, and only whatever was already in RAM
> (ssh, ...) when the disk got deleted was still working.
>
> I'm a bit surprised they could be deleted, but I imagine qemu through
> libgfapi doesn't really access the file as a whole, maybe just the part
> it needs when it needs it. In any case the gluster logs clearly show
> file descriptor errors from 08:47 UTC, which seems to match our first
> monitoring alerts. I assume that's when the deletion happened.
>
> Now I just need to figure out what they used to access the volume, I
> hope it's just NFS since that's the only thing I can think of.
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster performance with VM's

2017-08-09 Thread Alexey Zakurin

Hi community,

Please help me with my problem.

I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 replicated on Node0 brick0
Node0 brick1 replicated on Node1 brick0

Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl1:/mnt/brick1/gm0
Brick2: gl0:/mnt/brick0/gm0
Brick3: gl0:/mnt/brick1/gm0
Brick4: gl1:/mnt/brick0/gm0
Options Reconfigured:
cluster.rebal-throttle: aggressive
performance.cache-refresh-timeout: 4
performance.cache-max-file-size: 10MB
performance.client-io-threads: on
diagnostics.client-log-level: WARNING
diagnostics.brick-log-level: WARNING
performance.write-behind-window-size: 4MB
features.scrub: Active
features.bitrot: on
cluster.readdir-optimize: on
server.event-threads: 16
client.event-threads: 16
cluster.lookup-optimize: on
server.allow-insecure: on
performance.read-ahead: disable
performance.readdir-ahead: off
performance.io-thread-count: 64
performance.cache-size: 2GB
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.disable: on
transport.address-family: inet
cluster.self-heal-daemon: enable
cluster.server-quorum-ratio: 51%
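
For reference, a 2 x 2 distributed-replicate layout with this brick order is
usually created along these lines (a sketch reconstructed from the volume info
above, not necessarily the exact command that was used):

  gluster volume create gm0 replica 2 \
    gl1:/mnt/brick1/gm0 gl0:/mnt/brick0/gm0 \
    gl0:/mnt/brick1/gm0 gl1:/mnt/brick0/gm0

With replica 2 the bricks are paired in the order given, which produces the
cross-replication between gl0 and gl1 described above.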

Each brick is a software RAID10 array of 6 disks.

20 Gbit round-robin bonding between servers and clients, all on one network.

The storage holds VM images.
The VMs run on 3 clients under the Xen hypervisor.

One of the VMs is an FTP server that contains a large number of archives.

Problem:
When I upload a large file (50-60 GB) to the FTP server, the other VMs are
throttled heavily. Sometimes the filesystem on those VMs even gets
automatically remounted read-only.
Network monitoring shows a speed of ~40 MB/sec; disk monitoring shows the
same.


I tried starting the FTP VM on the other server (node 0) - the problem persists.
Mounting from the other Gluster node - the problem still persists.
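
Since diagnostics.latency-measurement and diagnostics.count-fop-hits are
already enabled on gm0, one possible first step (a sketch, run while the slow
upload is being reproduced) is to look at the per-brick FOP profile:

  gluster volume profile gm0 start
  # reproduce the slow upload, then:
  gluster volume profile gm0 info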

Please help me solve this problem.

--
Best regards, Alexey Evgenievich Zakurin.
Telegram: @Zakurin
Tel: +7 968 455 88 48
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster under the hood

2017-08-09 Thread Ilan Schwarts
Hi,

I am using glusterfs 3.10.3 on CentOS 7.3, kernel 3.10.0-514.
I have 2 machines as server nodes for my volume and 1 client machine
running CentOS 7.2 with the same kernel.

From the client:
[root@CentOS7286-64 ~]# rpm -qa *gluster*
glusterfs-api-3.7.9-12.el7.centos.x86_64
glusterfs-libs-3.7.9-12.el7.centos.x86_64
glusterfs-fuse-3.7.9-12.el7.centos.x86_64
glusterfs-client-xlators-3.7.9-12.el7.centos.x86_64
glusterfs-3.7.9-12.el7.centos.x86_64

From the Node1 server:
Status of volume: volume1
Gluster process                                               TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------------------
Brick L137B-GlusterFS-Node1.L137B-root.com:/gluster/volume1   49152     0          Y       28370
Brick L137B-GlusterFS-Node2.L137B-root.com:/gluster/volume1   49152     0          Y       16123
Self-heal Daemon on localhost                                 N/A       N/A        Y       30618
Self-heal Daemon on L137B-GlusterFS-Node2.L137B-root.com      N/A       N/A        Y       17987

Task Status of Volume volume1
-----------------------------------------------------------------------------------------------
There are no active volume tasks



In the documentation, they say to mount the GlusterFS volume using the command:
mount -t glusterfs serverNode:share /local/directory
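
For what it is worth, a quick way to see what kind of mount this produces on
the client (a sketch; the mount point is just an example):

  mount | grep glusterfs      # a FUSE mount typically shows up as "type fuse.glusterfs"
  ps aux | grep glusterfs     # the client-side glusterfs (FUSE) process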

What is going on under the hood when calling this command? What NFS
is being used? Kernel NFS? Ganesha NFS?
Does the option "Volume1.options.nfs.disable: on"
indicate whether gluster is exported via kernel NFS or Ganesha NFS?

When Volume1.options.nfs.disable is off, I *can* use "showmount -e Node1"
from the client machine. When I set Volume1.options.nfs.disable to on,
I can *no longer* use "showmount -e ...".
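
As far as I know, this option controls gluster's own built-in NFS server
(gNFS) rather than kernel NFS or Ganesha. A sketch of toggling and inspecting
it (volume and host names from above):

  gluster volume set volume1 nfs.disable off
  gluster volume status volume1 nfs      # shows the gluster NFS server when enabled
  showmount -e L137B-GlusterFS-Node1.L137B-root.com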

When I mount using the command:
mount -t glusterfs L137B-GlusterFS-Node2.L137B-root.com:/volume1 /mnt/glusterfs

From the client machine, the command is stuck and not responding.
From Node1, the command succeeds.
All machines are on the same domain and I disabled the firewall. What am
I missing?
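
A couple of hedged checks for the hanging mount (the ports are the standard
glusterd/brick ports seen in the status output above, and the log path is the
usual client-side default for /mnt/glusterfs; both may differ on your install):

  nc -zv L137B-GlusterFS-Node2.L137B-root.com 24007   # glusterd management port
  nc -zv L137B-GlusterFS-Node1.L137B-root.com 49152   # brick port from the status output
  tail -f /var/log/glusterfs/mnt-glusterfs.log        # FUSE client log for the mount point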
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users