Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-12 Thread mabi
‐‐‐ Original Message ‐‐‐
On Friday, November 9, 2018 2:11 AM, Ravishankar N  
wrote:

> Please re-create the symlink on node 2 to match how it is in the other
> nodes and launch heal again. Check if this is the case for other entries
> too.
> -Ravi

Please ignore my previous mail: I was looking on node2 for a symlink with the
GFID from node1 or node3, whereas I should of course have been looking with
node2's own GFID. I have now found the symlink on node2 pointing to that
problematic directory and it looks like this:

node2# cd /data/myvol-pro/brick/.glusterfs/d9/ac
node2# ls -la | grep d9ac19
lrwxrwxrwx 1 root root  66 Nov  5 14:12 
d9ac192c-e85e-4402-af10-5551f587ed9a -> 
../../10/ec/10ec1eb1-c854-4ff2-a36c-325681713093/oc_dir

When you say "re-create the symlink", do you mean I should delete the current
symlink on node2 (d9ac192c-e85e-4402-af10-5551f587ed9a) and re-create it with
the GFID that is used on node1 and node3, like this?

node2# cd /data/myvol-pro/brick/.glusterfs/d9/ac
node2# rm d9ac192c-e85e-4402-af10-5551f587ed9a
node2# cd /data/myvol-pro/brick/.glusterfs/25/e2
node2# ln -s ../../10/ec/10ec1eb1-c854-4ff2-a36c-325681713093/oc_dir 
25e2616b-4fb6-4b2a-8945-1afc956fff19

Just want to make sure I understood you correctly before doing that. Could you 
please let me know if this is correct?
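
As a sanity check before deleting anything, I would also compare the
trusted.gfid xattr of the actual directory on each brick - assuming getfattr
is available on the nodes and that the parent directory's GFID (10ec1eb1-...)
is the same everywhere - with something like:

node1# getfattr -n trusted.gfid -e hex /data/myvol-pro/brick/.glusterfs/10/ec/10ec1eb1-c854-4ff2-a36c-325681713093/oc_dir
node3# getfattr -n trusted.gfid -e hex /data/myvol-pro/brick/.glusterfs/10/ec/10ec1eb1-c854-4ff2-a36c-325681713093/oc_dir

and only proceed if both report 0x25e2616b4fb64b2a89451afc956fff19.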

Thanks
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] duplicate performance.cache-size with different values

2018-11-12 Thread Raghavendra Gowdappa
On Mon, Nov 12, 2018 at 9:36 PM Davide Obbi  wrote:

> Hi,
>
> I have noticed that this option appears twice with different values in
> gluster 4.1.5 if you run gluster volume get volname all
>
> performance.cache-size  32MB
> ...
> performance.cache-size  128MB
>

This is a bug in how the options are named: the two xlators io-cache and
quick-read have the same key listed in glusterd-volume-set.c. Can you file a
bug?
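
If you want to see where the duplication comes from, a grep like the following
against a glusterfs source checkout should show both entries, one belonging to
io-cache and one to quick-read (the exact layout of the file may differ, so the
context line is only indicative):

# grep -n -A1 '"performance.cache-size"' xlators/mgmt/glusterd/src/glusterd-volume-set.c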


>
> Is that right?
>
> Regards
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] does your samba work with 4.1.x (centos 7.5)

2018-11-12 Thread Diego Remolina
Hi Anoop,

This is an overview of how to use Central files in Revit:

https://revitpure.com/blog/how-to-use-central-and-local-files-in-revit

Once a central file is created, additional folders are also created in the
location of the file, and these contain many other files.

[root@ysmha02 vfsgluster]# ls -la
total 385588
drwxrws---.  4 dijuremo Staff  4096 Nov 12 19:03 .
drwxr-xr-x. 21 root root   4096 Nov  7 19:51 ..
drwxrws---.  2 dijuremo Staff  4096 Nov 12 19:05 2017-07-06 CAPE CORAL
CJDR_CENTRAL_R2017_backup
-rw-rw.  1 dijuremo Staff 394825728 Jul 23  2017 2017-07-06 CAPE CORAL
CJDR_CENTRAL_R2017.rvt
drwxrws---.  2 dijuremo Staff  4096 Nov 12 19:03 Revit_temp

So I copied the file 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017.rvt to
the network share:
\\ysmserver\vfsgluster

When I attempted to create a central file, it failed giving this error
message:

[image: revit-central-creation-error.JPG]

A simple ls -l of the _backup folder shows there is an existing file there
called incrementtable.2108.dat:

[root@ysmha02 vfsgluster]# ls -l 2017-07-06\ CAPE\ CORAL\
CJDR_CENTRAL_R2017_backup/incrementtable.2108.dat
-rw-rw. 1 dijuremo Staff 2357 Nov 12 19:13 2017-07-06 CAPE CORAL
CJDR_CENTRAL_R2017_backup/incrementtable.2108.dat

However, at this point, things are not OK. The file is *not* a central
file. If I close and re-open the file, Revit will hang, usually going into
the usual Windows "Not Responding" state. This can last for several
minutes; I ended up closing the application via End Task after 5 minutes of
waiting. Rather than double-clicking on the file from the share, I also
tried opening Revit first and then opening the file from Revit using the
Open dialog. This also hangs the program.

On one occasion, I tried to manually delete the folders (long_name_backup
and Revit_temp) from Windows using File Explorer, to try and recreate the
central file again. The delete process then hung on one file,
preview.1957.dat, for almost a minute, but it finally succeeded. This is not
normal behavior.

On the server, I can see this is a rather small file:

[root@ysmha02 2017-07-06 CAPE CORAL CJDR_CENTRAL_R2017_backup]# ls -la
total 11
drwxrws---. 2 dijuremo Staff 4096 Nov 12 19:39 .
drwxrws---. 4 dijuremo Staff 4096 Nov 12 19:03 ..
-rw-rw. 1 dijuremo Staff 2753 Nov 12 19:36 preview.1957.dat

This is the test samba share exported using vfs objects = glusterfs:

[vfsgluster]
   path = /vfsgluster
   browseable = yes
   create mask = 660
   directory mask = 770
   write list = @Staff
   kernel share modes = No
   vfs objects = glusterfs
   glusterfs:loglevel = 7
   glusterfs:logfile = /var/log/samba/glusterfs-vfsgluster.log
   glusterfs:volume = export
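
For comparison, exporting a fuse mount of the same volume works fine; a minimal
share definition for that case would look roughly like this (the mount point
path below is only an example):

[vfsgluster-fuse]
   path = /mnt/export/vfsgluster
   browseable = yes
   create mask = 660
   directory mask = 770
   write list = @Staff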

Full smb.conf
http://termbin.com/y4j0

/var/log/samba/glusterfs-vfsgluster.log
http://termbin.com/5hdr

Please let me know if there is any other information I can provide.

Diego


On Sun, Nov 11, 2018 at 11:41 PM Anoop C S  wrote:

> On Fri, 2018-11-09 at 15:03 -0500, Diego Remolina wrote:
> > Yes it works if you use fuse mount
> >
> > No it does not work well if you use: vfs objects = glusterfs
>
> Would you mind explaining the issue in detail? Apologies if you have
> already raised it here before and could not get to a resolution.
>
> Please attach the output of `testparm -s` and any relevant error messages
> from logs under
> /var/log/samba/ when glusterfs vfs module is being used.
>
> > The samba comes directly from CentOS repository.
> >
> > Gluster comes from SIG 3.10.12
> >
> > # rpm -qa | grep centos-release-gluster
> > centos-release-gluster310-1.0-1.el7.centos.noarch
> >
> > # yum info samba
> > Loaded plugins: fastestmirror, verify
> > Loading mirror speeds from cached hostfile
> > * base: reflector.westga.edu
> > * extras: reflector.westga.edu
> > * updates: reflector.westga.edu
> > Installed Packages
> > Name: samba
> > Arch: x86_64
> > Version : 4.7.1
> > Release : 6.el7
> > Size: 1.9 M
> > Repo: installed
> > From repo   : base
> > Summary : Server and Client software to interoperate with Windows
> machines
> > URL : http://www.samba.org/
> > License : GPLv3+ and LGPLv3+
> > Description : Samba is the standard Windows interoperability suite of
> > programs for Linux and
> >: Unix.
> >
> > # yum info glusterfs-server
> > Loaded plugins: fastestmirror, verify
> > Loading mirror speeds from cached hostfile
> > * base: reflector.westga.edu
> > * extras: reflector.westga.edu
> > * updates: reflector.westga.edu
> > Installed Packages
> > Name: glusterfs-server
> > Arch: x86_64
> > Version : 3.10.12
> > Release : 1.el7
> > Size: 4.3 M
> > Repo: installed
> > From repo   : centos-gluster310
> > Summary : Distributed file-system server
> > URL : http://gluster.readthedocs.io/en/latest/
> > License : GPLv2 or LGPLv3+
> > Description : GlusterFS is a distributed file-system capable of
> > scaling to several
> >   

Re: [Gluster-users] does your samba work with 4.1.x (centos 7.5)

2018-11-12 Thread lejeczek

On 09/11/2018 15:08, Kaleb S. KEITHLEY wrote:

On 11/9/18 8:12 AM, lejeczek wrote:

hi guys

I presume that because 4.1.x has been in the EPEL repo it is confirmed and
validated to work 100% with a default samba installation.

GlusterFS — any version — is _not_ in EPEL.

However it is in the CentOS Storage SIG.

All the glusterfs I see & get comes simply from installing packages, e.g.

centos-release-gluster312-1.0-2.el7.centos.noarch

centos-release-gluster41-1.0-3.el7.centos.noarch

These are simply repo packages, but these repos still point to CentOS'
own mirrors.


To me it seems like all gluster versions are there in CentOS, and so is Samba.

So I do not tamper with packages, nor do I get/use extra third-party repos.

Thus my question is valid and correct - anybody would assume that these
should be tested together, specifically Samba against glusterfs, especially
against 4.1.x.


No?

But I'd prefer to hear you guys say that your samba ACTUALLY works 100% with
4.1.x. Anybody?

Nobody has built Samba in the CentOS Storage SIG with GlusterFS support.
Not for a long time anyway. The last time it was built was over four
years ago — for el6.

It would be great if someone in one of the Gluster, Samba, or CentOS
communities would start building it on a regular basis.

Any volunteers?



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Vlad Kopylov
The good thing about Gluster is that you have files as files. Whatever
happens, good old file access is still there - if you need a backup, or to
rebuild volumes, every replica brick has your files.
In contrast to object "blue..something" storage with separate metadata: if
that metadata gets lost or mixed up, you will be recovering it with a
magnifying glass...

If you go with the monster VM approach, the hypervisor uses gfapi, which is a
little faster than Ceph in all simple tests. In really distributed
environments (multiple buildings or datacenters), Ceph's read performance
will kill the cluster.
Ceph's CPU and memory consumption will also surprise you compared to Gluster.
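
Just to illustrate what "uses gfapi" means in practice: a hypervisor built
with gluster support can talk to the volume directly over a gluster:// URL
instead of going through a fuse mount. A sketch, assuming qemu is compiled
with gluster support (host, volume and image path are placeholders only):

qemu-img create -f qcow2 gluster://server1/datavol/images/vm1.qcow2 64G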

For a local FILE NAS (everything sitting in one room) something like BeeGFS
or LizardFS would be the best option.

v


On Mon, Nov 12, 2018 at 6:51 AM Premysl Kouril 
wrote:

> Hi,
>
> We are planning to build a NAS solution which will be primarily used via NFS
> and CIFS, with workloads ranging from various archival applications to more
> “real-time processing”. The NAS will not be used as block storage for
> virtual machines, so access will always be file oriented.
>
> We are considering primarily two designs and I’d like to kindly ask for
> any thoughts, views, insights, experiences.
>
> Both designs utilize “distributed storage software at some level”. Both
> designs would be built from commodity servers and should scale as we grow.
> Both designs involve virtualization for instantiating "access virtual
> machines" which will be serving the NFS and CIFS protocol - so in this
> sense the access layer is decoupled from the data layer itself.
>
> First design is based on a distributed filesystem like Gluster or CephFS.
> We would deploy this software on those commodity servers and mount the
> resultant filesystem on the “access virtual machines” and they would be
> serving the mounted filesystem via NFS/CIFS.
>
> Second design is based on distributed block storage using CEPH. So we
> would build distributed block storage on those commodity servers, and then,
> via virtualization (like OpenStack Cinder) we would allocate the block
> storage into the access VM. Inside the access VM we would deploy ZFS which
> would aggregate block storage into a single filesystem. And this filesystem
> would be served via NFS/CIFS from the very same VM.
>
> Any advice and insights are highly appreciated
>
> Cheers,
>
> Prema
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] duplicate performance.cache-size with different values

2018-11-12 Thread Davide Obbi
Hi,

I have noticed that this option appears twice with different values in
gluster 4.1.5 if you run gluster volume get volname all

performance.cache-size  32MB
...
performance.cache-size  128MB


Is that right?

Regards
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Alex Crow

On 12/11/18 11:51, Premysl Kouril wrote:


Hi,


We are planning to build a NAS solution which will be primarily used via
NFS and CIFS, with workloads ranging from various archival applications
to more “real-time processing”. The NAS will not be used as block
storage for virtual machines, so access will always be file oriented.



We are considering primarily two designs and I’d like to kindly ask 
for any thoughts, views, insights, experiences.



Both designs utilize “distributed storage software at some level”. 
Both designs would be built from commodity servers and should scale as 
we grow. Both designs involve virtualization for instantiating "access 
virtual machines" which will be serving the NFS and CIFS protocol - so 
in this sense the access layer is decoupled from the data layer itself.



First design is based on a distributed filesystem like Gluster or 
CephFS. We would deploy this software on those commodity servers and 
mount the resultant filesystem on the “access virtual machines” and 
they would be serving the mounted filesystem via NFS/CIFS.



Second design is based on distributed block storage using CEPH. So we 
would build distributed block storage on those commodity servers, and 
then, via virtualization (like OpenStack Cinder) we would allocate the 
block storage into the access VM. Inside the access VM we would deploy 
ZFS which would aggregate block storage into a single filesystem. And 
this filesystem would be served via NFS/CIFS from the very same VM.



Any advice and insights are highly appreciated


Cheers,

Prema




For just NAS, I'd suggest looking at some of the other Distributed File 
System projects such as MooseFS, LizardFS, BeeGFS (open source), 
weka.io, Exablox (proprietary), etc. They are perhaps more suited to a 
general purpose, unstructured NAS use with a mix of file sizes and 
workloads. GlusterFS would work but we found it only gave good enough 
performance on large files (>10MB) and was too slow with directories 
containing more than a thousand or so files.


Cheers

Alex


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Ceph or Gluster for implementing big NAS

2018-11-12 Thread Premysl Kouril
Hi,

We are planning to build a NAS solution which will be primarily used via NFS
and CIFS, with workloads ranging from various archival applications to more
“real-time processing”. The NAS will not be used as block storage for
virtual machines, so access will always be file oriented.

We are considering primarily two designs and I’d like to kindly ask for any
thoughts, views, insights, experiences.

Both designs utilize “distributed storage software at some level”. Both
designs would be built from commodity servers and should scale as we grow.
Both designs involve virtualization for instantiating "access virtual
machines" which will be serving the NFS and CIFS protocol - so in this
sense the access layer is decoupled from the data layer itself.

First design is based on a distributed filesystem like Gluster or CephFS.
We would deploy this software on those commodity servers and mount the
resultant filesystem on the “access virtual machines” and they would be
serving the mounted filesystem via NFS/CIFS.

Second design is based on distributed block storage using CEPH. So we would
build distributed block storage on those commodity servers, and then, via
virtualization (like OpenStack Cinder) we would allocate the block storage
into the access VM. Inside the access VM we would deploy ZFS which would
aggregate block storage into a single filesystem. And this filesystem would
be served via NFS/CIFS from the very same VM.
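
Just to illustrate the idea, inside the access VM we imagine something along
these lines, where device names and dataset names are placeholders only:

accessvm# zpool create naspool /dev/vdb /dev/vdc /dev/vdd
accessvm# zfs create naspool/projects
accessvm# zfs set sharenfs=on naspool/projects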

Any advice and insights are highly appreciated

Cheers,

Prema
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-12 Thread mabi
‐‐‐ Original Message ‐‐‐
On Friday, November 9, 2018 2:11 AM, Ravishankar N  
wrote:

> Please re-create the symlink on node 2 to match how it is in the other
> nodes and launch heal again. Check if this is the case for other entries
> too.
> -Ravi

I can't create the missing symlink on node2 because the target
(../../70/c8/70c894ca-422b-4bce-acf1-5cfb4669abbd/oc_dir) of that link does not
exist. So basically both the symlink and its target are missing on node2.
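
For completeness, a quick check on node2 confirms the target directory really
is absent; the command below (same brick path as in my other mail) fails with
"No such file or directory":

node2# ls -ld /data/myvol-pro/brick/.glusterfs/70/c8/70c894ca-422b-4bce-acf1-5cfb4669abbd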

Or shall I create a symlink to a non-existing target?

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users