[Gluster-users] Problems when deleting files

2017-07-31 Thread Federico Sansone
Hi, I'm having some trouble with freeing space. I have a single-node
cluster with a distributed volume of about 200TB.
It's shared through SMB, and mounted locally via the client.

If I delete from the network, using a Windows machine accessing the share,
the space shows as free, but when I delete from the Linux box, on the
mounted volume, the free space is lost.

Is it something I'm missing about the OS, or might it be a misconfiguration
or something related to Gluster?

Thanks in advance,

Federico
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster volume 3.10.4 hangs

2017-07-31 Thread WK



On 7/31/2017 1:12 AM, Seva Gluschenko wrote:

> Hi folks,
>
> I'm running a simple gluster setup with a single volume replicated at
> two servers, as follows:
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
>
> The problem is, when it happened that one of the replica servers hung, it
> caused the whole glusterfs to hang.


Yes, you lost quorum and the system doesn't want you to get a split-brain.

> Could you please drop me a hint, is it expected behaviour, or are
> there any tweaks and server or volume settings that might be altered
> to change this? Any help would be much appreciated.




Add a third replica node (or just an arbiter node if you aren't that 
ambitious or want to save on the kit).


That way when you lose a node, the cluster will pause for 40 seconds 
or so while it figures things out and then continue on.
When the missing node returns, the self-heal will kick in and you will 
be back to 100%.
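
For reference, converting the existing 1 x 2 volume into a replica 3 arbiter 
setup should just be an add-brick call. An untested sketch, assuming a 
hypothetical third host "arb1" with the same brick path as your other nodes:

  gluster volume add-brick gv0 replica 3 arbiter 1 arb1:/var/glusterfs
  gluster volume info gv0        # "Number of Bricks" should now read 1 x (2 + 1) = 3
  gluster volume heal gv0 info   # watch the arbiter catch up on metadata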


Your other alternative is to turn off quorum. But that risks 
split-brain. Depending upon your data, that may or may not be a serious 
issue.
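
If you do go the no-quorum route, the knobs are the quorum volume options. A 
minimal, untested sketch (volume name gv0 taken from your output above; 
double-check the defaults for your version before relying on it):

  gluster volume set gv0 cluster.quorum-type none          # client-side quorum off
  gluster volume set gv0 cluster.server-quorum-type none   # server-side quorum off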


-wk


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] RECOMMENDED CONFIGURATIONS - DISPERSED VOLUME

2017-07-31 Thread Alastair Neil
Dmitri, the recommendation from Red Hat is likely because it is recommended
to have the number of data stripes be a power of two; otherwise there is a
performance penalty.
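
As I understand it, the penalty comes from stripe alignment: the EC fragment 
size is 512 bytes per data brick, so 8 data bricks give a 4096-byte stripe that 
lines up with common I/O sizes, while 10 data bricks give 5120 bytes and force 
read-modify-write cycles. A rough, untested sketch of a single 8+3 disperse 
subvolume, with hypothetical host and brick names (the full 2PB volume would 
list 36 x 11 bricks so the sets distribute across all 12 servers):

  gluster volume create ecvol disperse-data 8 redundancy 3 \
      server{1..11}:/bricks/b1/data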

On 31 July 2017 at 14:28, Dmitri Chebotarov <4dim...@gmail.com> wrote:

> Hi
>
> I'm looking for advice on configuring a dispersed volume.
> I have 12 servers and would like to use 10:2 ratio.
>
> Yet RH recommends 8:3 or 8:4 in this case:
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/
> Administration_Guide/chap-Recommended-Configuration_Dispersed.html
>
> My goal is to create a 2PB volume, and going with 10:2 vs 8:3/4 saves a few
> bricks. With 10:2 I'll use 312 8TB bricks and with 8:3 it's 396 8TB bricks
> (36 8:3 slices to evenly distribute between all servers/bricks)
>
> As I see it, 8:3/4 vs 10:2 gives more data redundancy (3 servers vs 2
> servers can be offline), but is it critical with 12 nodes? The nodes are new and
> under warranty, so it's unlikely I will lose 3 servers at the same time (at
> which point a 10:2 volume goes offline). Or should I follow the RH recommended
> configuration and use 8:3/4?
>
> Thank you.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] RECOMMENDED CONFIGURATIONS - DISPERSED VOLUME

2017-07-31 Thread Dmitri Chebotarov
Hi

I'm looking for advice on configuring a dispersed volume.
I have 12 servers and would like to use a 10:2 ratio.

Yet RH recommends 8:3 or 8:4 in this case:

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Recommended-Configuration_Dispersed.html

My goal is to create a 2PB volume, and going with 10:2 vs 8:3/4 saves a few
bricks. With 10:2 I'll use 312 8TB bricks and with 8:3 it's 396 8TB bricks
(36 8:3 slices to evenly distribute between all servers/bricks).

As I see it, 8:3/4 vs 10:2 gives more data redundancy (3 servers vs 2
servers can be offline), but is it critical with 12 nodes? The nodes are new and
under warranty, so it's unlikely I will lose 3 servers at the same time (at
which point a 10:2 volume goes offline). Or should I follow the RH recommended
configuration and use 8:3/4?

Thank you.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Summit Deadline Extended - August 15th

2017-07-31 Thread Amye Scavarda
Hi all!
Due to a last minute surge of interest, I'm extending the CfP deadline
for Gluster Summit.
CfP will close August 15th, with speakers to be announced the week of
August 21st.

More time to think of what you want to talk about!

https://www.gluster.org/events/summit2017/ is updated to reflect this as well.
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster volume 3.10.4 hangs

2017-07-31 Thread Dmitri Chebotarov
Hi

With only two nodes it's recommended to set
cluster.server-quorum-type=server and cluster.server-quorum-ratio=51% (i.e.
more than 50%).
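
Roughly, and untested (the ratio is a cluster-wide option, so it is set on 
"all"; the volume name gv0 is taken from the output below):

  gluster volume set all cluster.server-quorum-ratio 51%
  gluster volume set gv0 cluster.server-quorum-type server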

On Mon, Jul 31, 2017 at 4:12 AM, Seva Gluschenko  wrote:

> Hi folks,
>
>
> I'm running a simple gluster setup with a single volume replicated at two
> servers, as follows:
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: sst0:/var/glusterfs
> Brick2: sst2:/var/glusterfs
> Options Reconfigured:
> cluster.self-heal-daemon: enable
> performance.readdir-ahead: on
> nfs.disable: on
> transport.address-family: inet
>
> This volume is used to store data in high-load production, and recently I
> faced two major problems that made the whole idea of using gluster quite
> questionable, so I would like to ask gluster developers and/or call for
> community wisdom in the hope that I might be missing something. The problem is,
> when it happened that one of the replica servers hung, it caused the whole
> glusterfs to hang. Could you please drop me a hint, is it expected
> behaviour, or are there any tweaks and server or volume settings that might
> be altered to change this? Any help would be much appreciated.
>
>
> --
> Best Regards,
>
> Seva Gluschenko
> CTO @ http://webkontrol.ru
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Elastic Hashing Algorithm

2017-07-31 Thread Moustafa Hammouda
Good Morning!

 

I’m writing my thesis about distributed storage systems and I came across 
GlusterFS.

 

I need more information/details about the Elastic Hashing Algorithm, which is used 
by GlusterFS. Unfortunately, I couldn’t find any details about it.

 

Please help me with a reference to this Algorithm so I can study it. 

 

--

Best Regards

Moustafa Hammouda

System Engineer

Qatar-MOI

Telecom Department

Tel: +974-66935441

E-mail: mhamo...@qstrs.com

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Dmitri Chebotarov
Hi

At this point I have already detached the hot tier volume to run a rebalance. Many
volume settings only take effect for new data (or after a rebalance), so I
thought maybe this was the case with the hot tier as well. Once the rebalance
finishes, I'll re-attach the hot tier.

cluster.write-freq-threshold and cluster.read-freq-threshold control the number
of times data is read/written before it is moved to the hot tier. In my case both
are set to '2', and I didn't think I needed to disable
performance.io-cache/quick-read as well. Will give it a try.
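
For the record, here is roughly what I plan to run once the hot tier is back 
(untested, so please correct me if the option names are off):

  gluster volume set home performance.quick-read off
  gluster volume set home performance.io-cache off
  gluster volume get home performance.quick-read   # both should now report 'off'
  gluster volume get home performance.io-cache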

Here is the volume info (no hot tier at this time)

~]# gluster v info home

Volume Name: home
Type: Disperse
Volume ID: 4583a3cf-4deb-4707-bd0d-e7defcb1c39b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: MMR01:/rhgs/b0/data
Brick2: MMR02:/rhgs/b0/data
Brick3: MMR03:/rhgs/b0/data
Brick4: MMR04:/rhgs/b0/data
Brick5: MMR05:/rhgs/b0/data
Brick6: MMR06:/rhgs/b0/data
Brick7: MMR07:/rhgs/b0/data
Brick8: MMR08:/rhgs/b0/data
Brick9: MMR09:/rhgs/b0/data
Brick10: MMR10:/rhgs/b0/data
Brick11: MMR11:/rhgs/b0/data
Brick12: MMR12:/rhgs/b0/data
Options Reconfigured:
diagnostics.client-log-level: CRITICAL
cluster.write-freq-threshold: 2
cluster.read-freq-threshold: 2
features.record-counters: on
nfs.disable: on
performance.readdir-ahead: enable
transport.address-family: inet
client.event-threads: 4
server.event-threads: 4
cluster.lookup-optimize: on
cluster.readdir-optimize: on
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
cluster.data-self-heal-algorithm: full
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 5
performance.write-behind-window-size: 1MB
performance.client-io-threads: on
performance.read-ahead: disable
performance.cache-size: 256MB
performance.io-thread-count: 16
performance.strict-o-direct: on
network.ping-timeout: 30
network.remote-dio: disable
user.cifs: off
features.quota: on
features.inode-quota: on
features.quota-deem-statfs: on

~]# gluster v get home  performance.io-cache
performance.io-cache                    on

~]# gluster v get home  performance.quick-read
performance.quick-read  on

Thank you.

On Mon, Jul 31, 2017 at 5:16 AM, Hari Gowtham  wrote:

> Hi,
>
> Before you try turning off the perf translators can you send us the
> following,
> So we will make sure that the other things haven't gone wrong.
>
> can you send us the log files for tier (would be better if you attach
> other logs too),
> the version of gluster you are using, the client, and the output for:
> gluster v info
> gluster v get v1 performance.io-cache
> gluster v get v1 performance.quick-read
>
> Do send us this and then we will let you know what should be done,
> as reads should also cause promotion
>
>
> On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham  wrote:
> > For the tier daemon to migrate the files for read, few performance
> > translators have to be turned off.
> > By default the performance quick-read and io-cache are turned on. You
> > can turn them off so that
> > the files will be migrated for read.
> >
> > On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham 
> wrote:
> >> Hi,
> >>
> >> If it was just reads then the tier daemon won't migrate the files to
> hot tier.
> >> If you create a file or write to a file that file will be made
> >> available on the hot tier.
> >>
> >>
> >> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
> >>  wrote:
> >>> Milind and Hari,
> >>>
> >>> Can you please take a look at this?
> >>>
> >>> Thanks,
> >>> Nithya
> >>>
> >>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dim...@gmail.com> wrote:
> 
>  Hi
> 
>  I'm looking for advice on the hot tier feature - how can I tell if the
>  hot tier is working?
> 
>  I've attached a replicated-distributed hot tier to an EC volume.
>  Yet, I don't think it's working; at least I don't see any files directly
>  on the bricks (only folder structure). The 'Status' command has all 0s
>  and 'In progress' for all servers.
> 
>  ~]# gluster volume tier home status
>  Node         Promoted files    Demoted files    Status
>  ---------    --------------    -------------    -----------
>  localhost    0                 0                in progress
>  MMR11        0                 0                in progress
>  MMR08        0                 0                in progress
>  MMR03        0                 0                in progress
>  MMR02        0                 0                in progress
>  MMR07        0                 0                in progress
>  MMR06        0                 0

Re: [Gluster-users] gluster-heketi-kubernetes

2017-07-31 Thread Jose A. Rivera
Try adding the following lines to the deploy-heketi-deployment.yaml
and heketi-deployment.yaml files around line 53 (keeping indentation
with other similar lines):

  - name: HEKETI_KUBE_NAMESPACE
value: default

Replace "default" with the namespace you're using for these pods.

Are you using gk-deploy or running the deployment manually? If the
former, and your DaemonSet is running successfully, rerun it without
the -g flag.

--Jose

On Mon, Jul 31, 2017 at 6:20 AM, Raghavendra Talur  wrote:
> Adding more people to the thread. I am currently not able to analyze the logs.
>
> On Thu, Jul 27, 2017 at 5:58 AM, Bishoy Mikhael  wrote:
>> Hi Talur,
>>
>> I've successfully got Gluster deployed as a DaemonSet using k8s spec file
>> glusterfs-daemonset.json from
>> https://github.com/heketi/heketi/tree/master/extras/kubernetes
>>
>> but then when I try deploying heketi using heketi-deployment.json spec file,
>> I end up with a CrashLoopBackOff pod.
>>
>>
>> # kubectl get pods
>>
>> NAMEREADY STATUS RESTARTS   AGE
>>
>> deploy-heketi-930916695-tq4ks   0/1   CrashLoopBackOff   11 35m
>>
>>
>>
>> kubectl describe gives me the following error:
>>
>>
>> Warning FailedSync Error syncing pod, skipping: failed to "StartContainer"
>> for "deploy-heketi" with CrashLoopBackOff: "Back-off 20s restarting failed
>> container=deploy-heketi
>> pod=deploy-heketi-930916695-qpngx_default(f97e7bdc-6dd5-11e7-b8b5-3cfdfea552a8)"
>>
>>
>>
>> and the pod logs are as follows:
>>
>>
>> # kubectl logs deploy-heketi-930916695-qpngx
>>
>> Heketi v4.0.0-8-g9372c22-release-4
>>
>> [kubeexec] ERROR 2017/07/21 05:37:11
>> /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:125: Namespace
>> must be provided in configuration: File
>> /var/run/secrets/kubernetes.io/serviceaccount/namespace not found
>>
>> [heketi] ERROR 2017/07/21 05:37:11
>> /src/github.com/heketi/heketi/apps/glusterfs/app.go:85: Namespace must be
>> provided in configuration: File
>> /var/run/secrets/kubernetes.io/serviceaccount/namespace not found
>>
>> ERROR: Unable to start application
>>
>>
>>
>> so what am I missing here?
>>
>>
>> On Mon, Jul 24, 2017 at 6:14 AM, Vijay Bellur  wrote:
>>>
>>> Hi Bishoy,
>>>
>>> Adding Talur who can help address your queries on Heketi.
>>>
>>> @wattsteve's github  repo on glusterfs-kubernetes is a bit dated. You can
>>> either refer to gluster/gluster-kubernetes or heketi/heketi for current
>>> documentation and operational procedures.
>>>
>>> Regards,
>>> Vijay
>>>
>>>
>>>
>>> On Fri, Jul 21, 2017 at 2:19 AM, Bishoy Mikhael 
>>> wrote:

 Hi,

 I'm trying to deploy Gluster and Heketi on a Kubernetes cluster
 I'm following the guide at https://github.com/gluster/gluster-kubernetes/
 but the video referenced in the page is showing json files used while the
 git repo has only yaml files, they are quite similar though, but Gluster is
 a deployment not a DaemonSet.

 I deploy Gluster DaemonSet successfully, but heketi is giving me the
 following error:

 # kubectl logs deploy-heketi-930916695-np4hb

 Heketi v4.0.0-8-g9372c22-release-4

 [kubeexec] ERROR 2017/07/21 06:08:52
 /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:125: Namespace
 must be provided in configuration: File
 /var/run/secrets/kubernetes.io/serviceaccount/namespace not found

 [heketi] ERROR 2017/07/21 06:08:52
 /src/github.com/heketi/heketi/apps/glusterfs/app.go:85: Namespace must be
 provided in configuration: File
 /var/run/secrets/kubernetes.io/serviceaccount/namespace not found

 ERROR: Unable to start application


 What am I doing wrong here?!

 I found more than one source for documentation about how to use Gluster
 as a persistent storage for kubernetes, some of them are:
 https://github.com/heketi/heketi/wiki/Kubernetes-Integration
 https://github.com/wattsteve/glusterfs-kubernetes

 Which one to follow?!

 Also I've created a topology file as noted by one of the documentation,
 but I don't know how to provide it to heketi.

 Regards,
 Bishoy Mikhael

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster-heketi-kubernetes

2017-07-31 Thread Raghavendra Talur
Adding more people to the thread. I am currently not able to analyze the logs.

On Thu, Jul 27, 2017 at 5:58 AM, Bishoy Mikhael  wrote:
> Hi Talur,
>
> I've successfully got Gluster deployed as a DaemonSet using k8s spec file
> glusterfs-daemonset.json from
> https://github.com/heketi/heketi/tree/master/extras/kubernetes
>
> but then when I try deploying heketi using heketi-deployment.json spec file,
> I end up with a CrashLoopBackOff pod.
>
>
> # kubectl get pods
>
> NAMEREADY STATUS RESTARTS   AGE
>
> deploy-heketi-930916695-tq4ks   0/1   CrashLoopBackOff   11 35m
>
>
>
> kubectl describe gives me the following error:
>
>
> Warning FailedSync Error syncing pod, skipping: failed to "StartContainer"
> for "deploy-heketi" with CrashLoopBackOff: "Back-off 20s restarting failed
> container=deploy-heketi
> pod=deploy-heketi-930916695-qpngx_default(f97e7bdc-6dd5-11e7-b8b5-3cfdfea552a8)"
>
>
>
> and the pod logs are as follows:
>
>
> # kubectl logs deploy-heketi-930916695-qpngx
>
> Heketi v4.0.0-8-g9372c22-release-4
>
> [kubeexec] ERROR 2017/07/21 05:37:11
> /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:125: Namespace
> must be provided in configuration: File
> /var/run/secrets/kubernetes.io/serviceaccount/namespace not found
>
> [heketi] ERROR 2017/07/21 05:37:11
> /src/github.com/heketi/heketi/apps/glusterfs/app.go:85: Namespace must be
> provided in configuration: File
> /var/run/secrets/kubernetes.io/serviceaccount/namespace not found
>
> ERROR: Unable to start application
>
>
>
> so what am I missing here?
>
>
> On Mon, Jul 24, 2017 at 6:14 AM, Vijay Bellur  wrote:
>>
>> Hi Bishoy,
>>
>> Adding Talur who can help address your queries on Heketi.
>>
>> @wattsteve's github  repo on glusterfs-kubernetes is a bit dated. You can
>> either refer to gluster/gluster-kubernetes or heketi/heketi for current
>> documentation and operational procedures.
>>
>> Regards,
>> Vijay
>>
>>
>>
>> On Fri, Jul 21, 2017 at 2:19 AM, Bishoy Mikhael 
>> wrote:
>>>
>>> Hi,
>>>
>>> I'm trying to deploy Gluster and Heketi on a Kubernetes cluster
>>> I'm following the guide at https://github.com/gluster/gluster-kubernetes/
>>> but the video referenced in the page is showing json files used while the
>>> git repo has only yaml files, they are quite similar though, but Gluster is
>>> a deployment not a DaemonSet.
>>>
>>> I deploy Gluster DaemonSet successfully, but heketi is giving me the
>>> following error:
>>>
>>> # kubectl logs deploy-heketi-930916695-np4hb
>>>
>>> Heketi v4.0.0-8-g9372c22-release-4
>>>
>>> [kubeexec] ERROR 2017/07/21 06:08:52
>>> /src/github.com/heketi/heketi/executors/kubeexec/kubeexec.go:125: Namespace
>>> must be provided in configuration: File
>>> /var/run/secrets/kubernetes.io/serviceaccount/namespace not found
>>>
>>> [heketi] ERROR 2017/07/21 06:08:52
>>> /src/github.com/heketi/heketi/apps/glusterfs/app.go:85: Namespace must be
>>> provided in configuration: File
>>> /var/run/secrets/kubernetes.io/serviceaccount/namespace not found
>>>
>>> ERROR: Unable to start application
>>>
>>>
>>> What am I doing wrong here?!
>>>
>>> I found more than one source for documentation about how to use Gluster
>>> as a persistent storage for kubernetes, some of them are:
>>> https://github.com/heketi/heketi/wiki/Kubernetes-Integration
>>> https://github.com/wattsteve/glusterfs-kubernetes
>>>
>>> Which one to follow?!
>>>
>>> Also I've created a topology file as noted by one of the documentation,
>>> but I don't know how to provide it to heketi.
>>>
>>> Regards,
>>> Bishoy Mikhael
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] how far should one go upping gluster version before it can harm samba?

2017-07-31 Thread Anoop C S
On Fri, 2017-07-28 at 16:05 +0100, lejeczek wrote:
> 
> On 27/07/17 14:13, lejeczek wrote:
> > ... or in other words - can samba break (on Centos 7.3) if 
> > one goes with gluster version too high?
> > 
> > hi fellas.
> > 
> > I wonder because I see:
> > 
> > smbd[4088153]:   Unknown gluster ACL version: -847736808
> > smbd[4088153]: [2017/07/27 13:12:54.047332,  0] 
> > ../source3/modules/vfs_glusterfs.c:1365(gluster_to_smb_acl)
> > smbd[4088153]:   Unknown gluster ACL version: 0
> > smbd[4088153]: [2017/07/27 13:12:54.162658,  0] 
> > ../source3/modules/vfs_glusterfs.c:1365(gluster_to_smb_acl)
> > smbd[4088153]:   Unknown gluster ACL version: 1219557840
> > 
> > and probably more. I use 3.8.x. Although Samba seems to 
> > work ok.

What do you mean by 'Samba seems to work ok'? Were you able to figure out
when these entries were logged? Any specific operation?

> > thanks.
> > L

What is the Samba version being used? Is it possible for you to share your 
smb.conf with us?

> anybody(ideally from the devel gang)?
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread Ravishankar N



On 07/31/2017 02:33 PM, mabi wrote:
Now I understand what you mean by the "-samefile" parameter of 
"find". As requested I have now run the following command on all 3 
nodes, with the output of all 3 nodes below:


sudo find /data/myvolume/brick -samefile 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 
-ls


node1:
84046830 lrwxrwxrwx   1 root root   66 Jul 27 15:43 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 
-> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE


node2:
83946380 lrwxrwxrwx   1 root root   66 Jul 27 15:43 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 
-> ../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE




arbiternode:
find: 
'/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397': 
No such file or directory



Right, so the file OC_DEFAULT_MODULE is missing in this brick. Its 
parent directory has gfid fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810.
The goal is to do a stat of this file from the fuse mount. If you know the 
complete path to this file, good. Otherwise you can use this script [1] 
to find the path to the parent dir corresponding to the gfid 
fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810, like so:
`./gfid-to-dirname.sh /data/myvolume/brick 
fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810`


[1] 
https://github.com/gluster/glusterfs/blob/master/extras/gfid-to-dirname.sh


Try to stat the file from a new (temporary) fuse mount to avoid any 
caching effects.
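
Something along these lines (untested; substitute your own server, volume name 
and the parent directory path you got from the script):

  mkdir -p /mnt/healcheck
  mount -t glusterfs node1.domain.tld:/myvolume /mnt/healcheck
  stat /mnt/healcheck/<parent-dir-path>/OC_DEFAULT_MODULE
  umount /mnt/healcheck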

-Ravi



Hope that helps.


 Original Message 
Subject: Re: [Gluster-users] Possible stale 
.glusterfs/indices/xattrop file?

Local Time: July 31, 2017 10:55 AM
UTC Time: July 31, 2017 8:55 AM
From: ravishan...@redhat.com
To: mabi 
Gluster Users 




On 07/31/2017 02:00 PM, mabi wrote:

To quickly summarize my current situation:

on node2 I have found the following xattrop/indices file, which 
matches the GFID from the "heal info" command (below is the output 
of "ls -lai"):


2798404 -- 2 root root 0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397




As you can see this file has inode number 2798404, so I ran the 
following command on all my nodes (node1, node2 and arbiternode):



...which is what I was saying is incorrect. 2798404 is an XFS inode 
number and is not common to the same file across nodes. So you will 
get different results. Use the -samefile flag I shared earlier.

-Ravi




sudo find /data/myvolume/brick -inum 2798404 -ls

Here below are the results for all 3 nodes:

node1:

2798404   19 -rw-r--r--   2 www-data www-data   32 Jun 19 17:42 
/data/myvolume/brick/.glusterfs/e6/5b/e65b77e2-a4c4-4824-a7bb-58df969ce4b0
2798404   19 -rw-r--r--   2 www-data www-data   32 Jun 19 17:42 
/data/myvolume/brick//fileKey


node2:

27984041 --   2 root root0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
27984041 --   2 root root0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/xattrop-6fa49ad5-71dd-4ec2-9246-7b302ab92d38


arbiternode:

NOTHING

As you requested I have tried to run on node1 a getfattr on the 
fileKey file by using the following command:


getfattr -m . -d -e hex fileKey

but there is no output. I am not familiar with the getfattr command 
so maybe I am using the wrong parameters, could you help me with that?




 Original Message 
Subject: Re: [Gluster-users] Possible stale 
.glusterfs/indices/xattrop file?

Local Time: July 31, 2017 9:25 AM
UTC Time: July 31, 2017 7:25 AM
From: ravishan...@redhat.com
To: mabi 
Gluster Users 

On 07/31/2017 12:20 PM, mabi wrote:

I did a find on this inode number and I could find the file but 
only on node1 (nothing on node2 and the new arbiternode). Here is 
an ls -lai of the file itself on node1:
Sorry I don't understand, isn't that (XFS) inode number specific to 
node2's brick? If you want to use the same command, maybe you 
should try `find /data/myvolume/brick -samefile 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397` 
on all 3 bricks.




-rw-r--r-- 1 www-data www-data   32 Jun 19 17:42 fileKey

As you can see it is a 32 bytes file and as you suggested I ran a 
"stat" on this very same file through a glusterfs mount (using 
fuse) but unfortunately nothing happened. The GFID is still being 
displayed to be healed.  Just in case here is the output of the stat:


  File: ‘fileKey’
  Size: 32Blocks: 1  IO Block: 131072 regular file
Device: 1eh/30d Inode: 12086351742306673840  Links: 1
Access: (0644/-rw-r--r--)  Uid: (   33/www-data) Gid: (   33/www-data)
Access: 2017-06-19 17:42:35.339773495 +0200
Modify: 2017-06-19 17:42:35.343773437 +0200
Change: 2017-06-19 

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
Hi,

Before you try turning off the perf translators, can you send us the following,
so we can make sure that the other things haven't gone wrong.

Can you send us the log files for tier (it would be better if you attach
other logs too),
the version of gluster you are using, the client, and the output of:
gluster v info
gluster v get v1 performance.io-cache
gluster v get v1 performance.quick-read

Do send us this and then we will let you know what should be done,
as reads should also cause promotion.


On Mon, Jul 31, 2017 at 2:21 PM, Hari Gowtham  wrote:
> For the tier daemon to migrate the files for read, few performance
> translators have to be turned off.
> By default the performance quick-read and io-cache are turned on. You
> can turn them off so that
> the files will be migrated for read.
>
> On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham  wrote:
>> Hi,
>>
>> If it was just reads then the tier daemon won't migrate the files to hot 
>> tier.
>> If you create a file or write to a file that file will be made
>> available on the hot tier.
>>
>>
>> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
>>  wrote:
>>> Milind and Hari,
>>>
>>> Can you please take a look at this?
>>>
>>> Thanks,
>>> Nithya
>>>
>>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dim...@gmail.com> wrote:

 Hi

 I'm looking for advice on the hot tier feature - how can I tell if the hot
 tier is working?

 I've attached replicated-distributed hot tier to an EC volume.
 Yet, I don't think it's working, at least I don't see any files directly
 on the bricks (only folder structure). 'Status' command has all 0s and 'In
 progress' for all servers.

 ~]# gluster volume tier home status
 Node         Promoted files    Demoted files    Status
 ---------    --------------    -------------    -----------
 localhost    0                 0                in progress
 MMR11        0                 0                in progress
 MMR08        0                 0                in progress
 MMR03        0                 0                in progress
 MMR02        0                 0                in progress
 MMR07        0                 0                in progress
 MMR06        0                 0                in progress
 MMR09        0                 0                in progress
 MMR12        0                 0                in progress
 MMR10        0                 0                in progress
 MMR05        0                 0                in progress
 MMR04        0                 0                in progress
 Tiering Migration Functionality: home: success


 I have a folder with .yml files (Ansible) on the gluster volume, which as
 I understand is 'cache friendly'.
 No matter how many times I read files, nothing is moved to the hot tier
 bricks.

 Thank you.

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>> Regards,
>> Hari Gowtham.
>
>
>
> --
> Regards,
> Hari Gowtham.



-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread mabi
Now I understand what you mean by the "-samefile" parameter of "find". As 
requested I have now run the following command on all 3 nodes, with the output of 
all 3 nodes below:
sudo find /data/myvolume/brick -samefile 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -ls
node1:
8404683 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -> 
../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE
node2:
8394638 0 lrwxrwxrwx 1 root root 66 Jul 27 15:43 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397 -> 
../../fe/c0/fec0e4f4-38d2-4e2e-b5db-fdc0b9b54810/OC_DEFAULT_MODULE
arbiternode:
find: 
'/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397': 
No such file or directory
Hope that helps.

>  Original Message 
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
> Local Time: July 31, 2017 10:55 AM
> UTC Time: July 31, 2017 8:55 AM
> From: ravishan...@redhat.com
> To: mabi 
> Gluster Users 
>
> On 07/31/2017 02:00 PM, mabi wrote:
>
>> To quickly summarize my current situation:
>> on node2 I have found the following xattrop/indices file, which matches 
>> the GFID from the "heal info" command (below is the output of "ls -lai"):
>> 2798404 -- 2 root root 0 Apr 28 22:51 
>> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
>> As you can see this file has inode number 2798404, so I ran the following 
>> command on all my nodes (node1, node2 and arbiternode):
>
> ...which is what I was saying is incorrect. 2798404 is an XFS inode number 
> and is not common to the same file across nodes. So you will get different 
> results. Use the -samefile flag I shared earlier.
> -Ravi
>
>> sudo find /data/myvolume/brick -inum 2798404 -ls
>> Here below are the results for all 3 nodes:
>> node1:
>> 2798404 19 -rw-r--r-- 2 www-data www-data 32 Jun 19 17:42 
>> /data/myvolume/brick/.glusterfs/e6/5b/e65b77e2-a4c4-4824-a7bb-58df969ce4b0
>> 2798404 19 -rw-r--r-- 2 www-data www-data 32 Jun 19 17:42 
>> /data/myvolume/brick//fileKey
>> node2:
>> 2798404 1 -- 2 root root 0 Apr 28 22:51 
>> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
>> 2798404 1 -- 2 root root 0 Apr 28 22:51 
>> /data/myvolume/brick/.glusterfs/indices/xattrop/xattrop-6fa49ad5-71dd-4ec2-9246-7b302ab92d38
>> arbiternode:
>> NOTHING
>> As you requested I have tried to run on node1 a getfattr on the fileKey file 
>> by using the following command:
>> getfattr -m . -d -e hex fileKey
>> but there is no output. I am not familiar with the getfattr command so maybe 
>> I am using the wrong parameters, could you help me with that?
>>
>>>  Original Message 
>>> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
>>> Local Time: July 31, 2017 9:25 AM
>>> UTC Time: July 31, 2017 7:25 AM
>>> From: ravishan...@redhat.com
>>> To: mabi [](mailto:m...@protonmail.ch)
>>> Gluster Users 
>>> [](mailto:gluster-users@gluster.org)
>>> On 07/31/2017 12:20 PM, mabi wrote:
>>>
 I did a find on this inode number and I could find the file but only on 
 node1 (nothing on node2 and the new arbiternode). Here is an ls -lai of 
 the file itself on node1:
>>>
>>> Sorry I don't understand, isn't that (XFS) inode number specific to node2's 
>>> brick? If you want to use the same command, maybe you should try `find 
>>> /data/myvolume/brick -samefile 
>>> /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397` 
>>> on all 3 bricks.
>>>
 -rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey
 As you can see it is a 32 bytes file and as you suggested I ran a "stat" 
 on this very same file through a glusterfs mount (using fuse) but 
 unfortunately nothing happened. The GFID is still being displayed to be 
 healed. Just in case here is the output of the stat:
 File: ‘fileKey’
 Size: 32 Blocks: 1 IO Block: 131072 regular file
 Device: 1eh/30d Inode: 12086351742306673840 Links: 1
 Access: (0644/-rw-r--r--) Uid: ( 33/www-data) Gid: ( 33/www-data)
 Access: 2017-06-19 17:42:35.339773495 +0200
 Modify: 2017-06-19 17:42:35.343773437 +0200
 Change: 2017-06-19 17:42:35.343773437 +0200
 Birth: -
>>>
>>> Is this 'fileKey' on node1 having the same gfid (see getfattr output)? 
>>> Looks like it is missing the hardlink inside .glusterfs folder since the 
>>> link count is only 1.
>>> Thanks,
>>> Ravi
>>>
 What else can I do or try in order to fix this situation?

>  Original Message 
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop 
> file?
> Local Time: July 31, 2017 3:27 AM
> UTC Time: July 31, 2017 1:27 AM
> From: ravishan...@redhat.com
> 

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread Ravishankar N



On 07/31/2017 02:00 PM, mabi wrote:

To quickly summarize my current situation:

on node2 I have found the following xattrop/indices file, which 
matches the GFID from the "heal info" command (below is the output of 
"ls -lai"):


2798404 -- 2 root root 0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397




As you can see this file has inode number 2798404, so I ran the 
following command on all my nodes (node1, node2 and arbiternode):



...which is what I was saying is incorrect. 2798404 is an XFS inode 
number and is not common to the same file across nodes. So you will get 
different results. Use the -samefile flag I shared earlier.

-Ravi



sudo find /data/myvolume/brick -inum 2798404 -ls

Here below are the results for all 3 nodes:

node1:

2798404   19 -rw-r--r--   2 www-data www-data   32 Jun 19 17:42 
/data/myvolume/brick/.glusterfs/e6/5b/e65b77e2-a4c4-4824-a7bb-58df969ce4b0
2798404   19 -rw-r--r--   2 www-data www-data   32 Jun 19 17:42 
/data/myvolume/brick//fileKey


node2:

27984041 --   2 root root0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
27984041 --   2 root root0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/xattrop-6fa49ad5-71dd-4ec2-9246-7b302ab92d38


arbiternode:

NOTHING

As you requested I have tried to run on node1 a getfattr on the 
fileKey file by using the following command:


getfattr -m . -d -e hex fileKey

but there is no output. I am not familiar with the getfattr command so 
maybe I am using the wrong parameters, could you help me with that?




 Original Message 
Subject: Re: [Gluster-users] Possible stale 
.glusterfs/indices/xattrop file?

Local Time: July 31, 2017 9:25 AM
UTC Time: July 31, 2017 7:25 AM
From: ravishan...@redhat.com
To: mabi 
Gluster Users 

On 07/31/2017 12:20 PM, mabi wrote:

I did a find on this inode number and I could find the file but only 
on node1 (nothing on node2 and the new arbiternode). Here is an ls 
-lai of the file itself on node1:
Sorry I don't understand, isn't that (XFS) inode number specific to 
node2's brick? If you want to use the same command, maybe you should 
try `find /data/myvolume/brick -samefile 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397` 
on all 3 bricks.




-rw-r--r-- 1 www-data www-data   32 Jun 19 17:42 fileKey

As you can see it is a 32 bytes file and as you suggested I ran a 
"stat" on this very same file through a glusterfs mount (using fuse) 
but unfortunately nothing happened. The GFID is still being 
displayed to be healed.  Just in case here is the output of the stat:


  File: ‘fileKey’
  Size: 32Blocks: 1  IO Block: 131072 regular file
Device: 1eh/30d Inode: 12086351742306673840  Links: 1
Access: (0644/-rw-r--r--)  Uid: (   33/www-data)   Gid: (   33/www-data)
Access: 2017-06-19 17:42:35.339773495 +0200
Modify: 2017-06-19 17:42:35.343773437 +0200
Change: 2017-06-19 17:42:35.343773437 +0200
Birth: -

Is this 'fileKey' on node1 having the same gfid (see getfattr 
output)? Looks like it is missing the hardlink inside .glusterfs 
folder since the link count is only 1.

Thanks,
Ravi


What else can I do or try in order to fix this situation?





 Original Message 
Subject: Re: [Gluster-users] Possible stale 
.glusterfs/indices/xattrop file?

Local Time: July 31, 2017 3:27 AM
UTC Time: July 31, 2017 1:27 AM
From: ravishan...@redhat.com
To: mabi 
Gluster Users 




On 07/30/2017 02:24 PM, mabi wrote:

Hi Ravi,

Thanks for your hints. Below you will find the answer to your 
questions.


First I tried to start the healing process by running:

gluster volume heal myvolume

and then as you suggested watch the output of the glustershd.log 
file but nothing appeared in that log file after running the above 
command. I checked the files which need to be healing using the 
"heal  info" command and it still shows that very same 
GFID on node2 to be healed. So nothing changed here.


The file 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 
is only on node2 and not on my node1 nor on my arbiternode. This 
file seems to be a regular file and not a symlink. Here is the 
output of the stat command on it from my node2:


  File: 
‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’

  Size: 0 Blocks: 1  IO Block: 512 regular empty file
Device: 25h/37d Inode: 2798404 Links: 2


Okay, link count of 2 means there is a hardlink somewhere on the 
brick. Try the find command again. I see that the inode number is 
2798404, not the one you shared in your first mail. Once you find 
the path to the file, do a stat of the file from mount. This should 
create the entry in the other 2 

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
For the tier daemon to migrate files on reads, a few performance
translators have to be turned off.
By default the performance quick-read and io-cache translators are turned on. You
can turn them off so that
the files will be migrated on reads.

On Mon, Jul 31, 2017 at 11:34 AM, Hari Gowtham  wrote:
> Hi,
>
> If it was just reads then the tier daemon won't migrate the files to hot tier.
> If you create a file or write to a file that file will be made
> available on the hot tier.
>
>
> On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
>  wrote:
>> Milind and Hari,
>>
>> Can you please take a look at this?
>>
>> Thanks,
>> Nithya
>>
>> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dim...@gmail.com> wrote:
>>>
>>> Hi
>>>
>>> I'm looking for advice on the hot tier feature - how can I tell if the hot
>>> tier is working?
>>>
>>> I've attached replicated-distributed hot tier to an EC volume.
>>> Yet, I don't think it's working, at least I don't see any files directly
>>> on the bricks (only folder structure). 'Status' command has all 0s and 'In
>>> progress' for all servers.
>>>
>>> ~]# gluster volume tier home status
>>> Node         Promoted files    Demoted files    Status
>>> ---------    --------------    -------------    -----------
>>> localhost    0                 0                in progress
>>> MMR11        0                 0                in progress
>>> MMR08        0                 0                in progress
>>> MMR03        0                 0                in progress
>>> MMR02        0                 0                in progress
>>> MMR07        0                 0                in progress
>>> MMR06        0                 0                in progress
>>> MMR09        0                 0                in progress
>>> MMR12        0                 0                in progress
>>> MMR10        0                 0                in progress
>>> MMR05        0                 0                in progress
>>> MMR04        0                 0                in progress
>>> Tiering Migration Functionality: home: success
>>>
>>>
>>> I have a folder with .yml files (Ansible) on the gluster volume, which as
>>> I understand is 'cache friendly'.
>>> No matter how many times I read files, nothing is moved to the hot tier
>>> bricks.
>>>
>>> Thank you.
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Regards,
> Hari Gowtham.



-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread mabi
To quickly summarize my current situation:
on node2 I have found the following xattrop/indices file, which matches the 
GFID from the "heal info" command (below is the output of "ls -lai"):
2798404 -- 2 root root 0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
As you can see this file has inode number 2798404, so I ran the following 
command on all my nodes (node1, node2 and arbiternode):
sudo find /data/myvolume/brick -inum 2798404 -ls
Here below are the results for all 3 nodes:
node1:
2798404 19 -rw-r--r-- 2 www-data www-data 32 Jun 19 17:42 
/data/myvolume/brick/.glusterfs/e6/5b/e65b77e2-a4c4-4824-a7bb-58df969ce4b0
2798404 19 -rw-r--r-- 2 www-data www-data 32 Jun 19 17:42 
/data/myvolume/brick//fileKey
node2:
2798404 1 -- 2 root root 0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
2798404 1 -- 2 root root 0 Apr 28 22:51 
/data/myvolume/brick/.glusterfs/indices/xattrop/xattrop-6fa49ad5-71dd-4ec2-9246-7b302ab92d38
arbiternode:
NOTHING
As you requested I have tried to run on node1 a getfattr on the fileKey file by 
using the following command:
getfattr -m . -d -e hex fileKey
but there is no output. I am not familiar with the getfattr command so maybe I 
am using the wrong parameters, could you help me with that?

>  Original Message 
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
> Local Time: July 31, 2017 9:25 AM
> UTC Time: July 31, 2017 7:25 AM
> From: ravishan...@redhat.com
> To: mabi 
> Gluster Users 
> On 07/31/2017 12:20 PM, mabi wrote:
>
>> I did a find on this inode number and I could find the file but only on 
>> node1 (nothing on node2 and the new arbiternode). Here is an ls -lai of the 
>> file itself on node1:
>
> Sorry I don't understand, isn't that (XFS) inode number specific to node2's 
> brick? If you want to use the same command, maybe you should try `find 
> /data/myvolume/brick -samefile 
> /data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397` 
> on all 3 bricks.
>
>> -rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey
>> As you can see it is a 32 bytes file and as you suggested I ran a "stat" on 
>> this very same file through a glusterfs mount (using fuse) but unfortunately 
>> nothing happened. The GFID is still being displayed to be healed. Just in 
>> case here is the output of the stat:
>> File: ‘fileKey’
>> Size: 32 Blocks: 1 IO Block: 131072 regular file
>> Device: 1eh/30d Inode: 12086351742306673840 Links: 1
>> Access: (0644/-rw-r--r--) Uid: ( 33/www-data) Gid: ( 33/www-data)
>> Access: 2017-06-19 17:42:35.339773495 +0200
>> Modify: 2017-06-19 17:42:35.343773437 +0200
>> Change: 2017-06-19 17:42:35.343773437 +0200
>> Birth: -
>
> Is this 'fileKey' on node1 having the same gfid (see getfattr output)? Looks 
> like it is missing the hardlink inside .glusterfs folder since the link count 
> is only 1.
> Thanks,
> Ravi
>
>> What else can I do or try in order to fix this situation?
>>
>>>  Original Message 
>>> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
>>> Local Time: July 31, 2017 3:27 AM
>>> UTC Time: July 31, 2017 1:27 AM
>>> From: ravishan...@redhat.com
>>> To: mabi [](mailto:m...@protonmail.ch)
>>> Gluster Users 
>>> [](mailto:gluster-users@gluster.org)
>>>
>>> On 07/30/2017 02:24 PM, mabi wrote:
>>>
 Hi Ravi,
 Thanks for your hints. Below you will find the answer to your questions.
 First I tried to start the healing process by running:
 gluster volume heal myvolume
 and then as you suggested watch the output of the glustershd.log file but 
 nothing appeared in that log file after running the above command. I 
 checked the files which need to be healing using the "heal  info" 
 command and it still shows that very same GFID on node2 to be healed. So 
 nothing changed here.
 The file 
 /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
  is only on node2 and not on my node1 nor on my arbiternode. This file 
 seems to be a regular file and not a symlink. Here is the output of the 
 stat command on it from my node2:
 File: 
 ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
 Size: 0 Blocks: 1 IO Block: 512 regular empty file
 Device: 25h/37d Inode: 2798404 Links: 2
>>>
>>> Okay, link count of 2 means there is a hardlink somewhere on the brick. Try 
>>> the find command again. I see that the inode number is 2798404, not the one 
>>> you shared in your first mail. Once you find the path to the file, do a 
>>> stat of the file from mount. This should create the entry in the other 2 
>>> bricks and do the heal. But FWIW, this seems to be a zero byte file.
>>> Regards,
>>> Ravi
>>>
 Access: 

[Gluster-users] gluster volume 3.10.4 hangs

2017-07-31 Thread Seva Gluschenko
Hi folks,
I'm running a simple gluster setup with a single volume replicated at two 
servers, as follows:

Volume Name: gv0
Type: Replicate
Volume ID: dd4996c0-04e6-4f9b-a04e-73279c4f112b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: sst0:/var/glusterfs
Brick2: sst2:/var/glusterfs
Options Reconfigured:
cluster.self-heal-daemon: enable
performance.readdir-ahead: on
nfs.disable: on
transport.address-family: inet

This volume is used to store data in high-load production, and recently I faced 
two major problems that made the whole idea of using gluster quite 
questionable, so I would like to ask gluster developers and/or call for 
community wisdom in the hope that I might be missing something. The problem is, 
when it happened that one of the replica servers hung, it caused the whole 
glusterfs to hang. Could you please drop me a hint, is it expected behaviour, 
or are there any tweaks and server or volume settings that might be altered to 
change this? Any help would be much appreciated.
--
Best Regards,

Seva Gluschenko
CTO @ http://webkontrol.ru (http://webkontrol.ru/)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread Ravishankar N

On 07/31/2017 12:20 PM, mabi wrote:
I did a find on this inode number and I could find the file but only 
on node1 (nothing on node2 and the new arbiternode). Here is an ls 
-lai of the file itself on node1:
Sorry I don't understand, isn't that (XFS) inode number specific to 
node2's brick? If you want to use the same command, maybe you should try 
`find /data/myvolume/brick -samefile 
/data/myvolume/brick/.glusterfs/29/e0/29e0d13e-1217-41cc-9bda-1fbbf781c397` 
on all 3 bricks.


-rw-r--r-- 1 www-data www-data   32 Jun 19 17:42 fileKey

As you can see it is a 32 bytes file and as you suggested I ran a 
"stat" on this very same file through a glusterfs mount (using fuse) 
but unfortunately nothing happened. The GFID is still being displayed 
to be healed.  Just in case here is the output of the stat:


  File: ‘fileKey’
  Size: 32Blocks: 1  IO Block: 131072 regular file
Device: 1eh/30d Inode: 12086351742306673840  Links: 1
Access: (0644/-rw-r--r--)  Uid: (   33/www-data)   Gid: ( 33/www-data)
Access: 2017-06-19 17:42:35.339773495 +0200
Modify: 2017-06-19 17:42:35.343773437 +0200
Change: 2017-06-19 17:42:35.343773437 +0200
Birth: -

Is this 'fileKey' on node1 having the same gfid (see getfattr output)? 
Looks like it is missing the hardlink inside .glusterfs folder since the 
link count is only 1.

Thanks,
Ravi

What else can I do or try in order to fix this situation?





 Original Message 
Subject: Re: [Gluster-users] Possible stale 
.glusterfs/indices/xattrop file?

Local Time: July 31, 2017 3:27 AM
UTC Time: July 31, 2017 1:27 AM
From: ravishan...@redhat.com
To: mabi 
Gluster Users 




On 07/30/2017 02:24 PM, mabi wrote:

Hi Ravi,

Thanks for your hints. Below you will find the answer to your questions.

First I tried to start the healing process by running:

gluster volume heal myvolume

and then as you suggested watch the output of the glustershd.log 
file but nothing appeared in that log file after running the above 
command. I checked the files which need to be healing using the 
"heal  info" command and it still shows that very same GFID 
on node2 to be healed. So nothing changed here.


The file 
/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397 
is only on node2 and not on my node1 nor on my arbiternode. This file 
seems to be a regular file and not a symlink. Here is the output of 
the stat command on it from my node2:


  File: 
‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’

  Size: 0 Blocks: 1  IO Block: 512 regular empty file
Device: 25h/37d Inode: 2798404 Links: 2


Okay, link count of 2 means there is a hardlink somewhere on the 
brick. Try the find command again. I see that the inode number is 
2798404, not the one you shared in your first mail. Once you find the 
path to the file, do a stat of the file from mount. This should 
create the entry in the other 2 bricks and do the heal. But FWIW, 
this seems to be a zero byte file.


Regards,
Ravi


Access: (/--)  Uid: (0/root)   Gid: (0/root)
Access: 2017-04-28 22:51:15.215775269 +0200
Modify: 2017-04-28 22:51:15.215775269 +0200
Change: 2017-07-30 08:39:03.700872312 +0200
Birth: -

I hope this is enough info for a starter, else let me know if you 
need any more info. I would be glad to resolve this weird file which 
needs to be healed but can not.


Best regards,
Mabi




 Original Message 
Subject: Re: [Gluster-users] Possible stale 
.glusterfs/indices/xattrop file?

Local Time: July 30, 2017 3:31 AM
UTC Time: July 30, 2017 1:31 AM
From: ravishan...@redhat.com
To: mabi , Gluster Users 






On 07/29/2017 04:36 PM, mabi wrote:

Hi,

Sorry for mailing again but as mentioned in my previous mail, I 
have added an arbiter node to my replica 2 volume and it seems to 
have gone fine except for the fact that there is one single file 
which needs healing and does not get healed as you can see here 
from the output of a "heal info":


Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries: 0

Brick node2.domain.tld:/data/myvolume/brick

Status: Connected
Number of entries: 1

Brick arbiternode.domain.tld:/srv/glusterfs/myvolume/brick
Status: Connected
Number of entries: 0

On my node2 the respective .glusterfs/indices/xattrop directory 
contains two files as you can see below:


ls -lai /data/myvolume/brick/.glusterfs/indices/xattrop
total 76180
 10 drw--- 2 root root 4 Jul 29 12:15 .
  9 drw--- 5 root root 5 Apr 28 22:15 ..
2798404 -- 2 root root 0 Apr 28 22:51 
29e0d13e-1217-41cc-9bda-1fbbf781c397
2798404 -- 2 root root 0 Apr 28 22:51 
xattrop-6fa49ad5-71dd-4ec2-9246-7b302ab92d38




I tried to find the real file on my brick where this xattrop file 
points to using its inode number (command: find 
/data/myvolume/brick/data -inum 

Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?

2017-07-31 Thread mabi
I did a find on this inode number and I could find the file but only on node1 
(nothing on node2 and the new arbiternode). Here is an ls -lai of the file 
itself on node1:
-rw-r--r-- 1 www-data www-data 32 Jun 19 17:42 fileKey
As you can see it is a 32 bytes file and as you suggested I ran a "stat" on 
this very same file through a glusterfs mount (using fuse) but unfortunately 
nothing happened. The GFID is still being displayed to be healed. Just in case 
here is the output of the stat:
File: ‘fileKey’
Size: 32 Blocks: 1 IO Block: 131072 regular file
Device: 1eh/30d Inode: 12086351742306673840 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 33/www-data) Gid: ( 33/www-data)
Access: 2017-06-19 17:42:35.339773495 +0200
Modify: 2017-06-19 17:42:35.343773437 +0200
Change: 2017-06-19 17:42:35.343773437 +0200
Birth: -
What else can I do or try in order to fix this situation?

>  Original Message 
> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
> Local Time: July 31, 2017 3:27 AM
> UTC Time: July 31, 2017 1:27 AM
> From: ravishan...@redhat.com
> To: mabi 
> Gluster Users 
>
> On 07/30/2017 02:24 PM, mabi wrote:
>
>> Hi Ravi,
>> Thanks for your hints. Below you will find the answer to your questions.
>> First I tried to start the healing process by running:
>> gluster volume heal myvolume
>> and then as you suggested watch the output of the glustershd.log file but 
>> nothing appeared in that log file after running the above command. I checked 
>> the files which need to be healing using the "heal  info" command 
>> and it still shows that very same GFID on node2 to be healed. So nothing 
>> changed here.
>> The file 
>> /data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397
>> is only on node2 and not on my node1 nor on my arbiternode. This file seems 
>> to be a regular file and not a symlink. Here is the output of the stat 
>> command on it from my node2:
>> File: 
>> ‘/data/myvolume/brick/.glusterfs/indices/xattrop/29e0d13e-1217-41cc-9bda-1fbbf781c397’
>> Size: 0 Blocks: 1 IO Block: 512 regular empty file
>> Device: 25h/37d Inode: 2798404 Links: 2
>
> Okay, link count of 2 means there is a hardlink somewhere on the brick. Try 
> the find command again. I see that the inode number is 2798404, not the one 
> you shared in your first mail. Once you find the path to the file, do a stat 
> of the file from mount. This should create the entry in the other 2 bricks 
> and do the heal. But FWIW, this seems to be a zero byte file.
> Regards,
> Ravi
>
>> Access: (/--) Uid: ( 0/ root) Gid: ( 0/ root)
>> Access: 2017-04-28 22:51:15.215775269 +0200
>> Modify: 2017-04-28 22:51:15.215775269 +0200
>> Change: 2017-07-30 08:39:03.700872312 +0200
>> Birth: -
>> I hope this is enough info for a starter, else let me know if you need any 
>> more info. I would be glad to resolve this weird file which needs to be 
>> healed but can not.
>> Best regards,
>> Mabi
>>
>>>  Original Message 
>>> Subject: Re: [Gluster-users] Possible stale .glusterfs/indices/xattrop file?
>>> Local Time: July 30, 2017 3:31 AM
>>> UTC Time: July 30, 2017 1:31 AM
>>> From: ravishan...@redhat.com
>>> To: mabi [](mailto:m...@protonmail.ch), Gluster Users 
>>> [](mailto:gluster-users@gluster.org)
>>>
>>> On 07/29/2017 04:36 PM, mabi wrote:
>>>
 Hi,
 Sorry for mailing again but as mentioned in my previous mail, I have added 
 an arbiter node to my replica 2 volume and it seems to have gone fine 
 except for the fact that there is one single file which needs healing and 
 does not get healed as you can see here from the output of a "heal info":
 Brick node1.domain.tld:/data/myvolume/brick
 Status: Connected
 Number of entries: 0
 Brick node2.domain.tld:/data/myvolume/brick
 
 Status: Connected
 Number of entries: 1
 Brick arbiternode.domain.tld:/srv/glusterfs/myvolume/brick
 Status: Connected
 Number of entries: 0
 On my node2 the respective .glusterfs/indices/xattrop directory contains 
 two files as you can see below:
 ls -lai /data/myvolume/brick/.glusterfs/indices/xattrop
 total 76180
 10 drw--- 2 root root 4 Jul 29 12:15 .
 9 drw--- 5 root root 5 Apr 28 22:15 ..
 2798404 -- 2 root root 0 Apr 28 22:51 
 29e0d13e-1217-41cc-9bda-1fbbf781c397
 2798404 -- 2 root root 0 Apr 28 22:51 
 xattrop-6fa49ad5-71dd-4ec2-9246-7b302ab92d38
 I tried to find the real file on my brick where this xattrop file points 
 to using its inode number (command: find /data/myvolume/brick/data -inum 
 8394642) but it does not find any associated file.
 So my question here is, is it possible that this is a stale file which 
 just forgot to get deleted from the indices/xattrop file by gluster for 
 some unknown reason? If yes is it safe for me to delete 

Re: [Gluster-users] Hot Tier

2017-07-31 Thread Hari Gowtham
Hi,

If it was just reads, then the tier daemon won't migrate the files to the hot tier.
If you create a file or write to a file, that file will be made
available on the hot tier.
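
A quick way to verify (untested; /mnt/home and the hot brick path below are 
placeholders for your actual mount point and hot tier brick):

  echo test > /mnt/home/tier-probe.txt        # a create/write should be placed on the hot tier
  ls -l <hot-tier-brick-path>/tier-probe.txt  # the new file should appear on a hot brick, not on the cold EC bricks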


On Mon, Jul 31, 2017 at 11:06 AM, Nithya Balachandran
 wrote:
> Milind and Hari,
>
> Can you please take a look at this?
>
> Thanks,
> Nithya
>
> On 31 July 2017 at 05:12, Dmitri Chebotarov <4dim...@gmail.com> wrote:
>>
>> Hi
>>
>> I'm looking for advice on the hot tier feature - how can I tell if the hot
>> tier is working?
>>
>> I've attached replicated-distributed hot tier to an EC volume.
>> Yet, I don't think it's working, at least I don't see any files directly
>> on the bricks (only folder structure). 'Status' command has all 0s and 'In
>> progress' for all servers.
>>
>> ~]# gluster volume tier home status
>> Node         Promoted files    Demoted files    Status
>> ---------    --------------    -------------    -----------
>> localhost    0                 0                in progress
>> MMR11        0                 0                in progress
>> MMR08        0                 0                in progress
>> MMR03        0                 0                in progress
>> MMR02        0                 0                in progress
>> MMR07        0                 0                in progress
>> MMR06        0                 0                in progress
>> MMR09        0                 0                in progress
>> MMR12        0                 0                in progress
>> MMR10        0                 0                in progress
>> MMR05        0                 0                in progress
>> MMR04        0                 0                in progress
>> Tiering Migration Functionality: home: success
>>
>>
>> I have a folder with .yml files (Ansible) on the gluster volume, which as
>> I understand is 'cache friendly'.
>> No matter how many times I read files, nothing is moved to the hot tier
>> bricks.
>>
>> Thank you.
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users