Re: [Gluster-users] not support so called “structured data”

2020-04-01 Thread Strahil Nikolov
On April 2, 2020 5:24:39 AM GMT+03:00, "sz_cui...@163.com"  
wrote:
>The documentation points out:
>Gluster does not support so called “structured data”, meaning live, SQL
>databases. Of course, using Gluster to backup and restore the database
>would be fine.
>
>What? Not supported?!
>I ran a test with an Oracle database on KVM/oVirt/Gluster, and in fact it
>works well.
>
>But why do the docs say it is not supported? Does that mean it is not
>recommended, or that it must not be used?
>
>
>
>
>
>sz_cui...@163.com

I don't know why this is written, but when I checked this doc:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/console_installation_guide/add_database_server_to_rhgs-c

it seems like a perfectly legitimate workload (no matter whether Postgres,
MySQL, MariaDB, Oracle, HANA, etc.).

The only thing that comes to my mind is that DBs are usually quite valuable, and
thus a 'replica 3' volume or a 'replica 3 arbiter 1' volume should be used, and
a different set of options is needed (compared to other workloads).
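
Roughly along these lines - the volume name, hosts and brick paths below are
just placeholders, and the option values are only an illustration of the kind
of tuning meant here, so check the documentation for your version first:

  # replica 3 with an arbiter, so the DB survives a node failure
  gluster volume create dbvol replica 3 arbiter 1 \
      srv1:/bricks/db/brick srv2:/bricks/db/brick srv3:/bricks/db/brick
  gluster volume start dbvol

  # typical latency-sensitive tuning: honor O_DIRECT and switch off the
  # client-side caches that databases do not want in the way
  gluster volume set dbvol performance.strict-o-direct on
  gluster volume set dbvol network.remote-dio disable
  gluster volume set dbvol performance.quick-read off
  gluster volume set dbvol performance.stat-prefetch off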

Best Regards,
Strahil Nikolov




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] not support so called “structured data”

2020-04-01 Thread sz_cui...@163.com
The documentation points out:
Gluster does not support so called “structured data”, meaning live, SQL 
databases. Of course, using Gluster to backup and restore the database would be 
fine.

What? Not supported?!
I ran a test with an Oracle database on KVM/oVirt/Gluster, and in fact it works well.

But why do the docs say it is not supported? Does that mean it is not recommended, or that it must not be used?





sz_cui...@163.com






[Gluster-users] Re: Re: Can't mount NFS, please help!

2020-04-01 Thread sz_cui...@163.com
Ok,I see.

Your answer is very clear!

Thanks!



sz_cui...@163.com
 
From: Erik Jacobson
Sent: 2020-04-02 09:29
To: sz_cui...@163.com
Cc: Strahil Nikolov; Erik Jacobson; gluster-users
Subject: Re: Re: [Gluster-users] Can't mount NFS, please help!
> Thanks everyone!
> 
> You mean that Ganesha is the newer solution for the NFS server function than gNFS,
> and in new versions gNFS is not the suggested component,
> but if I want to use an NFS server, I should install and configure Ganesha
> separately, is that right?
 
I would phrase it this way:
- The community is moving to Ganesha to provide NFS services. Ganesha
  supports several storage solutions, including Gluster.

- Therefore, distros and packages tend to disable the gNFS support in
  Gluster, since they assume people are moving to Ganesha. It would
  otherwise be a competing solution for NFS.

- Some people still prefer gNFS and do not want to use Ganesha yet, and
  those people need to rebuild their packages in some cases, as was
  outlined in the thread. This then provides the necessary libraries and
  config files to run gNFS.

- gNFS still works well if you build it, as far as I have found.

- For my use, Ganesha crashes with my "not normal" workload, so I can't
  switch to it yet. I worked with the community some but ran out of
  system time and had to drop the thread. I would like to revisit it so
  that I can run Ganesha too some day. My workload is very far from
  typical.
 
Erik
 
 
> 
> 
> 
> ━━━
> sz_cui...@163.com
> 
>  
> From: Strahil Nikolov
> Date: 2020-04-02 00:58
> To: Erik Jacobson; sz_cui...@163.com
> CC: gluster-users
> Subject: Re: [Gluster-users] Can't mount NFS, please help!
> On April 1, 2020 3:37:35 PM GMT+03:00, Erik Jacobson
>  wrote:
> >If you are like me and cannot yet switch to Ganesha (it doesn't work in
> >our workload yet; I need to get back to working with the community on
> >that...)
> >
> >What I would have expected in the process list was a glusterfs process
> >with
> >"nfs" in the name.
> >
> >here it is from one of my systems:
> >
> >root 57927 1  0 Mar31 ?00:00:00 /usr/sbin/glusterfs -s
> >localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l
> >/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket
> >
> >
> >My guess - but you'd have to confirm this with the logs - is your
> >gluster
> >build does not have gnfs built in. Since they wish us to move to
> >Ganesha, it is often off by default. For my own builds, I enable it in
> >the spec file.
> >
> >So you should have this installed:
> >
> >/usr/lib64/glusterfs/7.2/xlator/nfs/server.so
> >
> >If that isn't there, you likely need to adjust your spec file and
> >rebuild.
> >
> >As others mentioned, the suggestion is to use Ganesha if possible,
> >which is a separate project.
> >
> >I hope this helps!
> >
> >PS here is a sniip from the spec file I use, with an erikj comment for
> >what I adjusted:
> >
> ># gnfs
> ># if you wish to compile an rpm with the legacy gNFS server xlator
> ># rpmbuild -ta @PACKAGE_NAME@-@package_vers...@.tar.gz --with gnfs
> >%{?_without_gnfs:%global _with_gnfs --disable-gnfs}
> >
> ># erikj force enable
> >%global _with_gnfs --enable-gnfs
> ># end erikj
> >
> >
> >On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cui...@163.com wrote:
> >> 1.The gluster server has set volume option nfs.disable to: off
> >>
> >> Volume Name: gv0
> >> Type: Disperse
> >> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
> >> Status: Started
> >> Snapshot Count: 0
> >> Number of Bricks: 1 x (2 + 1) = 3
> >> Transport-type: tcp
> >> Bricks:
> >> Brick1: gfs1:/brick1/gv0
> >> Brick2: gfs2:/brick1/gv0
> >> Brick3: gfs3:/brick1/gv0
> >> Options Reconfigured:
> >> transport.address-family: inet
> >> storage.fips-mode-rchecksum: on
> >> nfs.disable: off
> >>
> >> 2. The process has start.
> >>
> >> [root@gfs1 ~]# ps -ef | grep glustershd
> >> root   1117  1  0 10:12 ?00:00:00 /usr/sbin/glusterfs
> >-s
> >> localhost --volfile-id shd/gv0 -p
> >/var/run/gluster/shd/gv0/gv0-shd.pid -l /var/
> >> log/glusterfs/glustershd.log -S
> >/var/run/gluster/ca97b99a29c04606.socket
> >> --xlator-option
> >*replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208
> >> --process-name glustershd --client-pid=-6
> >>
> >>
> >> 3.But the status of gv0 is not correct,for it's status of NFS Server
> >is not
> >> online.
> >>
> >> [root@gfs1 ~]# gluster volume status gv0
> >> Status of volume: gv0
> >> Gluster process TCP Port  RDMA Port
> >Online  Pid
> >>
> 

Re: [Gluster-users] Re: Re: Can't mount NFS, please help!

2020-04-01 Thread Erik Jacobson
> Thanks everyone!
> 
> You mean that Ganesha is the newer solution for the NFS server function than gNFS,
> and in new versions gNFS is not the suggested component,
> but if I want to use an NFS server, I should install and configure Ganesha
> separately, is that right?

I would phrase it this way:
- The community is moving to Ganesha to provide NFS services. Ganesha
  supports several storage solutions, including Gluster.

- Therefore, distros and packages tend to disable the gNFS support in
  Gluster, since they assume people are moving to Ganesha. It would
  otherwise be a competing solution for NFS.

- Some people still prefer gNFS and do not want to use Ganesha yet, and
  those people need to rebuild their packages in some cases, as was
  outlined in the thread. This then provides the necessary libraries and
  config files to run gNFS.

- gNFS still works well if you build it, as far as I have found.

- For my use, Ganesha crashes with my "not normal" workload, so I can't
  switch to it yet. I worked with the community some but ran out of
  system time and had to drop the thread. I would like to revisit it so
  that I can run Ganesha too some day. My workload is very far from
  typical.

Erik


> 
> 
> 
> ━━━
> sz_cui...@163.com
> 
>  
> From: Strahil Nikolov
> Date: 2020-04-02 00:58
> To: Erik Jacobson; sz_cui...@163.com
> CC: gluster-users
> Subject: Re: [Gluster-users] Can't mount NFS, please help!
> On April 1, 2020 3:37:35 PM GMT+03:00, Erik Jacobson
>  wrote:
> >If you are like me and cannot yet switch to Ganesha (it doesn't work in
> >our workload yet; I need to get back to working with the community on
> >that...)
> >
> >What I would have expected in the process list was a glusterfs process
> >with
> >"nfs" in the name.
> >
> >here it is from one of my systems:
> >
> >root 57927 1  0 Mar31 ?00:00:00 /usr/sbin/glusterfs -s
> >localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l
> >/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket
> >
> >
> >My guess - but you'd have to confirm this with the logs - is your
> >gluster
> >build does not have gnfs built in. Since they wish us to move to
> >Ganesha, it is often off by default. For my own builds, I enable it in
> >the spec file.
> >
> >So you should have this installed:
> >
> >/usr/lib64/glusterfs/7.2/xlator/nfs/server.so
> >
> >If that isn't there, you likely need to adjust your spec file and
> >rebuild.
> >
> >As others mentioned, the suggestion is to use Ganesha if possible,
> >which is a separate project.
> >
> >I hope this helps!
> >
> >PS here is a sniip from the spec file I use, with an erikj comment for
> >what I adjusted:
> >
> ># gnfs
> ># if you wish to compile an rpm with the legacy gNFS server xlator
> ># rpmbuild -ta @PACKAGE_NAME@-@package_vers...@.tar.gz --with gnfs
> >%{?_without_gnfs:%global _with_gnfs --disable-gnfs}
> >
> ># erikj force enable
> >%global _with_gnfs --enable-gnfs
> ># end erikj
> >
> >
> >On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cui...@163.com wrote:
> >> 1.The gluster server has set volume option nfs.disable to: off
> >>
> >> Volume Name: gv0
> >> Type: Disperse
> >> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
> >> Status: Started
> >> Snapshot Count: 0
> >> Number of Bricks: 1 x (2 + 1) = 3
> >> Transport-type: tcp
> >> Bricks:
> >> Brick1: gfs1:/brick1/gv0
> >> Brick2: gfs2:/brick1/gv0
> >> Brick3: gfs3:/brick1/gv0
> >> Options Reconfigured:
> >> transport.address-family: inet
> >> storage.fips-mode-rchecksum: on
> >> nfs.disable: off
> >>
> >> 2. The process has start.
> >>
> >> [root@gfs1 ~]# ps -ef | grep glustershd
> >> root   1117  1  0 10:12 ?00:00:00 /usr/sbin/glusterfs
> >-s
> >> localhost --volfile-id shd/gv0 -p
> >/var/run/gluster/shd/gv0/gv0-shd.pid -l /var/
> >> log/glusterfs/glustershd.log -S
> >/var/run/gluster/ca97b99a29c04606.socket
> >> --xlator-option
> >*replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208
> >> --process-name glustershd --client-pid=-6
> >>
> >>
> >> 3.But the status of gv0 is not correct,for it's status of NFS Server
> >is not
> >> online.
> >>
> >> [root@gfs1 ~]# gluster volume status gv0
> >> Status of volume: gv0
> >> Gluster process TCP Port  RDMA Port
> >Online  Pid
> >>
> >
> 
> --
> >> Brick gfs1:/brick1/gv0  49154 0  Y  
> >   4180
> >> Brick gfs2:/brick1/gv0  49154 0

[Gluster-users] Re: Re: Can't mount NFS, please help!

2020-04-01 Thread sz_cui...@163.com
Thanks everyone!

You mean that Ganesha is the newer solution for the NFS server function than gNFS,
and in new versions gNFS is not the suggested component,
but if I want to use an NFS server, I should install and configure Ganesha
separately, is that right?





sz_cui...@163.com
 
From: Strahil Nikolov
Date: 2020-04-02 00:58
To: Erik Jacobson; sz_cui...@163.com
CC: gluster-users
Subject: Re: [Gluster-users] Can't mount NFS, please help!
On April 1, 2020 3:37:35 PM GMT+03:00, Erik Jacobson  
wrote:
>If you are like me and cannot yet switch to Ganesha (it doesn't work in
>our workload yet; I need to get back to working with the community on
>that...)
>
>What I would have expected in the process list was a glusterfs process
>with
>"nfs" in the name.
>
>here it is from one of my systems:
>
>root 57927 1  0 Mar31 ?00:00:00 /usr/sbin/glusterfs -s
>localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l
>/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket
>
>
>My guess - but you'd have to confirm this with the logs - is your
>gluster
>build does not have gnfs built in. Since they wish us to move to
>Ganesha, it is often off by default. For my own builds, I enable it in
>the spec file.
>
>So you should have this installed:
>
>/usr/lib64/glusterfs/7.2/xlator/nfs/server.so
>
>If that isn't there, you likely need to adjust your spec file and
>rebuild.
>
>As others mentioned, the suggestion is to use Ganesha if possible,
>which is a separate project.
>
>I hope this helps!
>
>PS here is a sniip from the spec file I use, with an erikj comment for
>what I adjusted:
>
># gnfs
># if you wish to compile an rpm with the legacy gNFS server xlator
># rpmbuild -ta @PACKAGE_NAME@-@package_vers...@.tar.gz --with gnfs
>%{?_without_gnfs:%global _with_gnfs --disable-gnfs}
>
># erikj force enable
>%global _with_gnfs --enable-gnfs
># end erikj
>
>
>On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cui...@163.com wrote:
>> 1.The gluster server has set volume option nfs.disable to: off
>> 
>> Volume Name: gv0
>> Type: Disperse
>> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfs1:/brick1/gv0
>> Brick2: gfs2:/brick1/gv0
>> Brick3: gfs3:/brick1/gv0
>> Options Reconfigured:
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: off
>> 
>> 2. The process has start.
>> 
>> [root@gfs1 ~]# ps -ef | grep glustershd
>> root   1117  1  0 10:12 ?00:00:00 /usr/sbin/glusterfs
>-s
>> localhost --volfile-id shd/gv0 -p
>/var/run/gluster/shd/gv0/gv0-shd.pid -l /var/
>> log/glusterfs/glustershd.log -S
>/var/run/gluster/ca97b99a29c04606.socket
>> --xlator-option
>*replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208
>> --process-name glustershd --client-pid=-6
>> 
>> 
>> 3.But the status of gv0 is not correct,for it's status of NFS Server
>is not
>> online.
>> 
>> [root@gfs1 ~]# gluster volume status gv0
>> Status of volume: gv0
>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gfs1:/brick1/gv0                     49154     0          Y       4180
>> Brick gfs2:/brick1/gv0                     49154     0          Y       1222
>> Brick gfs3:/brick1/gv0                     49154     0          Y       1216
>> Self-heal Daemon on localhost              N/A       N/A        Y       1117
>> NFS Server on localhost                    N/A       N/A        N       N/A
>> Self-heal Daemon on gfs2                   N/A       N/A        Y       1138
>> NFS Server on gfs2                         N/A       N/A        N       N/A
>> Self-heal Daemon on gfs3                   N/A       N/A        Y       1131
>> NFS Server on gfs3                         N/A       N/A        N       N/A
>> 
>> Task Status of Volume gv0
>>
>--
>> There are no active volume tasks
>> 
>> 4.So, I cann't mount the gv0 on my client.
>> 
>> [root@kvms1 ~]# mount -t nfs  gfs1:/gv0 /mnt/test
>> mount.nfs: Connection refused
>> 
>> 
>> Please Help!
>> Thanks!
>> 
>> 
>> 
>> 
>> 
>>
>━━━
>> sz_cui...@163.com
>
>> 
>> 
>> 
>> 
>> Community Meeting Calendar:
>> 
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968 
>> 
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users 
>
>
>
>Erik Jacobson
>Software Engineer
>
>erik.jacob...@hpe.com
>+1 612 851 0550 Office
>
>Eagan, MN
>hpe.com
>
>
>
>
>Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>Bridge: https://bluejeans.com/441850968
>
>Gluster-users mailing list

[Gluster-users] GlusterFS geo-replication progress question

2020-04-01 Thread Alexander Iliev

Hi all,

I have a running geo-replication session between two clusters and I'm 
trying to figure out what is the current progress of the replication and 
possibly how much longer it will take.


It has been running for quite a while now (> 1 month), but the hardware of 
the nodes and the link between the two clusters aren't that great (e.g., the 
volumes are backed by rotating disks), and the volume is somewhat sizeable 
(30-ish TB). Given these details I'm not really sure how long it is supposed 
to take normally.


I have several bricks in the volume (same brick size and physical layout 
in both clusters) that are now showing up with a Changelog Crawl status 
and with a recent LAST_SYNCED date in the `gluster volume 
geo-replication status detail` command output, which seems to be the 
desired state for all bricks. The rest of the bricks though are in 
Hybrid Crawl state and have been in that state forever.


So I suppose my questions are: how can I tell if the replication session is 
somehow broken, and if it's not, is there a way for me to find out the 
progress and the ETA of the replication?
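
For completeness, this is roughly what I am running to watch it, plus the
checkpoint approach from the admin guide (volume and host names below are
placeholders):

  # per-brick crawl status, last-synced time and pending entry counts
  gluster volume geo-replication mastervol slavehost::slavevol status detail

  # set a checkpoint; status then reports when the slave has caught up to it,
  # which at least gives a yes/no answer on whether data is still flowing
  gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now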


In /var/log/glusterfs/geo-replication/$session_dir/gsyncd.log there are 
some errors like:


[2020-03-31 11:48:47.81269] E [syncdutils(worker 
/data/gfs/store1/8/brick):822:errlog] Popen: command returned error 
cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto 
-S /tmp/gsync
d-aux-ssh-6aDWmc/206c4b2c3eb782ea2cf49ab5142bd68b.sock x.x.x.x 
/nonexistent/gsyncd slave  x.x.x.x:: --master-node x.x.x.x 
--master-node-id 9476b8bb-d7ee-489a-b083-875805343e67 --master-brick 
 --local-node x.x.x.x
2 --local-node-id 426b564d-35d9-4291-980e-795903e9a386 --slave-timeout 
120 --slave-log-level INFO --slave-gluster-log-level INFO 
--slave-gluster-command-dir /usr/sbinerror=1
[2020-03-31 11:48:47.81617] E [syncdutils(worker 
):826:logerr] Popen: ssh> failed with ValueError.
[2020-03-31 11:48:47.390397] I [repce(agent 
):97:service_loop] RepceServer: terminating on reaching EOF.


In the brick logs I see stuff like:

[2020-03-29 07:49:05.338947] E [fuse-bridge.c:4167:fuse_xattr_cbk] 
0-glusterfs-fuse: extended attribute not supported by the backend storage


I don't know if these are critical; from the rest of the logs it looks 
like data is traveling between the clusters.


Any help will be greatly appreciated. Thank you in advance!

Best regards,
--
alexander iliev






[Gluster-users] Gluster Volume Rebalance inode modify change times

2020-04-01 Thread Matthew Benstead

Hello,

I have a question about volume rebalancing and modify/change timestamps. 
We're running Gluster 5.11 on CentOS 7.


We recently added an 8th node to our 7-node distribute cluster. We ran 
the necessary fix-layout and rebalance commands after adding the new 
brick, and the storage usage balanced out as expected.


However, we had some unexpected behavior from our backup clients. We use 
Tivoli Storage Manager (TSM) to backup this volume to tape, and we're 
backing up from the volume mountpoint.


We basically saw a large number of files and directories (around the 
number that got moved in the rebalance) get re-backed up despite the 
files not changing... This pushed our backup footprint up nearly 40TB.


When investigating some of the files we saw that the Modify date hadn't 
changed, but the "Change" time had. For directories the Modify and 
Change dates were updated. This caused the backup client to think the 
files had changed... See below:


[root@gluster01 ~]# stat /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png
  File: ‘/storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png’
  Size: 2587          Blocks: 6          IO Block: 131072 regular file
Device: 29h/41d       Inode: 13389859243885309381  Links: 1
Access: (0664/-rw-rw-r--)  Uid: (69618/bveerman)   Gid: (   50/     ftp)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-03-30 10:32:32.326169725 -0700
Modify: 2014-11-24 17:16:57.000000000 -0800
Change: 2020-03-13 21:52:41.158610077 -0700
 Birth: -

[root@gluster01 ~]# stat /storage/data/projects/comp_support/rat/data/basemaps
  File: ‘/storage/data/projects/comp_support/rat/data/basemaps’
  Size: 4096          Blocks: 8          IO Block: 131072 directory
Device: 29h/41d       Inode: 13774747307766344103  Links: 2
Access: (2775/drwxrwsr-x)  Uid: (69618/bveerman)   Gid: (   50/     ftp)
Context: system_u:object_r:fusefs_t:s0
Access: 2020-04-01 03:20:58.644695834 -0700
Modify: 2020-03-14 00:51:31.120718996 -0700
Change: 2020-03-14 00:51:31.384725500 -0700
 Birth: -


If we look at the files in TSM we find that they were backed up because 
the Inode changed. Is this expected for rebalancing? Or is there 
something else going on here?



         Size  Backup Date            Mgmt Class  A/I  File
         ----  -----------            ----------  ---  ----
      2,587 B  2020-03-16 12:14:11    DEFAULT     A    /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png
               Modified: 2014-11-24 17:16:57  Accessed: 2020-03-13 21:52:41  Inode changed: 2020-03-13 21:52:41
               Compression Type: None  Encryption Type: None  Client-deduplicated: NO  Migrated: NO  Inode#: 809741765
               Media Class: Library  Volume ID: 0375  Restore Order: -3684--0046F92D
      2,587 B  2019-10-18 17:01:22    DEFAULT     I    /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png
               Modified: 2014-11-24 17:16:57  Accessed: 2019-08-08 00:22:50  Inode changed: 2019-08-07 10:55:21
               Compression Type: None  Encryption Type: None  Client-deduplicated: NO  Migrated: NO  Inode#: 809741765
               Media Class: Library  Volume ID: 33040  Restore Order: -D9EB--000890E3
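
(In case it helps anyone reproducing this: the pathinfo xattr on the FUSE
mount shows which brick a file currently lives on, so it can confirm whether
the rebalance actually migrated it, and the rebalance counters are visible
too. These are just the stock commands, shown against the paths above:)

  getfattr -n trusted.glusterfs.pathinfo \
      /storage/data/projects/comp_support/rat/data/basemaps/bc_16_0_0_0.png

  gluster volume rebalance storage status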




Volume details:


[root@gluster01 ~]# df -h /storage/
Filesystem    Size  Used Avail Use% Mounted on
10.0.231.50:/storage  291T  210T   82T  72% /storage

[root@gluster01 ~]# gluster --version
glusterfs 5.11
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

[root@gluster01 ~]# cat /proc/mounts | egrep "/storage|raid6-storage"
/dev/sda1 /mnt/raid6-storage xfs 
rw,seclabel,relatime,attr2,inode64,noquota 0 0
10.0.231.50:/storage /storage fuse.glusterfs 
rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 
0 0


[root@gluster01 ~]# cat /etc/fstab | egrep "/storage|raid6-storage"
UUID=104f089e-6171-4750-a592-d41759c67f0c    /mnt/raid6-storage xfs 
defaults 0 0
10.0.231.50:/storage /storage glusterfs 
defaults,log-level=WARNING,backupvolfile-server=10.0.231.51 0 0


[root@gluster01 ~]# gluster volume info storage

Volume Name: storage
Type: Distribute
Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: 10.0.231.50:/mnt/raid6-storage/storage
Brick2: 10.0.231.51:/mnt/raid6-storage/storage
Brick3: 10.0.231.52:/mnt/raid6-storage/storage
Brick4: 10.0.231.53:/mnt/raid6-storage/storage
Brick5: 10.0.231.54:/mnt/raid6-storage/storage
Brick6: 10.0.231.55:/mnt/raid6-storage/storage
Brick7: 10.0.231.56:/mnt/raid6-storage/storage
Brick8: 

Re: [Gluster-users] Can't mount NFS, please help!

2020-04-01 Thread Strahil Nikolov
On April 1, 2020 3:37:35 PM GMT+03:00, Erik Jacobson  
wrote:
>If you are like me and cannot yet switch to Ganesha (it doesn't work in
>our workload yet; I need to get back to working with the community on
>that...)
>
>What I would have expected in the process list was a glusterfs process
>with
>"nfs" in the name.
>
>here it is from one of my systems:
>
>root 57927 1  0 Mar31 ?00:00:00 /usr/sbin/glusterfs -s
>localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l
>/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket
>
>
>My guess - but you'd have to confirm this with the logs - is your
>gluster
>build does not have gnfs built in. Since they wish us to move to
>Ganesha, it is often off by default. For my own builds, I enable it in
>the spec file.
>
>So you should have this installed:
>
>/usr/lib64/glusterfs/7.2/xlator/nfs/server.so
>
>If that isn't there, you likely need to adjust your spec file and
>rebuild.
>
>As others mentioned, the suggestion is to use Ganesha if possible,
>which is a separate project.
>
>I hope this helps!
>
>PS here is a sniip from the spec file I use, with an erikj comment for
>what I adjusted:
>
># gnfs
># if you wish to compile an rpm with the legacy gNFS server xlator
># rpmbuild -ta @PACKAGE_NAME@-@package_vers...@.tar.gz --with gnfs
>%{?_without_gnfs:%global _with_gnfs --disable-gnfs}
>
># erikj force enable
>%global _with_gnfs --enable-gnfs
># end erikj
>
>
>On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cui...@163.com wrote:
>> 1.The gluster server has set volume option nfs.disable to: off
>> 
>> Volume Name: gv0
>> Type: Disperse
>> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gfs1:/brick1/gv0
>> Brick2: gfs2:/brick1/gv0
>> Brick3: gfs3:/brick1/gv0
>> Options Reconfigured:
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: off
>> 
>> 2. The process has start.
>> 
>> [root@gfs1 ~]# ps -ef | grep glustershd
>> root   1117  1  0 10:12 ?00:00:00 /usr/sbin/glusterfs
>-s
>> localhost --volfile-id shd/gv0 -p
>/var/run/gluster/shd/gv0/gv0-shd.pid -l /var/
>> log/glusterfs/glustershd.log -S
>/var/run/gluster/ca97b99a29c04606.socket
>> --xlator-option
>*replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208
>> --process-name glustershd --client-pid=-6
>> 
>> 
>> 3.But the status of gv0 is not correct,for it's status of NFS Server
>is not
>> online.
>> 
>> [root@gfs1 ~]# gluster volume status gv0
>> Status of volume: gv0
>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick gfs1:/brick1/gv0                     49154     0          Y       4180
>> Brick gfs2:/brick1/gv0                     49154     0          Y       1222
>> Brick gfs3:/brick1/gv0                     49154     0          Y       1216
>> Self-heal Daemon on localhost              N/A       N/A        Y       1117
>> NFS Server on localhost                    N/A       N/A        N       N/A
>> Self-heal Daemon on gfs2                   N/A       N/A        Y       1138
>> NFS Server on gfs2                         N/A       N/A        N       N/A
>> Self-heal Daemon on gfs3                   N/A       N/A        Y       1131
>> NFS Server on gfs3                         N/A       N/A        N       N/A
>> 
>> Task Status of Volume gv0
>>
>--
>> There are no active volume tasks
>> 
>> 4.So, I cann't mount the gv0 on my client.
>> 
>> [root@kvms1 ~]# mount -t nfs  gfs1:/gv0 /mnt/test
>> mount.nfs: Connection refused
>> 
>> 
>> Please Help!
>> Thanks!
>> 
>> 
>> 
>> 
>> 
>>
>━━━
>> sz_cui...@163.com
>
>> 
>> 
>> 
>> 
>> Community Meeting Calendar:
>> 
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968 
>> 
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users 
>
>
>
>Erik Jacobson
>Software Engineer
>
>erik.jacob...@hpe.com
>+1 612 851 0550 Office
>
>Eagan, MN
>hpe.com
>
>
>
>
>Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>Bridge: https://bluejeans.com/441850968
>
>Gluster-users mailing list
>Gluster-users@gluster.org
>https://lists.gluster.org/mailman/listinfo/gluster-users

Hello All,


As far as I know, most distributions (at least CentOS does) provide their 
binaries with gNFS disabled.
Most probably you need to rebuild.

You can use Ganesha - it uses libgfapi to connect to the pool.
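
A minimal export for the volume from this thread looks roughly like the
following - written from memory, so double-check the stanza and the file
location against the nfs-ganesha documentation before using it:

  # append an export block for gv0 (config file path may differ on your distro)
  cat >> /etc/ganesha/ganesha.conf <<'EOF'
  EXPORT {
      Export_Id = 1;
      Path = "/gv0";
      Pseudo = "/gv0";
      Access_Type = RW;
      Squash = No_root_squash;
      FSAL {
          Name = GLUSTER;
          Hostname = "localhost";
          Volume = "gv0";
      }
  }
  EOF
  systemctl restart nfs-ganesha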

Best Regards,
Strahil Nikolov





[Gluster-users] Sharding on 7.4 - filesizes may be wrong

2020-04-01 Thread Claus Jeppesen
We're using GlusterFS in a replicated brick setup with 2 bricks, with
sharding turned on (shard size 128 MB).

There is something funny going on as we can see that if we copy large VM
files to the volume we can end up with files that are a bit larger than the
source files DEPENDING on the speed with which we copied the files - e.g.:

   dd if=SOURCE bs=1M | pv -L NNm | ssh gluster_server "dd
of=/gluster/VOL_NAME/TARGET bs=1M"

It seems that if NN is <= 25 (i.e. 25 MB/s) the size of SOURCE and TARGET
will be the same.

If we crank NN to, say, 50 we sometimes risk that a 25G file ends up having
a slightly larger size, e.g. 26844413952 or 26844233728 - larger than the
expected 26843545600.
Unfortunately this is not an illusion! If we dd the files out of Gluster
we receive the amount of data that 'ls' showed us.

In the brick directory (incl. the .shard directory) we have the expected number
of shards for a 25G file (200), each with size precisely equal to 128MB - but
there is an additional 0-size shard file created.
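
(As far as I understand, the file size Gluster reports comes from the shard
xattrs stored on the base file on the brick, so dumping those next to a
listing of the .shard directory is a quick way to see where the extra bytes
are being accounted. Paths below are placeholders for our brick layout:)

  # on a brick: dump the shard-related xattrs of the base file
  getfattr -d -m . -e hex /path/to/brick/VOL_NAME/TARGET

  # shards are named <gfid>.<n> under .shard on the same brick
  ls -l /path/to/brick/.shard | grep <gfid-of-the-file>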

Has anyone else seen a phenomenon like this ?

Thanx,

Claus.

-- 
*Claus Jeppesen*
Manager, Network Services
Datto, Inc.
p +45 6170 5901 | Copenhagen Office
www.datto.com






[Gluster-users] fuse Stale file handle error

2020-04-01 Thread Eli V
Have a directory in a weird state on a Distributed-Replicate, server
is Gluster 7.3, client is the fuse client 6.6. Script did a mkdir then
tried to mv a file into the new dir, which failed. The ls -l of it
from the fuse client gives the stale file handle error and the weird
listing:

d? ? ? ? ?? orig

Looks like from the bricks themselves the directory exists and looks
normal. So what's the proper way to remove this bad directory? Just
rmdir on all the bricks directly?
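
(What I have in mind, if it does come down to touching the bricks, is the
usual pattern of removing both the directory and its gfid entry under
.glusterfs on every brick - sketched below with placeholder paths, so please
correct me if that is the wrong approach:)

  # on each brick: note the directory's gfid first
  getfattr -n trusted.gfid -e hex /path/to/brick/parent/orig

  # directories have a symlink named after that gfid under .glusterfs/aa/bb/
  rm /path/to/brick/.glusterfs/aa/bb/<gfid>
  rmdir /path/to/brick/parent/orig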






Re: [Gluster-users] Can't mount NFS, please help!

2020-04-01 Thread Erik Jacobson
If you are like me and cannot yet switch to Ganesha (it doesn't work in
our workload yet; I need to get back to working with the community on
that...)

What I would have expected in the process list was a glusterfs process with
"nfs" in the name.

here it is from one of my systems:

root 57927 1  0 Mar31 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/run/gluster/nfs/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /var/run/gluster/933ab0ad241fab5f.socket


My guess - but you'd have to confirm this with the logs - is your gluster
build does not have gnfs built in. Since they wish us to move to
Ganesha, it is often off by default. For my own builds, I enable it in
the spec file.

So you should have this installed:

/usr/lib64/glusterfs/7.2/xlator/nfs/server.so

If that isn't there, you likely need to adjust your spec file and
rebuild.

As others mentioned, the suggestion is to use Ganesha if possible,
which is a separate project.

I hope this helps!

PS here is a snippet from the spec file I use, with an erikj comment for
what I adjusted:

# gnfs
# if you wish to compile an rpm with the legacy gNFS server xlator
# rpmbuild -ta @PACKAGE_NAME@-@package_vers...@.tar.gz --with gnfs
%{?_without_gnfs:%global _with_gnfs --disable-gnfs}

# erikj force enable
%global _with_gnfs --enable-gnfs
# end erikj
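
Once a gNFS-enabled build is installed, verification is roughly the following
(gv0 taken from your mail; the xlator path will vary with the version):

  # the nfs xlator should now be present
  ls /usr/lib64/glusterfs/7.2/xlator/nfs/server.so

  # gNFS enabled on the volume, then re-check the status output
  gluster volume set gv0 nfs.disable off
  gluster volume status gv0

  # note gNFS only speaks NFSv3, so mount with vers=3 from the client
  mount -t nfs -o vers=3,proto=tcp gfs1:/gv0 /mnt/test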


On Wed, Apr 01, 2020 at 11:57:16AM +0800, sz_cui...@163.com wrote:
> 1.The gluster server has set volume option nfs.disable to: off
> 
> Volume Name: gv0
> Type: Disperse
> Volume ID: 429100e4-f56d-4e28-96d0-ee837386aa84
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gfs1:/brick1/gv0
> Brick2: gfs2:/brick1/gv0
> Brick3: gfs3:/brick1/gv0
> Options Reconfigured:
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: off
> 
> 2. The process has start.
> 
> [root@gfs1 ~]# ps -ef | grep glustershd
> root   1117  1  0 10:12 ?00:00:00 /usr/sbin/glusterfs -s
> localhost --volfile-id shd/gv0 -p /var/run/gluster/shd/gv0/gv0-shd.pid -l 
> /var/
> log/glusterfs/glustershd.log -S /var/run/gluster/ca97b99a29c04606.socket
> --xlator-option *replicate*.node-uuid=323075ea-2b38-427c-a9aa-70ce18e94208
> --process-name glustershd --client-pid=-6
> 
> 
> 3.But the status of gv0 is not correct,for it's status of NFS Server is not
> online.
> 
> [root@gfs1 ~]# gluster volume status gv0
> Status of volume: gv0
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gfs1:/brick1/gv0  49154 0  Y   4180
> Brick gfs2:/brick1/gv0  49154 0  Y   1222
> Brick gfs3:/brick1/gv0  49154 0  Y   1216
> Self-heal Daemon on localhost   N/A   N/AY   1117
> NFS Server on localhost N/A   N/AN   N/A
> Self-heal Daemon on gfs2N/A   N/AY   1138
> NFS Server on gfs2  N/A   N/AN   N/A
> Self-heal Daemon on gfs3N/A   N/AY   1131
> NFS Server on gfs3  N/A   N/AN   N/A
> 
> Task Status of Volume gv0
> --
> There are no active volume tasks
> 
> 4.So, I cann't mount the gv0 on my client.
> 
> [root@kvms1 ~]# mount -t nfs  gfs1:/gv0 /mnt/test
> mount.nfs: Connection refused
> 
> 
> Please Help!
> Thanks!
> 
> 
> 
> 
> 
> ━━━
> sz_cui...@163.com

> 
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968 
> 
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users 



Erik Jacobson
Software Engineer

erik.jacob...@hpe.com
+1 612 851 0550 Office

Eagan, MN
hpe.com






[Gluster-users] Gluster 6.8: some error messages during op-version-update

2020-04-01 Thread Hu Bert
Hi,

I just upgraded a test cluster from version 5.12 to 6.8; that went
fine, but IIRC after setting the new op-version I saw some error
messages:

3 servers: becquerel, dirac, tesla
2 volumes:
workload, mounted on /shared/public
persistent, mounted on /shared/private

server becquerel, volume persistent:

[2020-04-01 08:36:29.029953] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.317342] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.341508] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.402862] E [MSGID: 101002]
[graph.y:134:new_volume] 0-parser: new volume
(persistent-write-behind) definition in line 308 unexpected
[2020-04-01 08:36:29.402924] E [MSGID: 101098]
[xlator.c:938:xlator_tree_free_members] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.402945] E [MSGID: 101098]
[xlator.c:959:xlator_tree_free_memacct] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.407428] E [MSGID: 101019]
[graph.y:352:graphyyerror] 0-parser: line 309: duplicate 'type'
defined for volume 'xlator_tree_free_memacct'
[2020-04-01 08:36:29.410943] E [MSGID: 101021]
[graph.y:363:graphyyerror] 0-parser: syntax error: line 309 (volume
'xlator_tree_free_memacct'): "performance/write-behind"
allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()

sever becquerel, volume workload:

[2020-04-01 08:36:29.029953] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.317385] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.341511] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.400282] E [MSGID: 101002]
[graph.y:134:new_volume] 0-parser: new volume (workdata-write-behind)
definition in line 308 unexpected
[2020-04-01 08:36:29.400338] E [MSGID: 101098]
[xlator.c:938:xlator_tree_free_members] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.400354] E [MSGID: 101098]
[xlator.c:959:xlator_tree_free_memacct] 0-parser: Translator tree not
found
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2020-04-01 08:36:29
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.12
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x25c3f)[0x7facd212cc3f]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x323)[0x7facd2137163]
/lib/x86_64-linux-gnu/libc.so.6(+0x37840)[0x7facd1846840]
/lib/x86_64-linux-gnu/libc.so.6(+0x15c1a7)[0x7facd196b1a7]
/lib/x86_64-linux-gnu/libc.so.6(_IO_vfprintf+0x1fff)[0x7facd18609ef]
/lib/x86_64-linux-gnu/libc.so.6(__vasprintf_chk+0xc8)[0x7facd19190f8]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg+0x1b0)[0x7facd212dd40]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0xa1970)[0x7facd21a8970]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0xa1d86)[0x7facd21a8d86]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(glusterfs_graph_construct+0x344)[0x7facd21a9a24]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(glusterfs_volfile_reconfigure+0x30)[0x7facd2165cc0]
/usr/sbin/glusterfs(mgmt_getspec_cbk+0x2e1)[0x55a5e0a6de71]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xec60)[0x7facd20f7c60]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xefbf)[0x7facd20f7fbf]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7facd20f44e3]
/usr/lib/x86_64-linux-gnu/glusterfs/5.12/rpc-transport/socket.so(+0xbdb0)[0x7faccde83db0]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x83e7f)[0x7facd218ae7f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7facd1cc0fa3]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7facd19084cf]
-

server tesla: nothing related
server dirac, log for mount volume persistent on /shared/private

[2020-04-01 08:36:29.029845] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.317253] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.341371] I [glusterfsd-mgmt.c:58:mgmt_cbk_spec]
0-mgmt: Volume file changed
[2020-04-01 08:36:29.397448] E [MSGID: 101002]
[graph.y:134:new_volume] 0-parser: new volume
(persistent-write-behind) definition in line 2554 unexpected
[2020-04-01 08:36:29.397546] E [MSGID: 101098]
[xlator.c:938:xlator_tree_free_members] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.397567] E [MSGID: 101098]
[xlator.c:959:xlator_tree_free_memacct] 0-parser: Translator tree not
found
[2020-04-01 08:36:29.403301] E [MSGID: 101021]
[graph.y:377:graphyyerror] 0-parser: syntax error in line 2555: "type"
(allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume')

[2020-04-01 08:36:29.407495] E [MSGID: 101021]
[graph.y:377:graphyyerror] 0-parser: syntax error in line 2555:

Re: [Gluster-users] Gluster 6.8 & debian

2020-04-01 Thread Hu Bert
Hi Sheetal,

Thanks for updating. I just upgraded my test gluster from 5.12 to 6.8 and
everything went fine. The services restart wasn't consistent; on one
server I had to restart the services myself, but that's OK I think.
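
In case it saves someone a search, the upgrade itself was nothing special,
roughly the following - with the repo line written from memory, so
double-check it against download.gluster.org before using it:

  echo "deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/Debian/buster/amd64/apt buster main" \
      > /etc/apt/sources.list.d/gluster.list
  apt-get update
  apt-get install --only-upgrade glusterfs-server glusterfs-client
  systemctl restart glusterd
  gluster --version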


Best regards
Hubert

On Tue, 31 Mar 2020 at 12:23, Sheetal Pamecha
wrote:
>
> Hi,
> The packages are rebuilt with the missing dependencies and updated.
>
> Regards,
> Sheetal Pamecha
>
>
> On Mon, Mar 30, 2020 at 6:53 PM Sheetal Pamecha  wrote:
>>
>> Hi Hubert,
>>
>> This time we triggered the automation scripts for package building instead 
>> of doing it manually. It seems there is a bug in the script that causes all the lib 
>> packages to be excluded.
>> Thanks for trying and pointing it out. We are working to resolve this. I 
>> will update the package once build is complete.
>>
>> Regards,
>> Sheetal Pamecha
>>
>>
>> On Mon, Mar 30, 2020 at 5:28 PM Hu Bert  wrote:
>>>
>>> Hi Sheetal,
>>>
>>> thx so far, but some additional packages are missing: libgfapi0,
>>> libgfchangelog0, libgfrpc0, libgfxdr0, libglusterfs0
>>>
>>> The following packages have unmet dependencies:
>>>  glusterfs-common : Depends: libgfapi0 (>= 6.8) but it is not going to
>>> be installed
>>> Depends: libgfchangelog0 (>= 6.8) but it is not
>>> going to be installed
>>> Depends: libgfrpc0 (>= 6.8) but it is not going to
>>> be installed
>>> Depends: libgfxdr0 (>= 6.8) but it is not going to
>>> be installed
>>> Depends: libglusterfs0 (>= 6.8) but it is not
>>> going to be installed
>>>  glusterfs-server : Depends: libgfapi0 (>= 6.8) but it is not going to
>>> be installed
>>> Depends: libgfrpc0 (>= 6.8) but it is not going to
>>> be installed
>>> Depends: libgfxdr0 (>= 6.8) but it is not going to
>>> be installed
>>> Depends: libglusterfs0 (>= 6.8) but it is not
>>> going to be installed
>>>
>>> All the lib* packages are simply missing for version 6.8, but are
>>> there for version 6.7.
>>>
>>> https://download.gluster.org/pub/gluster/glusterfs/6/6.7/Debian/buster/amd64/apt/pool/main/g/glusterfs/
>>> vs.
>>> https://download.gluster.org/pub/gluster/glusterfs/6/6.8/Debian/buster/amd64/apt/pool/main/g/glusterfs/
>>>
>>> Can you please check?
>>>
>>>
>>> Thx,
>>> Hubert
>>>
>>> On Mon, 30 Mar 2020 at 12:57, Sheetal Pamecha
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have updated the path; LATEST now points to 6.8 and the packages are in 
>>> > place.
>>> > Regards,
>>> > Sheetal Pamecha
>>> >
>>> >
>>> > On Mon, Mar 30, 2020 at 2:23 PM Hu Bert  wrote:
>>> >>
>>> >> Hello,
>>> >>
>>> >> now the packages appeared:
>>> >>
>>> >> https://download.gluster.org/pub/gluster/glusterfs/6/6.8/Debian/buster/amd64/apt/pool/main/g/glusterfs/
>>> >>
>>> >> Dated: 2020-03-17 - so this looks good, right? Thx to the one who... ;-)
>>> >>
>>> >>
>>> >> Best Regards,
>>> >> Hubert
>>> >>
>>> >> On Thu, 26 Mar 2020 at 15:03, Ingo Fischer
>>> >> wrote:
>>> >> >
>>> >> > Hey,
>>> >> >
>>> >> > I also asked for "when 6.8 comes to LATEST" in two mails here the last
>>> >> > weeks ... I would also be very interested in the reasons.
>>> >> >
>>> >> > Ingo
>>> >> >
>>> >> > On 26.03.20 at 07:15, Hu Bert wrote:
>>> >> > > Hello,
>>> >> > >
>>> >> > > i just wanted to test an upgrade from version 5.12 to version 6.8, 
>>> >> > > but
>>> >> > > there are no packages for debian buster in version 6.8.
>>> >> > >
>>> >> > > https://download.gluster.org/pub/gluster/glusterfs/6/6.8/Debian/buster/amd64/apt/
>>> >> > >
>>> >> > > This directory is empty. LATEST still links to version 6.7
>>> >> > >
>>> >> > > https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/ -> 6.7
>>> >> > >
>>> >> > > 6.8 was released on 2nd of march - is there any reason why there are
>>> >> > > no packages? bugs?
>>> >> > >
>>> >> > >
>>> >> > > Best regards
>>> >> > >
>>> >> > > Hubert
>>> >> > > 
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > > Community Meeting Calendar:
>>> >> > >
>>> >> > > Schedule -
>>> >> > > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> >> > > Bridge: https://bluejeans.com/441850968
>>> >> > >
>>> >> > > Gluster-users mailing list
>>> >> > > Gluster-users@gluster.org
>>> >> > > https://lists.gluster.org/mailman/listinfo/gluster-users
>>> >> > >
>>> >> 
>>> >>
>>> >>
>>> >>
>>> >> Community Meeting Calendar:
>>> >>
>>> >> Schedule -
>>> >> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> >> Bridge: https://bluejeans.com/441850968
>>> >>
>>> >> Gluster-users mailing list
>>> >> Gluster-users@gluster.org
>>> >> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>



