On 03/06/2018 10:25 AM, jenk...@build.gluster.org wrote:
> SRC: 
> https://build.gluster.org/job/release-new/46/artifact/glusterfs-4.0.0.tar.gz
> HASH: 
> https://build.gluster.org/job/release-new/46/artifact/glusterfs-4.0.0.sha512sum
> 
> This release is made off jenkins-release-46
> 
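For anyone grabbing the tarball by hand, it can be checked against the
published .sha512sum file with `sha512sum -c`. A small sketch, using a
stand-in file in place of the real download (swap in the tarball and
checksum fetched from the build.gluster.org URLs above):

```shell
# Verify a release tarball against its published .sha512sum file.
# The two files below are stand-ins; replace them with the real
# glusterfs-4.0.0.tar.gz and .sha512sum downloaded from the URLs above.
TARBALL=glusterfs-4.0.0.tar.gz
printf 'stand-in tarball contents\n' > "$TARBALL"   # stand-in for the real download
sha512sum "$TARBALL" > "$TARBALL.sha512sum"         # stand-in for the published checksum

# The actual check: prints "<file>: OK" and exits 0 when the hash matches
sha512sum -c "$TARBALL.sha512sum"
```

This assumes the published checksum file is in the standard `sha512sum`
format (hash, two spaces, filename).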

glusterfs-4.0.0 packages for:

* Fedora 26, 27, and 28 are on download.gluster.org at [1]. Fedora 29
packages are in the Fedora Rawhide repo. Use `dnf` to install.

* Debian Stretch/9 and Buster/10 (Sid) are on download.gluster.org at [1]
(arm64 packages coming later).

* Xenial/16.04, Artful/17.10, and Bionic/18.04 will be on Launchpad at [2]
shortly.

* SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
at [3].

* RHEL/CentOS el7 and el6 (el6 client-side only) are in the CentOS
Storage SIG at [4].
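For reference, typical install commands for the repos above might look
like the following. This is a sketch, not verified against each repo;
the `centos-release-gluster40` release-package name and the exact PPA
name are assumptions based on the URLs at [2] and [4]:

```shell
# Fedora (run as root or with sudo)
dnf install glusterfs-server

# Ubuntu, from the Launchpad PPA at [2]
add-apt-repository ppa:gluster/glusterfs-4.0
apt-get update && apt-get install glusterfs-server

# CentOS 7, from the Storage SIG at [4]
# (centos-release-gluster40 is an assumed name for the 4.0 release package)
yum install centos-release-gluster40
yum install glusterfs-server
```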


glusterd2-4.0.0 packages for:

* Fedora 26, 27, 28, and 29 are on download.gluster.org at [5].
Eventually rpms will be available in Fedora itself (probably Fedora 29)
pending completion of the package review.

* RHEL/CentOS el7 packages are in the CentOS Storage SIG at [4].

* SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
at [3]. The glusterd2 rpms are in the same repos as the matching
glusterfs rpms.

All the LATEST and STM-4.0 symlinks have been created or updated to
point to the 4.0.0 release.

Please test the CentOS packages and give feedback so that packages can
be tagged for release.

And of course the Debian and Ubuntu glusterfs packages are usable
without glusterd2, so go ahead and start using them now. Help from
anyone with Debian golang packaging experience in packaging GlusterD2
would be appreciated.

[1] https://download.gluster.org/pub/gluster/glusterfs/4.0
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-4.0
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] https://buildlogs.centos.org/centos/7/storage/$arch/gluster-4.0
[5] https://download.gluster.org/pub/gluster/glusterd2/4.0

_______________________________________________
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers
