[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #632

2018-02-15 Thread jenkins
See 


Changes:

[Sheena Artrip] rpc: Adds rpcbind6 programs to libgfrpc symbols

[Amar Tumballi] protocol/client: Insert dummy clnt-lk-version to avoid upgrade failure

[Xavier Hernandez] tests: bring option of per test timeout

--
[...truncated 914.71 KB...]
./tests/basic/ios-dump.t  -  10 second
./tests/basic/inode-quota-enforcing.t  -  10 second
./tests/basic/afr/stale-file-lookup.t  -  10 second
./tests/basic/afr/heal-info.t  -  10 second
./tests/basic/afr/arbiter-statfs.t  -  10 second
./tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t  -  9 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  9 second
./tests/bugs/quota/bug-1292020.t  -  9 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t  -  9 second
./tests/bugs/nfs/bug-1157223-symlink-mounting.t  -  9 second
./tests/bugs/glusterfs/bug-861015-log.t  -  9 second
./tests/bugs/glusterd/sync-post-glusterd-restart.t  -  9 second
./tests/bugs/glusterd/bug-949930.t  -  9 second
./tests/bugs/ec/bug-1179050.t  -  9 second
./tests/bugs/cli/bug-1087487.t  -  9 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  9 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  9 second
./tests/bugs/access-control/bug-958691.t  -  9 second
./tests/basic/volume-status.t  -  9 second
./tests/basic/quota-nfs.t  -  9 second
./tests/basic/pgfid-feat.t  -  9 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/basic/ec/ec-anonymous-fd.t  -  9 second
./tests/basic/cdc.t  -  9 second
./tests/basic/afr/compounded-write-txns.t  -  9 second
./tests/gfid2path/get-gfid-to-path.t  -  8 second
./tests/features/lock-migration/lkmigration-set-option.t  -  8 second
./tests/bugs/upcall/bug-1458127.t  -  8 second
./tests/bugs/upcall/bug-1227204.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  8 second
./tests/bugs/shard/bug-1488546.t  -  8 second
./tests/bugs/shard/bug-1258334.t  -  8 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  8 second
./tests/bugs/quick-read/bz1523599/bz1523599.t  -  8 second
./tests/bugs/posix/bug-1360679.t  -  8 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  8 second
./tests/bugs/md-cache/bug-1211863.t  -  8 second
./tests/bugs/glusterfs-server/bug-904300.t  -  8 second
./tests/bugs/glusterd/bug-1242875-do-not-pass-volinfo-quota.t  -  8 second
./tests/bugs/fuse/bug-963678.t  -  8 second
./tests/bugs/distribute/bug-882278.t  -  8 second
./tests/bugs/distribute/bug-1368012.t  -  8 second
./tests/bugs/distribute/bug-1088231.t  -  8 second
./tests/bugs/core/bug-949242.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bug-1371806_2.t  -  8 second
./tests/bitrot/br-stub.t  -  8 second
./tests/basic/ec/ec-read-policy.t  -  8 second
./tests/basic/afr/gfid-mismatch.t  -  8 second
./tests/basic/afr/gfid-heal.t  -  8 second
./tests/gfid2path/block-mount-access.t  -  7 second
./tests/features/ssl-authz.t  -  7 second
./tests/bugs/snapshot/bug-1064768.t  -  7 second
./tests/bugs/replicate/bug-966018.t  -  7 second
./tests/bugs/replicate/bug-767585-gfid.t  -  7 second
./tests/bugs/replicate/bug-1365455.t  -  7 second
./tests/bugs/replicate/bug-1101647.t  -  7 second
./tests/bugs/quota/bug-1243798.t  -  7 second
./tests/bugs/quota/bug-1104692.t  -  7 second
./tests/bugs/io-cache/bug-read-hang.t  -  7 second
./tests/bugs/glusterfs/bug-893338.t  -  7 second
./tests/bugs/fuse/bug-985074.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-884597.t  -  7 second
./tests/bugs/core/bug-986429.t  -  7 second
./tests/bugs/core/bug-834465.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  7 second
./tests/bitrot/bug-1221914.t  -  7 second
./tests/basic/hardlink-limit.t  -  7 second
./tests/basic/gfapi/mandatory-lock-optimal.t  -  7 second
./tests/basic/ec/nfs.t  -  7 second
./tests/basic/ec/ec-internal-xattrs.t  -  7 second
./tests/basic/ec/ec-fallocate.t  -  7 second
./tests/basic/ec/dht-rename.t  -  7 second
./tests/basic/afr/arbiter-remove-brick.t  -  7 second
./tests/bugs/upcall/bug-upcall-stat.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/unclassified/bug-1034085.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/shard/bug-1342298.t  -  6 second
./tests/bugs/shard/bug-1272986.t  -  6 second
./tests/bugs/shard/bug-1259651.t  -  6 second
./tests/bugs/shard/bug-1256580.t  -  6 second
./tests/bugs/quota/bug-1287996.t  -  6 second
./tests/bugs/posix/bug-765380.t  -  6 second
./tests/bugs/nfs/bug-915280.t  -  6 second
./tests/bugs/nfs/bug-877885.t  -  6 second
./tests/bugs/md-cache/bug-1211863_unlink.t  -  6 second
./tests/bugs/md-cache/afr-stale-read.t  -  6 

[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3856

2018-02-15 Thread jenkins
See 


--
[...truncated 256.89 KB...]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
not ok 32 Got "" instead of "4", LINENUM:73
FAILED COMMAND: 4 ec_child_up_count patchy 0
ok 33, LINENUM:76
ok 34, LINENUM:79
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
not ok 35 Got "" instead of "6", LINENUM:80
FAILED COMMAND: 6 ec_child_up_count patchy 0
not ok 36 Got "" instead of "^0$", LINENUM:83
FAILED COMMAND: ^0$ get_pending_heal_count patchy
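The empty-string mismatches above (`Got "" instead of "4"` / `"6"`) are what the harness's count checks print when a poll times out before the expected state is reached. A minimal sketch of such a poll-until-expected helper; the name `expect_within` and its interface are assumptions for illustration, not the harness's actual include.rc code:

```shell
# Poll a command until its output equals the expected value or the
# timeout (in seconds) expires; on timeout, report the mismatch in
# the same style as the log above.
expect_within() {
    timeout=$1; expected=$2; shift 2
    actual=""
    while [ "$timeout" -gt 0 ]; do
        actual=$("$@")
        [ "$actual" = "$expected" ] && return 0
        sleep 1
        timeout=$((timeout - 1))
    done
    echo "not ok: Got \"$actual\" instead of \"$expected\"" >&2
    return 1
}
```

With a helper like this, a check such as `expect_within 20 4 ec_child_up_count patchy 0` would keep polling the up-count instead of sampling it once.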
cat: /var/run/gluster/vols/patchy/builder106.cloud.gluster.org-d-backends-patchy3.pid: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
ok 37, LINENUM:86
cat: /var/run/gluster/vols/patchy/builder106.cloud.gluster.org-d-backends-patchy4.pid: No such file or directory
Usage: gf_attach uds_path volfile_path (to attach)
   gf_attach -d uds_path brick_path (to detach)
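The `gf_attach` usage text printed above implies invocations of the following shape. The socket, volfile, and brick paths below are hypothetical placeholders, and the commands are only printed (dry-run), not executed:

```shell
# Dry-run wrapper: print the command line that would be run, so the
# shape of the attach/detach calls is visible without a live glusterd.
run() { printf '+ %s\n' "$*"; }

UDS=/var/run/gluster/brick-daemon.socket                  # assumed socket path
VOLFILE=/var/lib/glusterd/vols/patchy/patchy.brick0.vol   # assumed volfile path
BRICK=/d/backends/patchy0                                 # assumed brick path

run gf_attach "$UDS" "$VOLFILE"      # attach: uds_path volfile_path
run gf_attach -d "$UDS" "$BRICK"     # detach: -d uds_path brick_path
```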
ok 38, LINENUM:87
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is a directory
rm: cannot remove ‘/var/run/gluster/’: Is a directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
sed: read error on /var/run/gluster/: Is 
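The repeated `sed`/`rm` "Is a directory" errors above are characteristic of a cleanup step whose glob under /var/run/gluster matches a directory (or, when nothing matches, is passed through unexpanded). A guarded sketch of such a cleanup loop; the variable name and glob are assumptions, not the actual test-harness code:

```shell
# Kill processes recorded in pid files under the run directory,
# skipping anything that is not a regular file.
RUNDIR=${RUNDIR:-/var/run/gluster}

for pidfile in "$RUNDIR"/*.pid; do
    [ -f "$pidfile" ] || continue        # skip dirs and unexpanded globs
    pid=$(sed -n '1p' "$pidfile")        # read the PID from the file
    kill "$pid" 2>/dev/null || true      # tolerate already-dead processes
    rm -f "$pidfile"
done
```

The `[ -f ... ]` guard is what keeps `sed`, `kill`, and `rm` from ever seeing the directory itself.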

[Gluster-Maintainers] Build failed in Jenkins: regression-test-burn-in #3855

2018-02-15 Thread jenkins
See 


Changes:

[Xavier Hernandez] tests: bring option of per test timeout

--
[...truncated 478.56 KB...]
not ok 43 , LINENUM:50
FAILED COMMAND: gluster --mode=script --wignore volume create patchy-vol09 replica 2 builder106.cloud.gluster.org:/d/backends/vol09/brick0 builder106.cloud.gluster.org:/d/backends/vol09/brick1 builder106.cloud.gluster.org:/d/backends/vol09/brick2 builder106.cloud.gluster.org:/d/backends/vol09/brick3 builder106.cloud.gluster.org:/d/backends/vol09/brick4 builder106.cloud.gluster.org:/d/backends/vol09/brick5
volume start: patchy-vol09: failed: Volume patchy-vol09 does not exist
not ok 44 , LINENUM:51
FAILED COMMAND: gluster --mode=script --wignore volume start patchy-vol09
not ok 45 Got "0" instead of "7", LINENUM:53
FAILED COMMAND: 7 count_up_bricks patchy-vol09
not ok 46 , LINENUM:56
FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0 -s builder106.cloud.gluster.org --volfile-id=patchy-vol09 /mnt/glusterfs/vol09
ok 47, LINENUM:83
ok 48, LINENUM:50
not ok 49 , LINENUM:51
FAILED COMMAND: gluster --mode=script --wignore volume start patchy-vol10
not ok 50 Got "0" instead of "7", LINENUM:53
FAILED COMMAND: 7 count_up_bricks patchy-vol10
not ok 51 , LINENUM:56
FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0 -s builder106.cloud.gluster.org --volfile-id=patchy-vol10 /mnt/glusterfs/vol10
ok 52, LINENUM:83
not ok 53 , LINENUM:50
FAILED COMMAND: gluster --mode=script --wignore volume create patchy-vol11 replica 2 builder106.cloud.gluster.org:/d/backends/vol11/brick0 builder106.cloud.gluster.org:/d/backends/vol11/brick1 builder106.cloud.gluster.org:/d/backends/vol11/brick2 builder106.cloud.gluster.org:/d/backends/vol11/brick3 builder106.cloud.gluster.org:/d/backends/vol11/brick4 builder106.cloud.gluster.org:/d/backends/vol11/brick5
not ok 54 , LINENUM:51
FAILED COMMAND: gluster --mode=script --wignore volume start patchy-vol11
not ok 55 Got "0" instead of "7", LINENUM:53
FAILED COMMAND: 7 count_up_bricks patchy-vol11
not ok 56 , LINENUM:56
FAILED COMMAND: _GFS --attribute-timeout=0 --entry-timeout=0 -s builder106.cloud.gluster.org --volfile-id=patchy-vol11 /mnt/glusterfs/vol11
ok 57, LINENUM:83
ok 58, LINENUM:50
ok 59, LINENUM:51
ok 60, LINENUM:53
ok 61, LINENUM:56
ok 62, LINENUM:83
ok 63, LINENUM:50
ok 64, LINENUM:51
ok 65, LINENUM:53
ok 66, LINENUM:56
ok 67, LINENUM:83
ok 68, LINENUM:50
ok 69, LINENUM:51
ok 70, LINENUM:53
ok 71, LINENUM:56
ok 72, LINENUM:83
ok 73, LINENUM:50
ok 74, LINENUM:51
ok 75, LINENUM:53
ok 76, LINENUM:56
ok 77, LINENUM:83
ok 78, LINENUM:50
ok 79, LINENUM:51
ok 80, LINENUM:53
ok 81, LINENUM:56
ok 82, LINENUM:83
ok 83, LINENUM:50
ok 84, LINENUM:51
ok 85, LINENUM:53
ok 86, LINENUM:56
ok 87, LINENUM:83
ok 88, LINENUM:50
ok 89, LINENUM:51
ok 90, LINENUM:53
ok 91, LINENUM:56
ok 92, LINENUM:83
ok 93, LINENUM:50
ok 94, LINENUM:51
ok 95, LINENUM:53
ok 96, LINENUM:56
ok 97, LINENUM:83
ok 98, LINENUM:50
ok 99, LINENUM:51
ok 100, LINENUM:53
ok 101, LINENUM:56
ok 102, LINENUM:83
ok 103, LINENUM:87
ok 104, LINENUM:89
ok 105, LINENUM:95
rm: cannot remove ‘/mnt/glusterfs/0’: Is a directory
Aborting.

/mnt/nfs/1 could not be deleted, here are the left over items
drwxr-xr-x. 2 root root 4096 Feb 15 14:26 /mnt/glusterfs/0

Please correct the problem and try again.

Dubious, test returned 1 (wstat 256, 0x100)
Failed 15/105 subtests 

Test Summary Report
---
./tests/bugs/core/bug-1432542-mpx-restart-crash.t (Wstat: 256 Tests: 105 Failed: 15)
  Failed tests:  30-31, 40-41, 43-46, 49-51, 53-56
  Non-zero exit status: 1
Files=1, Tests=105, 294 wallclock secs ( 0.06 usr  0.01 sys + 15.12 cusr  8.86 csys = 24.05 CPU)
Result: FAIL
./tests/bugs/core/bug-1432542-mpx-restart-crash.t: bad status 1

*********************************
*       REGRESSION FAILED       *
* Retrying failed tests in case *
* we got some spurious failures *
*********************************
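The banner above describes the harness's retry policy: a failed test file is rerun once before the failure is treated as real. A minimal sketch of that policy; the function name is an assumption, and the real logic lives in the project's test runner, not here:

```shell
# Run a command; if it fails, run it once more and let the second
# attempt decide the final status.
retry_if_failed() {
    "$@" && return 0
    echo "retrying: $* (failure may be spurious)" >&2
    "$@"
}
```

Under this policy, something like `retry_if_failed prove ./tests/bugs/core/bug-1432542-mpx-restart-crash.t` produces exactly the rerun seen below.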

./tests/bugs/core/bug-1432542-mpx-restart-crash.t .. 
1..105
ok 1, LINENUM:74
ok 2, LINENUM:75
ok 3, LINENUM:50
ok 4, LINENUM:51
ok 5, LINENUM:53
ok 6, LINENUM:56
ok 7, LINENUM:83
ok 8, LINENUM:50
ok 9, LINENUM:51
ok 10, LINENUM:53
ok 11, LINENUM:56
ok 12, LINENUM:83
ok 13, LINENUM:50
ok 14, LINENUM:51
ok 15, LINENUM:53
ok 16, LINENUM:56
ok 17, LINENUM:83
ok 18, LINENUM:50
ok 19, LINENUM:51
ok 20, LINENUM:53
ok 21, LINENUM:56
ok 22, LINENUM:83
ok 23, LINENUM:50
ok 24, LINENUM:51
ok 25, LINENUM:53
ok 26, LINENUM:56
ok 27, LINENUM:83
ok 28, LINENUM:50
ok 29, LINENUM:51
ok 30, LINENUM:53
ok 31, LINENUM:56
ok 32, LINENUM:83
ok 33, LINENUM:50
ok 34, LINENUM:51
ok 35, LINENUM:53
ok 36, LINENUM:56
ok 37, LINENUM:83
ok 38, LINENUM:50
ok 39, LINENUM:51
ok 40, LINENUM:53
ok 41, LINENUM:56
ok 42, LINENUM:83
ok 43, LINENUM:50
ok 44, LINENUM:51
ok 45, LINENUM:53
ok 46, LINENUM:56
ok 47, LINENUM:83
ok 48, LINENUM:50
ok 49, 

Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.12.6 released

2018-02-15 Thread Javier Romero
You're welcome.

Regards,
Javier Romero
E-mail: xavi...@gmail.com
Skype: xavinux



2018-02-15 9:22 GMT-03:00 Kaleb S. KEITHLEY :
> 3.12.6 rpms have been tagged for release.
>
> thanks for testing Javier.
>
>
> On 02/14/2018 09:40 AM, Javier Romero wrote:
>> Hi Kaleb,
>>
>> I've run a yum install on CentOS 7.4.1708 (Core) with kernel
>> 3.10.0-693.17.1.el7.x86_64 and the installation was successful.
>>
>> # yum install centos-release-gluster312
>> # yum --enablerepo=centos-gluster312-test install glusterfs-server
>> # systemctl start glusterd.service
>> # systemctl status glusterd.service
>>
>> ● glusterd.service - GlusterFS, a clustered file-system server
>>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
>> vendor preset: disabled)
>>Active: active (running) since Wed 2018-02-14 11:36:47 -03; 5s ago
>>   Process: 23036 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
>> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited,
>> status=0/SUCCESS)
>>  Main PID: 23037 (glusterd)
>>CGroup: /system.slice/glusterd.service
>>        └─23037 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
>>
>> Feb 14 11:36:47 centos-7 systemd[1]: Starting GlusterFS, a clustered
>> file-system server...
>> Feb 14 11:36:47 centos-7 systemd[1]: Started GlusterFS, a clustered
>> file-system server.
>>
>> Regards,
>> Javier Romero
>> E-mail: xavi...@gmail.com
>> Skype: xavinux
>>
>>
>>
>> 2018-02-14 11:14 GMT-03:00 Kaleb S. KEITHLEY :
>>> On 02/14/2018 09:01 AM, Kaleb S. KEITHLEY wrote:
 On 02/13/2018 01:18 PM, jenk...@build.gluster.org wrote:
> SRC: 
> https://build.gluster.org/job/release-new/41/artifact/glusterfs-3.12.6.tar.gz
> HASH: 
> https://build.gluster.org/job/release-new/41/artifact/glusterfs-3.12.6.sha512sum
>
> This release is made off jenkins-release-41
>

 Packages for:

 * Fedora 27 are in the Fedora Updates or Updates-Testing repo. Use `dnf`
 to install. Fedora 26 is on download.gluster.org at [1].

 * Debian Jessie/8, Stretch/9, and Buster/10(Sid) are on
 download.gluster.org at [1]

 * Ubuntu Xenial/16.04, Artful/17.10, and Bionic/18.04 are on Launchpad
 at [2]

 * SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
 at [3].

 The .../LTM-3.12 -> .../3.12/3.12.6 and .../3.12/LATEST ->
 .../3.12/3.12.6 symlinks have been updated.

>>>
>>> Packages for the CentOS Storage SIG are available. Before tagging them
>>> for release they should be tested:
>>>
>>>   # yum install centos-release-gluster312
>>>   # yum --enablerepo=centos-gluster312-test install glusterfs-server
>>>
>>> Please let us know if the update works well for you. Once we have
>>> feedback I'll tag them for release to the CentOS mirror network.
>>>
 [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/
 [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
 [3] https://build.opensuse.org/project/subprojects/home:glusterfs

>>>
>>> --
>>>
>>> Kaleb
>>> ___
>>> packaging mailing list
>>> packag...@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/packaging
>>
>
___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers


Re: [Gluster-Maintainers] [gluster-packaging] glusterfs-3.12.6 released

2018-02-15 Thread Kaleb S. KEITHLEY
3.12.6 rpms have been tagged for release.

thanks for testing Javier.


On 02/14/2018 09:40 AM, Javier Romero wrote:
> Hi Kaleb,
> 
> I've run a yum install on CentOS 7.4.1708 (Core) with kernel
> 3.10.0-693.17.1.el7.x86_64 and the installation was successful.
> 
> # yum install centos-release-gluster312
> # yum --enablerepo=centos-gluster312-test install glusterfs-server
> # systemctl start glusterd.service
> # systemctl status glusterd.service
> 
> ● glusterd.service - GlusterFS, a clustered file-system server
>Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
> vendor preset: disabled)
>Active: active (running) since Wed 2018-02-14 11:36:47 -03; 5s ago
>   Process: 23036 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
> --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited,
> status=0/SUCCESS)
>  Main PID: 23037 (glusterd)
>CGroup: /system.slice/glusterd.service
>        └─23037 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> 
> Feb 14 11:36:47 centos-7 systemd[1]: Starting GlusterFS, a clustered
> file-system server...
> Feb 14 11:36:47 centos-7 systemd[1]: Started GlusterFS, a clustered
> file-system server.
> 
> Regards,
> Javier Romero
> E-mail: xavi...@gmail.com
> Skype: xavinux
> 
> 
> 
> 2018-02-14 11:14 GMT-03:00 Kaleb S. KEITHLEY :
>> On 02/14/2018 09:01 AM, Kaleb S. KEITHLEY wrote:
>>> On 02/13/2018 01:18 PM, jenk...@build.gluster.org wrote:
 SRC: 
 https://build.gluster.org/job/release-new/41/artifact/glusterfs-3.12.6.tar.gz
 HASH: 
 https://build.gluster.org/job/release-new/41/artifact/glusterfs-3.12.6.sha512sum

 This release is made off jenkins-release-41

>>>
>>> Packages for:
>>>
>>> * Fedora 27 are in the Fedora Updates or Updates-Testing repo. Use `dnf`
>>> to install. Fedora 26 is on download.gluster.org at [1].
>>>
>>> * Debian Jessie/8, Stretch/9, and Buster/10(Sid) are on
>>> download.gluster.org at [1]
>>>
>>> * Ubuntu Xenial/16.04, Artful/17.10, and Bionic/18.04 are on Launchpad
>>> at [2]
>>>
>>> * SuSE SLES12SP3, Leap42.3, and Tumbleweed are on OpenSuSE Build Service
>>> at [3].
>>>
>>> The .../LTM-3.12 -> .../3.12/3.12.6 and .../3.12/LATEST ->
>>> .../3.12/3.12.6 symlinks have been updated.
>>>
>>
>> Packages for the CentOS Storage SIG are available. Before tagging them
>> for release they should be tested:
>>
>>   # yum install centos-release-gluster312
>>   # yum --enablerepo=centos-gluster312-test install glusterfs-server
>>
>> Please let us know if the update works well for you. Once we have
>> feedback I'll tag them for release to the CentOS mirror network.
>>
>>> [1] https://download.gluster.org/pub/gluster/glusterfs/3.12/
>>> [2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
>>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>>>
>>
>> --
>>
>> Kaleb
>> ___
>> packaging mailing list
>> packag...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/packaging
> 

___
maintainers mailing list
maintainers@gluster.org
http://lists.gluster.org/mailman/listinfo/maintainers