Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi

BTW, if I update gluster to 3.5.1-0.1.beta1.el6 using yum,
glusterfs-libs doesn't install properly.

  Cleanup: glusterfs-libs-3.5qa2-0.510.git5811f5c.el6.x86_64
   11/11 
Non-fatal POSTUN scriptlet failure in rpm package glusterfs-libs

# gluster --version
gluster: error while loading shared libraries: libglusterfs.so.0: cannot open 
shared object file: No such file or directory

Doing 'yum reinstall glusterfs-libs' fixes it:

# gluster --version
glusterfs 3.5.1beta built on May 26 2014 18:38:25
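
For anyone else hitting this, a minimal check/workaround sketch (assuming a
plain RPM install; package names are exactly the ones yum reported above):

# rpm -V glusterfs-libs             # verify whether the package's files are on disk
# ldconfig -p | grep libglusterfs   # check the runtime linker can see libglusterfs.so.0
# yum reinstall glusterfs-libs      # re-runs the scriptlets and puts the library back
# gluster --version                 # should now report 3.5.1beta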


On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote: 
> 
> OK. I've just unmounted the data2 volume from a machine called tape1,
> and am now trying to remount - it's hanging.
> 
> /bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o 
> rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
> 
> on the server 
> # gluster vol status
> Status of volume: data2
> Gluster process                              Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick nas5-10g:/data17/gvol   49152   Y   6553
> Brick nas5-10g:/data18/gvol   49153   Y   6564
> Brick nas5-10g:/data19/gvol   49154   Y   6575
> Brick nas5-10g:/data20/gvol   49155   Y   6586
> Brick nas6-10g:/data21/gvol   49160   Y   20608
> Brick nas6-10g:/data22/gvol   49161   Y   20613
> Brick nas6-10g:/data23/gvol   49162   Y   20614
> Brick nas6-10g:/data24/gvol   49163   Y   20621
>  
> Task Status of Volume data2
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> Sending SIGUSR1 killed the mount processes and I don't see any state
> dumps. What path should they be in? I'm running Gluster installed via
> rpm and I don't see a /var/run/gluster directory.
> 
> On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote: 
> > Franco,
> > 
> > When your clients perceive a hang, could you check the status of the bricks 
> > by running,
> > # gluster volume status VOLNAME  (run this on one of the 'server' machines 
> > in the cluster.)
> > 
> > Could you also provide the statedump of the client(s),
> > by issuing the following command.
> > 
> > # kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)
> > 
> > This would dump the state information of the client, like the file 
> > operations in progress,
> > memory consumed, etc., onto a file under $INSTALL_PREFIX/var/run/gluster. 
> > Please attach this
> > file in your response.
> > 
> > thanks,
> > Krish
> > 
> > - Original Message -
> > > Hi
> > > 
> > > My clients are running 3.4.1. When I try to mount from lots of machines
> > > simultaneously, some of the mounts hang. Stopping and starting the
> > > volume clears the hung mounts.
> > > 
> > > Errors in the client logs
> > > 
> > > [2014-05-28 01:47:15.930866] E
> > > [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: 
> > > failed
> > > to get the port number for remote subvolume. Please run 'gluster volume
> > > status' on server to see if brick process is running.
> > > 
> > > Let me know if you want more information.
> > > 
> > > Cheers,
> > > 
> > > On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > > > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > > > 
> > > > > SRC:
> > > > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > > > 
> > > > This beta release is intended to verify the changes that should resolve
> > > > the bugs listed below. We appreciate tests done by anyone. Please leave
> > > > a comment in the respective bugreport with a short description of the
> > > > success or failure. Visiting one of the bugreports is as easy as opening
> > > > the bugzilla.redhat.com/$BUG URL; for the first in the list, this
> > > > results in http://bugzilla.redhat.com/765202 .
> > > > 
> > > > Bugs expected to be fixed (31 in total since 3.5.0):
> > > > 
> > > >  #765202 - lgetxattr called with invalid keys on the bricks
> > > >  #833586 - inodelk hang from marker_rename_release_newp_lock
> > > >  #859581 - self-heal process can sometimes create directories instead of
> > > >  symlinks for the root gfid file in .glusterfs
> > > >  #986429 - Backupvolfile server option should work internal to GlusterFS
> > > >  framework
> > > > #1039544 - [FEAT] "gluster volume heal info" should list the entries 
> > > > that
> > > > actually required to be healed.
> > > > #1046624 - Unable to heal symbolic Links
> > > > #1046853 - AFR : For every file self-heal there are warning messages
> > > > reported in glustershd.log file
> > > > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > > > was met

Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi

Also forgot to mention that it's impossible to change any volume
settings while 3.4 clients are attached, but I can unmount them all,
change the setting and then mount them all again.

gluster vol set data2 nfs.disable off
volume set: failed: Staging failed on nas5-10g. Error: One or more
connected clients cannot support the feature being set. These clients
need to be upgraded or disconnected before running this command again
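
So the sequence that works here looks like this (sketch only; every 3.4 client
has to be unmounted before the set goes through, hostnames/volume as above):

on every client:
# umount /data2

on the server:
# gluster vol set data2 nfs.disable off

then on every client again:
# mount -t glusterfs nas5-10g:/data2 /data2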


On Thu, 2014-05-29 at 10:34 +0800, Franco Broi wrote: 
> 
> OK. I've just unmounted the data2 volume from a machine called tape1,
> and am now trying to remount - it's hanging.
> 
> /bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o 
> rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
> 
> on the server 
> # gluster vol status
> Status of volume: data2
> Gluster process                              Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick nas5-10g:/data17/gvol   49152   Y   6553
> Brick nas5-10g:/data18/gvol   49153   Y   6564
> Brick nas5-10g:/data19/gvol   49154   Y   6575
> Brick nas5-10g:/data20/gvol   49155   Y   6586
> Brick nas6-10g:/data21/gvol   49160   Y   20608
> Brick nas6-10g:/data22/gvol   49161   Y   20613
> Brick nas6-10g:/data23/gvol   49162   Y   20614
> Brick nas6-10g:/data24/gvol   49163   Y   20621
>  
> Task Status of Volume data2
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> Sending SIGUSR1 killed the mount processes and I don't see any state
> dumps. What path should they be in? I'm running Gluster installed via
> rpm and I don't see a /var/run/gluster directory.
> 
> On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote: 
> > Franco,
> > 
> > When your clients perceive a hang, could you check the status of the bricks 
> > by running,
> > # gluster volume status VOLNAME  (run this on one of the 'server' machines 
> > in the cluster.)
> > 
> > Could you also provide the statedump of the client(s),
> > by issuing the following command.
> > 
> > # kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)
> > 
> > This would dump the state information of the client, like the file 
> > operations in progress,
> > memory consumed, etc., onto a file under $INSTALL_PREFIX/var/run/gluster. 
> > Please attach this
> > file in your response.
> > 
> > thanks,
> > Krish
> > 
> > - Original Message -
> > > Hi
> > > 
> > > My clients are running 3.4.1. When I try to mount from lots of machines
> > > simultaneously, some of the mounts hang. Stopping and starting the
> > > volume clears the hung mounts.
> > > 
> > > Errors in the client logs
> > > 
> > > [2014-05-28 01:47:15.930866] E
> > > [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: 
> > > failed
> > > to get the port number for remote subvolume. Please run 'gluster volume
> > > status' on server to see if brick process is running.
> > > 
> > > Let me know if you want more information.
> > > 
> > > Cheers,
> > > 
> > > On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > > > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > > > 
> > > > > SRC:
> > > > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > > > 
> > > > This beta release is intended to verify the changes that should resolve
> > > > the bugs listed below. We appreciate tests done by anyone. Please leave
> > > > a comment in the respective bugreport with a short description of the
> > > > success or failure. Visiting one of the bugreports is as easy as opening
> > > > the bugzilla.redhat.com/$BUG URL; for the first in the list, this
> > > > results in http://bugzilla.redhat.com/765202 .
> > > > 
> > > > Bugs expected to be fixed (31 in total since 3.5.0):
> > > > 
> > > >  #765202 - lgetxattr called with invalid keys on the bricks
> > > >  #833586 - inodelk hang from marker_rename_release_newp_lock
> > > >  #859581 - self-heal process can sometimes create directories instead of
> > > >  symlinks for the root gfid file in .glusterfs
> > > >  #986429 - Backupvolfile server option should work internal to GlusterFS
> > > >  framework
> > > > #1039544 - [FEAT] "gluster volume heal info" should list the entries 
> > > > that
> > > > actually required to be healed.
> > > > #1046624 - Unable to heal symbolic Links
> > > > #1046853 - AFR : For every file self-heal there are warning messages
> > > > reported in glustershd.log file
> > > > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > > > was met
> > > > #1064096 - The old Python Translator code (not Glupy) should be removed
> > > > #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type
> > > > auto) leads to a split-brain

Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi


OK. I've just unmounted the data2 volume from a machine called tape1,
and am now trying to remount - it's hanging.

/bin/sh /sbin/mount.glusterfs nas5-10g:/data2 /data2 -o 
rw,log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log
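
(For reference, that helper invocation should be equivalent to the plain mount
command below - same options, just not spelled out by mount.glusterfs:)

# mount -t glusterfs -o log-level=info,log-file=/var/log/glusterfs/glusterfs_data2.log nas5-10g:/data2 /data2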

on the server 
# gluster vol status
Status of volume: data2
Gluster process                             Port    Online  Pid
------------------------------------------------------------------------------
Brick nas5-10g:/data17/gvol 49152   Y   6553
Brick nas5-10g:/data18/gvol 49153   Y   6564
Brick nas5-10g:/data19/gvol 49154   Y   6575
Brick nas5-10g:/data20/gvol 49155   Y   6586
Brick nas6-10g:/data21/gvol 49160   Y   20608
Brick nas6-10g:/data22/gvol 49161   Y   20613
Brick nas6-10g:/data23/gvol 49162   Y   20614
Brick nas6-10g:/data24/gvol 49163   Y   20621
 
Task Status of Volume data2
------------------------------------------------------------------------------
There are no active volume tasks

Sending SIGUSR1 killed the mount processes and I don't see any state
dumps. What path should they be in? I'm running Gluster installed via
rpm and I don't see a /var/run/gluster directory.
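
A guess at why no dump appeared (assumption on my part: the client writes its
statedump under /var/run/gluster and silently gives up if that directory does
not exist), so on a freshly mounted client I'd try:

# mkdir -p /var/run/gluster
# kill -USR1 <pid-of-mount-process>
# ls /var/run/gluster/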

On Wed, 2014-05-28 at 22:09 -0400, Krishnan Parthasarathi wrote: 
> Franco,
> 
> When your clients perceive a hang, could you check the status of the bricks 
> by running,
> # gluster volume status VOLNAME  (run this on one of the 'server' machines in 
> the cluster.)
> 
> Could you also provide the statedump of the client(s),
> by issuing the following command.
> 
> # kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)
> 
> This would dump the state information of the client, like the file operations 
> in progress,
> memory consumed, etc., onto a file under $INSTALL_PREFIX/var/run/gluster. 
> Please attach this
> file in your response.
> 
> thanks,
> Krish
> 
> - Original Message -
> > Hi
> > 
> > My clients are running 3.4.1. When I try to mount from lots of machines
> > simultaneously, some of the mounts hang. Stopping and starting the
> > volume clears the hung mounts.
> > 
> > Errors in the client logs
> > 
> > [2014-05-28 01:47:15.930866] E
> > [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed
> > to get the port number for remote subvolume. Please run 'gluster volume
> > status' on server to see if brick process is running.
> > 
> > Let me know if you want more information.
> > 
> > Cheers,
> > 
> > On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > > 
> > > > SRC:
> > > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > > 
> > > This beta release is intended to verify the changes that should resolve
> > > the bugs listed below. We appreciate tests done by anyone. Please leave
> > > a comment in the respective bugreport with a short description of the
> > > success or failure. Visiting one of the bugreports is as easy as opening
> > > the bugzilla.redhat.com/$BUG URL; for the first in the list, this
> > > results in http://bugzilla.redhat.com/765202 .
> > > 
> > > Bugs expected to be fixed (31 in total since 3.5.0):
> > > 
> > >  #765202 - lgetxattr called with invalid keys on the bricks
> > >  #833586 - inodelk hang from marker_rename_release_newp_lock
> > >  #859581 - self-heal process can sometimes create directories instead of
> > >  symlinks for the root gfid file in .glusterfs
> > >  #986429 - Backupvolfile server option should work internal to GlusterFS
> > >  framework
> > > #1039544 - [FEAT] "gluster volume heal info" should list the entries that
> > > actually required to be healed.
> > > #1046624 - Unable to heal symbolic Links
> > > #1046853 - AFR : For every file self-heal there are warning messages
> > > reported in glustershd.log file
> > > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > > was met
> > > #1064096 - The old Python Translator code (not Glupy) should be removed
> > > #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type
> > > auto) leads to a split-brain
> > > #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created
> > > with open(), seek(), write()
> > > #1078061 - Need ability to heal mismatching user extended attributes
> > > without any changelogs
> > > #1078365 - New xlators are linked as versioned .so files, creating
> > > .so.0.0.0
> > > #1086748 - Add documentation for the Feature: AFR CLI enhancements
> > > #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> > > #1086752 - Add documentation for the Feature: On-Wire
> > > Compression/Decompression
> > > #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> > > #1086758 - Add documentation for the Feature: Changelog based parallel
> > > geo-replication

Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Krishnan Parthasarathi
Franco,

When your clients perceive a hang, could you check the status of the bricks by 
running,
# gluster volume status VOLNAME  (run this on one of the 'server' machines in 
the cluster.)

Could you also provide the statedump of the client(s),
by issuing the following command.

# kill -SIGUSR1 pid-of-mount-process (run this on the 'client' machine.)

This would dump the state information of the client, like the file operations 
in progress,
memory consumed, etc., onto a file under $INSTALL_PREFIX/var/run/gluster. Please 
attach this
file in your response.
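
To spell that out (the PID is whatever your mount process is; the path and file
name assume a default, non-prefixed install where dumps land in
/var/run/gluster as glusterdump.<pid>.dump.<timestamp>):

# ps -o pid,cmd -C glusterfs          # pick the PID of the fuse mount for the volume
# kill -SIGUSR1 <pid-of-mount-process>
# ls /var/run/gluster/                # the statedump file should appear here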

thanks,
Krish

- Original Message -
> Hi
> 
> My clients are running 3.4.1. When I try to mount from lots of machines
> simultaneously, some of the mounts hang. Stopping and starting the
> volume clears the hung mounts.
> 
> Errors in the client logs
> 
> [2014-05-28 01:47:15.930866] E
> [client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed
> to get the port number for remote subvolume. Please run 'gluster volume
> status' on server to see if brick process is running.
> 
> Let me know if you want more information.
> 
> Cheers,
> 
> On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote:
> > On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > > 
> > > SRC:
> > > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> > 
> > This beta release is intended to verify the changes that should resolve
> > the bugs listed below. We appreciate tests done by anyone. Please leave
> > a comment in the respective bugreport with a short description of the
> > success or failure. Visiting one of the bugreports is as easy as opening
> > the bugzilla.redhat.com/$BUG URL; for the first in the list, this
> > results in http://bugzilla.redhat.com/765202 .
> > 
> > Bugs expected to be fixed (31 in total since 3.5.0):
> > 
> >  #765202 - lgetxattr called with invalid keys on the bricks
> >  #833586 - inodelk hang from marker_rename_release_newp_lock
> >  #859581 - self-heal process can sometimes create directories instead of
> >  symlinks for the root gfid file in .glusterfs
> >  #986429 - Backupvolfile server option should work internal to GlusterFS
> >  framework
> > #1039544 - [FEAT] "gluster volume heal info" should list the entries that
> > actually required to be healed.
> > #1046624 - Unable to heal symbolic Links
> > #1046853 - AFR : For every file self-heal there are warning messages
> > reported in glustershd.log file
> > #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum
> > was met
> > #1064096 - The old Python Translator code (not Glupy) should be removed
> > #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type
> > auto) leads to a split-brain
> > #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created
> > with open(), seek(), write()
> > #1078061 - Need ability to heal mismatching user extended attributes
> > without any changelogs
> > #1078365 - New xlators are linked as versioned .so files, creating
> > .so.0.0.0
> > #1086748 - Add documentation for the Feature: AFR CLI enhancements
> > #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> > #1086752 - Add documentation for the Feature: On-Wire
> > Compression/Decompression
> > #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> > #1086758 - Add documentation for the Feature: Changelog based parallel
> > geo-replication
> > #1086760 - Add documentation for the Feature: Write Once Read Many (WORM)
> > volume
> > #1086762 - Add documentation for the Feature: BD Xlator - Block Device
> > translator
> > #1088848 - Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
> > #1089054 - gf-error-codes.h is missing from source tarball
> > #1089470 - SMB: Crash on brick process during compile kernel.
> > #1089934 - list dir with more than N files results in Input/output error
> > #1091340 - Doc: Add glfs_fini known issue to release notes 3.5
> > #1091392 - glusterfs.spec.in: minor/nit changes to sync with Fedora spec
> > #1095775 - Add support in libgfapi to fetch volume info from glusterd.
> > #1095971 - Stopping/Starting a Gluster volume resets ownership
> > #1096040 - AFR : self-heal-daemon not clearing the change-logs of all the
> > sources after self-heal
> > #1096425 - i/o error when one user tries to access RHS volume over NFS with
> > 100+ GIDs
> > #1099878 - Need support for handle based Ops to fetch/modify extended
> > attributes of a file
> > 
> > 
> > Before a final glusterfs-3.5.1 release is made, we hope to have all the
> > blocker bugs fixed. There are currently 13 bugs marked that still need
> > some work done:
> > 
> > #1081016 - glusterd needs xfsprogs and e2fsprogs packages
> > #1086743 - Add documentation for the Feature: RDMA-connection manager
> > (RDMA-CM)
> > #1086749 - Add documentation for the Feature: Exposing Volume Capabilities
> > #1086751 - Add documentation for the Feature: gfid-access
> > #1086754 - Add documentation for the Feature: Quota Scalability

Re: [Gluster-devel] [Gluster-users] glusterfs-3.5.1beta released

2014-05-28 Thread Franco Broi
Hi

My clients are running 3.4.1. When I try to mount from lots of machines
simultaneously, some of the mounts hang. Stopping and starting the
volume clears the hung mounts.

Errors in the client logs

[2014-05-28 01:47:15.930866] E 
[client-handshake.c:1741:client_query_portmap_cbk] 0-data2-client-3: failed to 
get the port number for remote subvolume. Please run 'gluster volume status' on 
server to see if brick process is running.
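
For what it's worth, a quick connectivity sanity check from one of the hung
clients looks something like this (assuming the usual ports: glusterd on 24007
and the brick ports as reported by 'gluster volume status', e.g. 49152 for the
first brick here; nc is whatever netcat the distro ships):

# nc -z -w 3 nas5-10g 24007 && echo glusterd reachable
# nc -z -w 3 nas5-10g 49152 && echo brick port reachable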

Let me know if you want more information.

Cheers,

On Sun, 2014-05-25 at 11:55 +0200, Niels de Vos wrote: 
> On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> > 
> > SRC: 
> > http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
> 
> This beta release is intended to verify the changes that should resolve 
> the bugs listed below. We appreciate tests done by anyone. Please leave 
> a comment in the respective bugreport with a short description of the 
> success or failure. Visiting one of the bugreports is as easy as opening 
> the bugzilla.redhat.com/$BUG URL; for the first in the list, this 
> results in http://bugzilla.redhat.com/765202 .
> 
> Bugs expected to be fixed (31 in total since 3.5.0):
> 
>  #765202 - lgetxattr called with invalid keys on the bricks
>  #833586 - inodelk hang from marker_rename_release_newp_lock
>  #859581 - self-heal process can sometimes create directories instead of 
> symlinks for the root gfid file in .glusterfs
>  #986429 - Backupvolfile server option should work internal to GlusterFS 
> framework
> #1039544 - [FEAT] "gluster volume heal info" should list the entries that 
> actually required to be healed.
> #1046624 - Unable to heal symbolic Links
> #1046853 - AFR : For every file self-heal there are warning messages reported 
> in glustershd.log file
> #1063190 - [RHEV-RHS] Volume was not accessible after server side quorum was 
> met
> #1064096 - The old Python Translator code (not Glupy) should be removed
> #1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type auto) 
> leads to a split-brain
> #1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with 
> open(), seek(), write()
> #1078061 - Need ability to heal mismatching user extended attributes without 
> any changelogs
> #1078365 - New xlators are linked as versioned .so files, creating 
> .so.0.0.0
> #1086748 - Add documentation for the Feature: AFR CLI enhancements
> #1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
> #1086752 - Add documentation for the Feature: On-Wire 
> Compression/Decompression
> #1086756 - Add documentation for the Feature: zerofill API for GlusterFS
> #1086758 - Add documentation for the Feature: Changelog based parallel 
> geo-replication
> #1086760 - Add documentation for the Feature: Write Once Read Many (WORM) 
> volume
> #1086762 - Add documentation for the Feature: BD Xlator - Block Device 
> translator
> #1088848 - Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
> #1089054 - gf-error-codes.h is missing from source tarball
> #1089470 - SMB: Crash on brick process during compile kernel.
> #1089934 - list dir with more than N files results in Input/output error
> #1091340 - Doc: Add glfs_fini known issue to release notes 3.5
> #1091392 - glusterfs.spec.in: minor/nit changes to sync with Fedora spec
> #1095775 - Add support in libgfapi to fetch volume info from glusterd.
> #1095971 - Stopping/Starting a Gluster volume resets ownership
> #1096040 - AFR : self-heal-daemon not clearing the change-logs of all the 
> sources after self-heal
> #1096425 - i/o error when one user tries to access RHS volume over NFS with 
> 100+ GIDs
> #1099878 - Need support for handle based Ops to fetch/modify extended 
> attributes of a file
> 
> 
> Before a final glusterfs-3.5.1 release is made, we hope to have all the 
> blocker bugs fixed. There are currently 13 bugs marked that still need 
> some work done:
> 
> #1081016 - glusterd needs xfsprogs and e2fsprogs packages
> #1086743 - Add documentation for the Feature: RDMA-connection manager 
> (RDMA-CM)
> #1086749 - Add documentation for the Feature: Exposing Volume Capabilities
> #1086751 - Add documentation for the Feature: gfid-access
> #1086754 - Add documentation for the Feature: Quota Scalability
> #1086755 - Add documentation for the Feature: readdir-ahead
> #1086759 - Add documentation for the Feature: Improved block device translator
> #1086766 - Add documentation for the Feature: Libgfapi
> #1086774 - Add documentation for the Feature: Access Control List - Version 3 
> support for Gluster NFS
> #1086781 - Add documentation for the Feature: Eager locking
> #1086782 - Add documentation for the Feature: glusterfs and  oVirt integration
> #1086783 - Add documentation for the Feature: qemu 1.3 - libgfapi integration
> #1095595 - Stick to IANA standard while allocating brick ports
> 
> A more detailed overview of the status of each of these bugs is here:
> - https://bugzilla.redhat.com/showdependencytree.cgi?id=