[Gluster-devel] When inode table is populated?
Hi,

When we try to call rename from a translator (in reconfigure) using STACK_WIND, the inode table (this->itable) is NULL. Since an inode is required to perform a rename, when does the inode table get populated, and why is it not populated in reconfigure or init? Or should we create a private inode table and generate inodes from it?

-Jiffin

___ Gluster-devel mailing list Gluster-devel@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] How should I submit a testcase without proper solution
hi Anders,

Generally, new test cases are submitted along with the fix, to prevent this situation. You should either submit the fix along with the test case, or, if you are not actively working on it, wait until the fix is submitted by someone else; we can then re-trigger the regression build for this patch and it will be taken in.

Pranith

On 07/29/2014 08:51 PM, Anders Blomdell wrote:
> Hi, I finally got around to looking into "Symlink mtime changes when rebalancing" (https://bugzilla.redhat.com/show_bug.cgi?id=1122443), and I have submitted a test case (http://review.gluster.org/#/c/8383/). However, it is expected to fail (since I have not managed to write a patch that addresses the problem) and hence will be voted down by Jenkins. Is there something I should do about this?
> /Anders
[Gluster-devel] Issues with IPoIB and 1GbE environments
I am having some issues with gluster 3.4.4 in a mixed InfiniBand and 1GbE environment. I want the servers in the cluster to do all their inter-server communication over IPoIB, while clients need to be able to connect over IPoIB or 1GbE depending on their configuration.

When I peer probe the cluster using the IPoIB addresses, I can only mount with the glusterfs client from nodes on the IPoIB network; mounting from nodes that have only 1GbE, against the 1GbE address, fails. The same nodes are able to mount the same volume over NFS via 1GbE. However, when using NFS, the automatic failover doesn't work.

When I peer probe the same servers using the 1GbE addresses instead, I can mount over IB, 1GbE, NFS and glusterfs; it all works, except that inter-server communication seems to be capped at 1GbE speeds.

Any thoughts? Should the automatic failover work with NFS? Should I be able to mount over glusterfs using both IB and 1GbE addresses?

-- V/r Brian P Spallholtz
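One possible approach (an assumption on my part, not something confirmed in this thread) is to probe peers by hostname rather than by IP address, and let name resolution differ per network: the servers resolve the peer names to their IPoIB addresses, while 1GbE-only clients resolve the same names to the 1GbE addresses. A rough sketch, with hypothetical host names, addresses, and brick paths:

```shell
# /etc/hosts on the servers (IPoIB addresses; hypothetical values):
#   10.10.10.1   gluster1
#   10.10.10.2   gluster2
# /etc/hosts on 1GbE-only clients (same names, 1GbE addresses):
#   192.168.1.1  gluster1
#   192.168.1.2  gluster2

# Probe by name, so each machine connects via the address it can reach
gluster peer probe gluster2

# Create and mount the volume by name as well
gluster volume create vol0 replica 2 gluster1:/bricks/b0 gluster2:/bricks/b0
gluster volume start vol0
mount -t glusterfs gluster1:/vol0 /mnt/vol0
```

The glusterfs client fetches the volume layout from the server and then connects to each brick by the name recorded in the volume, so this only helps if the bricks were defined by hostname and those names resolve sensibly on every client.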
[Gluster-devel] Documentation on 'gluster volume replace-brick' out of date?
While looking into "Symlink mtime changes when rebalancing" (https://bugzilla.redhat.com/show_bug.cgi?id=1122443), I noticed that the documentation for 'gluster volume replace-brick' appears to be out of date (v3.7dev-34-g67a6f40). gluster.8 says:

  volume replace-brick <VOLNAME> (<BRICK> <NEW-BRICK>) start|pause|abort|status|commit
      Replace the specified brick.

while 'gluster volume replace-brick volname old-brick new-brick start' says:

  All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n)

Am I right to assume that the correct way to migrate data to a new brick is:

  # gluster volume add-brick volname new-brick
  # gluster volume remove-brick volname old-brick start
  ... wait for completion (it would maybe be nice to have a 'gluster volume volname wait') ...
  # gluster volume remove-brick volname old-brick commit
  Removing brick(s) can result in data loss. Do you want to Continue? (y/n)

It would be nice if the fact that migration is complete was reflected in that dialog.

AFAICT this also means that there is no way to replace a single brick in a replicated volume in such a way that all old brick replicas stay online until the new brick is fully healed/populated, meaning that with a replica count of 2, we only have one active replica until healing is done.

/Anders

-- Anders Blomdell Email: anders.blomd...@control.lth.se Department of Automatic Control Lund University Phone: +46 46 222 4625 P.O. Box 118 Fax: +46 46 138118 SE-221 00 Lund, Sweden
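The add-brick/remove-brick sequence above can be scripted, approximating the missing 'wait' subcommand with a status poll. This is a sketch, not a verified procedure: the volume name and brick paths are placeholders, and the exact wording matched by the grep may differ between gluster versions:

```shell
# Hypothetical volume and bricks; replace with real values.
VOL=volname
OLD=server1:/bricks/old
NEW=server2:/bricks/new

gluster volume add-brick "$VOL" "$NEW"
gluster volume remove-brick "$VOL" "$OLD" start

# Poll until the data migration reports completion
# (assumes the status output contains the word "completed").
until gluster volume remove-brick "$VOL" "$OLD" status | grep -q completed; do
    sleep 10
done

gluster volume remove-brick "$VOL" "$OLD" commit
```

Note that this does not address the replica-exposure concern above: during the migration the volume still runs with one fewer copy of the affected data than before.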
[Gluster-devel] How should I submit a testcase without proper solution
Hi,

I finally got around to looking into "Symlink mtime changes when rebalancing" (https://bugzilla.redhat.com/show_bug.cgi?id=1122443), and I have submitted a test case (http://review.gluster.org/#/c/8383/). However, it is expected to fail (since I have not managed to write a patch that addresses the problem) and hence will be voted down by Jenkins. Is there something I should do about this?

/Anders

-- Anders Blomdell Email: anders.blomd...@control.lth.se Department of Automatic Control Lund University Phone: +46 46 222 4625 P.O. Box 118 Fax: +46 46 138118 SE-221 00 Lund, Sweden