Hi,
The basic patches for md-cache and for integrating it with cache-invalidation are
merged in master. You could try a master build and enable the following settings
to see if there is any impact on tiering performance:
# gluster volume set <VOLNAME> performance.stat-prefetch on
# gluster volume se
Hi,
Because of the bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1350880
https://bugzilla.redhat.com/show_bug.cgi?id=1352482
3.7.11 is not a good version to be on for the QEMU use case; please update to
3.7.13, which has fixes for both bugs.
W.R.T your original permission denied issue, p
GlusterFS version: 3.8.3
Source build
Host: Ubuntu 14.04 (32-bit)
Error:
gluster: symbol lookup error: gluster: undefined symbol: use_spinlocks
I was previously using 3.7.6 without any issues.
Some dependencies are not updated properly. Can someone help?
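An "undefined symbol" error like this usually means the binary picked up a stale copy of a shared library left over from the earlier 3.7.6 install. One way to probe whether a given shared library actually exports a symbol is via ctypes; a minimal sketch (probing libc here purely as an illustration — the library and symbol names to check on a real system are up to you):

```python
import ctypes
import ctypes.util

def has_symbol(lib_name, symbol):
    """Return True if the shared library named lib_name exports symbol.

    lib_name is given without the "lib" prefix or ".so" suffix, e.g.
    "c" for libc. A missing export is exactly what the dynamic linker
    complains about with "undefined symbol: use_spinlocks".
    """
    path = ctypes.util.find_library(lib_name)
    if path is None:
        raise OSError("library %r not found by the linker" % lib_name)
    lib = ctypes.CDLL(path)
    # CDLL raises AttributeError from dlsym() when the symbol is absent,
    # so hasattr() doubles as a symbol-presence check.
    return hasattr(lib, symbol)
```

Checking each installed copy of libglusterfs for `use_spinlocks` this way would show which one is stale.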
Best Regards
JK
hi Aravinda,
I was wondering what your opinion is on sending selected logs as
events instead of treating them specially. Is this something you
considered? Do you think it is a bad idea to do it that way? We could even
come up with a new API that logs and then sends the message as an event.
--
Pran
On Fri, Aug 12, 2016 at 03:48:49PM -0400, Vijay Bellur wrote:
..
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
On Sat, Aug 20, 2016 at 5:49 AM, Jeff Darcy wrote:
> For those who are interested, here's the current development status.
>
> The good news is that the current patch[1] works well enough for almost
> all of the basic tests and 22/32 of the basic/afr tests to run
> successfully. The exceptions ha
I would like to follow up on a previous thread. I have here 3 machines
running Ubuntu. All were running 14.04 LTS and of these two have been
upgraded to 16.04. They all run QEMU with a shared GlusterFS mount for
storing VM images. Libgfapi was configured and running on all hosts with
14.04 but has
Here's one from me:
Sharding in GlusterFS - Past, Present and Future
I intend to cover the following in this talk:
* What sharding is, and what its benefits are over striping and in general
* Current design
* Use cases - VM image store/HC/ROBO
* Challenges - atomicity, synchronization across multi
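The core idea in the outline above reduces to simple arithmetic: a file is stored as fixed-size pieces, and every I/O offset maps to one of them. A sketch of that mapping (the 64 MiB default matches the shard-block-size setting; everything else here is illustrative):

```python
def shard_for_offset(offset, shard_size=64 * 1024 * 1024):
    """Map a byte offset in a file to (shard index, offset within shard).

    A sharded file is stored as fixed-size pieces: shard 0 is the base
    file itself, and shards 1..N hold the remaining data.
    """
    if offset < 0 or shard_size <= 0:
        raise ValueError("offset must be >= 0 and shard_size > 0")
    return offset // shard_size, offset % shard_size

# With the default size, a write at offset 100 MiB lands 36 MiB into shard 1.
```

The atomicity and synchronization challenges listed above come precisely from writes that straddle two such pieces, since the pieces can live on different subvolumes.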
On Mon, Aug 22, 2016 at 5:15 PM, Jeff Darcy wrote:
> Two proposals, both pretty developer-focused.
>
> (1) Gluster: The Ugly Parts
> Like any code base its size and age, Gluster has accumulated its share of
> dead, redundant, or simply inelegant code. This code makes us more
> vulnerable to bugs
Not a bad idea for a workaround, but that would require significant
investment with our current setup. All of our compute nodes are stateless /
have no disks. All storage is network storage. It's probably still not
feasible if we added disks because some simulations produce terabytes of
data. We wo
[1] and [2], please. Those are two parts of one fix backported
from master. They are already in 3.8, so only the backport to 3.7
is left.
Regards,
Oleksandr
[1] http://review.gluster.org/#/c/14835/
[2] http://review.gluster.org/#/c/15167/
22.08.2016 15:25, Kaushal M wrote:
No
Hi all.
We have 1 more week till the scheduled 30th August release date for
GlusterFS-3.7.15.
As of today, 34 new commits have been merged into release-3.7 since the
tagging of 3.7.14. Gerrit has ~30 open patches on release-3.7 [1],
about 10 of which have been submitted after 3.7.14.
Notify the m
Let's try this again.
We are doing a final screening of the 3.6 bug list after the next
bug-triage meeting (1200 UTC, 23 Aug 2016, i.e. tomorrow).
All maintainers are requested to attend this meeting and screen bugs
for their components. The list of bugs is available at [1]. Bugs that
have been stric
Two proposals, both pretty developer-focused.
(1) Gluster: The Ugly Parts
Like any code base its size and age, Gluster has accumulated its share of dead,
redundant, or simply inelegant code. This code makes us more vulnerable to
bugs, and slows our entire development process for any feature. I
On Mon, Aug 22, 2016 at 1:03 PM, Nigel Babu wrote:
> Note
> * A bunch of regressions are not included since we switched job names
> * NetBSD had a lot of aborted runs this past week.
>
> *16* of *89* regressions failed
>
> *./tests/basic/afr/add-brick-self-heal.t;* Failed *6* times
>
> Regression
On Fri, Aug 12, 2016 at 03:48:49PM -0400, Vijay Bellur wrote:
..
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will
22.08.2016 10:34, Nigel Babu wrote:
./tests/basic/gfapi/gfapi-trunc.t; Failed 6 times
Fixed: http://review.gluster.org/#/c/15223/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
On Mon, Aug 22, 2016 at 03:33:18PM +0800, jayakrishnan mm wrote:
> Glusterfs 3.7.6
> Host: x86_64-linux (both client & Server)
>
> Volume : Disperse
>
> Volume creation succeeds, but I am unable to start the volume.
>
>
> Brick log says libgfdb.so.0 can't be opened. How can I install this?
On Mon, Aug 22, 2016 at 1:04 PM, Nigel Babu wrote:
> Note: A few failures don't show up because we switched job names.
>
>
> *50* of *102* regressions failed
>
> *./tests/basic/afr/add-brick-self-heal.t;* Failed *1* times
>
> Regression Link: http://build.gluster.org/job/centos6-regression/5/
> c
Note: A few failures don't show up because we switched job names.
*50* of *102* regressions failed
*./tests/basic/afr/add-brick-self-heal.t;* Failed *1* times
Regression Link:
http://build.gluster.org/job/centos6-regression/5/consoleText
Node: slave21.cloud.gluster.org
*./tests/basic/afr/entr
Note
* A bunch of regressions are not included since we switched job names
* NetBSD had a lot of aborted runs this past week.
*16* of *89* regressions failed
*./tests/basic/afr/add-brick-self-heal.t;* Failed *6* times
Regression Link:
http://build.gluster.org/job/netbsd7-regression/66/consoleTex
Glusterfs 3.7.6
Host: x86_64-linux (both client & Server)
Volume : Disperse
Volume creation succeeds, but I am unable to start the volume.
Brick log says libgfdb.so.0 can't be opened. How can I install this ?
There is no mention of such a library in the build requirements
(
https://glust
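Before digging into the build requirements, it is worth checking whether the dynamic linker can find libgfdb at all. A minimal sketch (the library name is passed without the "lib" prefix and ".so" suffix; libc is probed in the example only as a known-present control):

```python
import ctypes.util

def locate_library(name):
    """Return the filename the dynamic linker resolves for a shared
    library, or None if it cannot be found -- the situation the brick
    log reports for libgfdb.so.0.
    """
    return ctypes.util.find_library(name)

# locate_library("gfdb") returning None would mean the tiering database
# library was not built, or was installed outside the linker search path.
```

If the file exists but is outside the search path, adding its directory to the linker configuration and re-running ldconfig is the usual remedy.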