Anand Avati anand.avati@... writes:
That does not seem to be the complete call trace. Can you post the complete trace? This looks like a kernel bug strangely triggered by gluster.
Avati
Sorry for my late response. You are correct about the trace; I only had a visual dump on screen and could not scroll back.
Hello,
I have recently set up gluster-3.1.2 as NFS Virtual Disk Storage for
XenServer.
I ran a Windows VM on it and tested the disk performance:
Read: 100MB/s
Write: 10MB/s
While with standard NFS, on the same servers, we can achieve:
Read: 115MB/s
Write: 100MB/s
We have two servers
hi!
Here is what you asked for.
I hope it helps to solve the problem.
There are two different outputs from different gdb commands:
1.) gdb
(gdb) core /core
(gdb) thread apply all bt full
2.) gdb -batch -ex 'core /core' -ex 'info sharedlib'
output 1.)
virt-zabbix-02:~ # gdb
GNU gdb (GDB)
Shehjar Tikoo wrote:
What gluster config are you using?
My bad, I didn't read the mail completely. The first thing you should try is
running a streaming I/O write performance test using dd or iozone. Let's see how
that performs over the replicated config. Thanks
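A minimal streaming-write test along these lines can be done with dd; the target path and size below are placeholders, and on a real run TARGET should point at a file on the replicated gluster mount:

```shell
# Streaming sequential-write test with dd.
# TARGET is a placeholder; for a real test, put it on the gluster mount.
TARGET=/tmp/gluster_write_test
# conv=fdatasync forces the data to stable storage before dd reports
# its timing, so page-cache writes do not inflate the result.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync
```

iozone gives a more detailed breakdown (e.g. `iozone -a -i 0 -i 1 -f "$TARGET"`), but dd is enough for a first sanity check.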
Stefano Baronio wrote:
Hello,
I
Hi Markus,
Unfortunately the executable does not contain any debug symbols. Is it possible
for you to rebuild glusterfs with debugging enabled and rerun the tests? You
can do:
# export CFLAGS='-g3 -O0'
before running the configure script.
regards,
raghavendra.
- Original Message -
From:
Hi all,
Which version is recommended for production use: 3.1.1 or 3.1.2? It will be
installed on two CentOS 5.x hosts, and I will use it to provide storage space for two
ESXi hosts.
On the other hand, when will RHEL6 be supported?
Many thanks.
--
CL Martinez
carlopmart {at} gmail {d0t}
hi!
I added the compiler flags to the SPEC file and compiled again, removed the old RPMs, and deleted
the whole /etc/gluster* dirs, config files and logfiles.
Then I installed the new RPMs, started glusterd, and ran the mgmt commands -
the strange thing:
NOW IT WORKS!!!
no segfault, no
Hi Markus,
It seems you had stale installations of gluster, and libraries of different
versions got mixed up, thereby causing crashes.
regards,
- Original Message -
From: Markus Fröhlich markus.froehl...@xidras.com
To: Raghavendra G raghaven...@gluster.com
Cc:
Hi all,
I currently have a setup with 30 distributed bricks running Glusterfs 3.1.
Performance under heavy loads is not good, so I want to see if shrinking the
volume will remedy this. However, it is not clear to me from the documentation
how to do this with the volume online without data
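For reference, shrinking a distribute volume is done with the `remove-brick` command; note that in the 3.1 series this detaches the brick without migrating its data first, so files on that brick have to be copied back into the volume afterwards (later releases added a start/status/commit flow that rebalances first). The volume and brick names below are placeholders:

```shell
# Placeholder names (myvolume, server30:/export/brick); run on one of
# the servers. In 3.1.x this detaches the brick outright, so back up
# or re-copy its contents into the volume afterwards.
gluster volume remove-brick myvolume server30:/export/brick
```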
hi!
I don't think so - I've always built and installed RPMs instead of running make
install, so when I uninstall a package, all files / libraries should get removed.
The only files that remain after uninstall are the configs under /etc and
the logfiles.
I now compiled and installed v
On 01/26/2011 07:25 PM, David Lloyd wrote:
Well, I did this and it seems to have worked. I was just guessing really,
didn't have any documentation or advice from anyone in the know.
I just reset the attributes on the root directory for each brick that was
not all zeroes.
I found it easier to
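The attribute reset described above can be sketched roughly as follows; the brick path, volume name, and client index are assumptions, and the exact `trusted.afr.*` keys should be read off each brick before changing anything:

```shell
# Run on the storage server, as root.
# Inspect the AFR changelog xattrs on the brick root (placeholder path):
getfattr -d -m trusted.afr -e hex /export/brick1
# Hypothetical reset of one non-zero changelog back to all zeroes;
# the key name depends on your volume (here: volume "myvol", client 0).
setfattr -n trusted.afr.myvol-client-0 \
         -v 0x000000000000000000000000 /export/brick1
```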
Hello.
I have experienced this situation with the 3.0.4 release of Glusterfs - it was
related to a bug involving recursive file deletions (in my case).
That bug has been fixed in 3.1.1, which is what I am currently running.
Can you give us your Glusterfs version, and a copy of your
Yes, it seemed really dangerous to me too. But with the lack of
documentation, and lack of response from gluster (and the data is still on
the old system too), I thought I'd give it a shot.
Thanks for the explanation. The split-brain problem seems to come up fairly
regularly, but I've not found
David,
The problem you are facing is something we are already investigating.
We still haven't root-caused it, but from what we have seen it happens
only on / and only for the metadata changelog. It shows up as just annoying
logs, but it should not affect your functionality.
Avati
On
Here are the tests:
Gluster 3.1.2 servers are CentOS with 4GB RAM and RAID1 (system) +
RAID5 (3 HDs, 74GB)
Tests were run from the XenServer Dom0 (a CentOS-like system)
NFS standard
Write:
time sh -c "dd if=/dev/zero of=/var/run/sr-mount/db70c449-9ff5-e28c-73ec-25f4fc0b07cf/test/file1.raw bs=512KB count=9765"