Niels,
Thanks for your answer. Can you look at the du examples below? Right now I am
concerned with gluster0:group0 and group1.
They are not replicating properly; they are supposed to replicate across 3 of
my 5 nodes. Not shown here are nodes 2 and 3.
Thanks!
root@node0:/data/brick1# du
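To compare usage across the replica set, a helper along these lines can summarise each brick's size so the nodes can be diffed side by side (node names and brick paths below are assumptions based on the setup described, not the actual config):

```shell
# Hedged sketch: report a brick directory's total size in bytes so the
# same path can be compared across replica nodes.
brick_size() {
    # du -sb prints "<bytes>\t<path>"; keep just the byte count
    du -sb "$1" | cut -f1
}

# Illustrative usage against a replica-3 set (hostnames are assumptions):
#   for n in node0 node1 node4; do
#       printf '%s: ' "$n"
#       ssh "$n" "du -sb /data/brick1/group0 | cut -f1"
#   done
# Healthy replicas should report roughly the same size.
```

A mismatch here only suggests a problem; comparing file lists or gfids on the bricks gives a more definitive answer.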
And now that I have it all set up for logging etc., I can't reproduce the error :(
Though I did manage to score a "volume rebalance: teststore1: failed:
Another transaction is in progress for teststore1. Please try again
after sometime" problem. No gluster commands would work after that; I
had to restart
Thanks for the quick response here.
How does this make it to a release? Should I hope for it in 3.8.6?
> On Oct 20, 2016, at 11:48 AM, Jiffin Tony Thottan wrote:
> On 19/10/16 20:54, Jackie Tung wrote:
>> Thanks Jiffin, filed
On 19/10/16 20:54, Jackie Tung wrote:
Thanks Jiffin, filed https://bugzilla.redhat.com/show_bug.cgi?id=1386766
Given my limited knowledge of the original reasons for the 1GB hardcode,
either removing the limit altogether, or an additional "override"
option parameter, would be preferable in my
1.
sudo getfattr -d -m. -e hex
/gluster/site.com/wordpress/wp-content/plugins/gravityforms/includes/fields/class-gf-field-calculation.php
getfattr: Removing leading '/' from absolute path names
# file:
yes
I checked the trusted.gfid xattr for luo on all bricks,
as below:
--
getfattr -m . -d -e hex /data2/gluster/video/luo
getfattr: Removing leading '/' from absolute path names
# file: data2/gluster/video/luo
trusted.gfid=0x5383184bd9df49e580da03a1b8fd1105
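A small helper makes it easier to diff that gfid across bricks (a sketch; the getfattr output format is assumed from the sample above, and the brick hosts/paths in the usage comment are assumptions):

```shell
# Hedged sketch: extract the trusted.gfid value from the output of
# `getfattr -m . -d -e hex <path>` so values from several bricks can be
# compared. Reads getfattr output on stdin.
extract_gfid() {
    sed -n 's/^trusted\.gfid=//p'
}

# Illustrative usage (hostnames and paths are assumptions):
#   for h in brick1 brick2 brick3; do
#       ssh "$h" "getfattr -m . -d -e hex /data2/gluster/video/luo 2>/dev/null" \
#           | extract_gfid
#   done
# Every brick holding a replica of the file should print the same value;
# a differing or missing gfid points at the inconsistent brick.
```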
Hi All,
Our weekly community meetings have become mainly one hour of status
updates. This just drains the life out of the meeting, and doesn't
encourage new attendees to speak up.
Let's try and change this. For the next meeting, let's try skipping
updates altogether and instead just dive into
[from
http://blog.nixpanic.net/2016/10/glusterfs-385-is-ready-for-consumption.html]
Another month, another GlusterFS 3.8 update! We're committed to fixing
reported bugs in the 3.8 Long-Term-Maintenance version, with monthly
releases. Here is glusterfs-3.8.5 for increased stability. Packages for
Thanks a lot, Lindsay! Appreciate the help.
It would be awesome if you could tell us whether you
see the issue with FUSE as well, while we get around
to setting up the environment and running the test ourselves.
-Krutika
On Thu, Oct 20, 2016 at 2:57 AM, Lindsay Mathieson <
Hi All,
The GoGFAPI Go package is now a Gluster project [1]!
I created the github.com/kshlm/gogfapi/gfapi package over 3 years
ago, as a project to learn Go.
Since then, the project has been moving slowly, and found some users
and contributors. There are still TODOs left to be
On Thu, Oct 20, 2016 at 10:47:49AM +0200, Josep Manel Andrés wrote:
> Thanks Kevin,
>
> hahah, you are right, it works for me now, but it may not work in the
> future when running Tomcat on it; Tomcat may need files from gluster
> before it is mounted.
>
Yep, took us a while to find a
Hi,
This is scary stuff. While not as scary, you might confirm a bug that I
reported a while back on your test systems:
https://bugzilla.redhat.com/show_bug.cgi?id=1370832
Cheers,
Hans Henrik
On 19-10-2016 08:40, Krutika Dhananjay wrote:
Agreed.
I will run the same test on an actual vm
Thanks Kevin,
hahah, you are right, it works for me now, but it may not work in the
future when running Tomcat on it; Tomcat may need files from gluster
before it is mounted.
So, I will go for AutoFS.
Thanks a lot guys! ;)
On 20/10/16 10:44, Kevin Lemonnier wrote:
On Thu, Oct 20, 2016 at
On Thu, Oct 20, 2016 at 10:38:52AM +0200, Josep Manel Andrés wrote:
> Right, client and server are on the same box. I don't really understand
> what autofs does exactly... but isn't it the same to use:
>
> sleep 10 && mount /gluster
>
> on rc.local ?
>
More or less, yes.
AutoFS mounts the
Right, client and server are on the same box. I don't really understand
what autofs does exactly... but isn't it the same to use:
sleep 10 && mount /gluster
on rc.local ?
Cheers!
On 20/10/16 09:09, Kevin Lemonnier wrote:
On Wed, Oct 19, 2016 at 06:17:07PM +0200, Josep Manel Andrés wrote:
> Hi ,
> I am trying to mount a volume during boot time, here are the logs from
Is your client also your server ? If yes, the problem is that gluster
hasn't started yet when fstab is processed, we use autofs as a workaround.
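For reference, an autofs setup along these lines handles that race (map file names, mount point, and volume name here are assumptions for illustration, not our exact config):

```
# /etc/auto.master -- direct map entry (assumed layout)
/-    /etc/auto.gluster    --timeout=60

# /etc/auto.gluster -- mount the volume on first access
/gluster    -fstype=glusterfs    localhost:/myvolume
```

Because autofs only performs the mount on first access, it doesn't matter that glusterd isn't running yet when the system processes fstab at boot.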
opsld04:/etc/tls/certs # cat /etc/fstab
UUID=9ad7cc4f-cafd-49fc-b0b0-4e68e118db46  swap       swap  defaults        0 0
UUID=3307f813-d80b-49fa-a5f2-bc7b6b6ebdbc  /          ext4  acl,user_xattr  1 1
UUID=6E03-6160                             /boot/efi  vfat
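A gluster mount can also be kept in fstab with options that defer it past early boot; something like the following is often suggested (volume name and mountpoint are assumptions, and on a box where the server is local this may still race with glusterd, which is presumably why autofs was preferred above):

```
# Illustrative /etc/fstab entry (volume and mountpoint assumed)
localhost:/myvolume  /gluster  glusterfs  defaults,_netdev,x-systemd.automount  0 0
```

`_netdev` delays the mount until networking is up, and `x-systemd.automount` (on systemd systems) turns it into an on-demand automount much like the autofs approach.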
Hi All,
We have kept our official Gluster Container images in Docker hub for
CentOS and Fedora distros for some time now.
https://hub.docker.com/r/gluster/gluster-centos/
https://hub.docker.com/r/gluster/gluster-fedora/
I see a massive increase in downloads of these container images for the past
On Thu, Oct 20, 2016 at 10:11:33AM +1000, Lindsay Mathieson wrote:
> On 18/10/2016 6:38 PM, Niels de Vos wrote:
> > Nice!
>
> Thanks! I had reason to restart a node this morning, I've attached a log
> graph. And I get emails like the following:
>
>Subject: PROBLEM: Heal 200
>
>Trigger: