Hey folks,
The network timeout setting (45 secs), which allows a node to determine if
another has failed... Is this also invoked when a node is gracefully
shut down/rebooted? I had initially thought it was meant to deal with a hard
failure; however, even rebooting a node for maintenance causes the
Hi Paul, all,
the workaround you found is working for me too.
I would also like to try libgfapi but failed to recompile qemu. Although
http://gluster.org/community/documentation/index.php/Building_QEMU_with_gfapi_for_Debian_based_systems
is fairly accurate, I ended up with missing PXE support.
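For reference, a rough sketch of how such a build might be configured; the `--enable-glusterfs` flag comes from the upstream QEMU configure script, and the target list and checkout source here are assumptions, not the exact steps from the guide above:

```shell
# Sketch: build QEMU with GlusterFS (gfapi) block-driver support.
# Assumes the glusterfs development headers are already installed.
git clone git://git.qemu.org/qemu.git
cd qemu
./configure --target-list=x86_64-softmmu --enable-glusterfs
make -j"$(nproc)"

# Check that the gluster block driver was actually built in:
./qemu-img --help | grep -i gluster
```

If PXE ROMs come up missing at runtime, pointing QEMU at the distribution's ROM directory with `-L` may work around it.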
Agenda for the weekly community meeting has been updated at:
http://titanpad.com/gluster-community-meetings
Please update the agenda if you have items for discussion.
Cheers,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
Could you take a statedump of the bricks and get that information, please?
You can use
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Monitor_Workload-Performing_Statedump.html
for taking statedumps.
Is this the same as
Could you please also capture fop profiling output using the following link:
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Running_Gluster_Volume_Profile_Command
That should show which fops are executing and what their latencies are, which
will be helpful.
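The requests above map to gluster CLI commands along these lines (the volume name `VOLNAME` is a placeholder; statedumps land under `/var/run/gluster` by default):

```shell
# Take a statedump of all brick processes of a volume
# (dump files are written under /var/run/gluster by default).
gluster volume statedump VOLNAME

# Capture fop profiling: start the counters, reproduce the
# workload, then print per-fop call counts and latencies.
gluster volume profile VOLNAME start
gluster volume profile VOLNAME info
```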
Pranith
I am unable to make it, because I have to leave for the airport shortly. I
understand this means I might get tasked with more to-dos than usual :-)
-JM
On Jan 29, 2014 6:13 AM, Vijay Bellur vbel...@redhat.com wrote:
Agenda for the weekly community meeting has been updated at:
On 01/03/2014 04:40 PM, Niels de Vos wrote:
If you are interested in joining us, please let us know by responding to this
email with some details, or add your note to the TitanPad[2]. In case you want
to discuss a specific topic or would like to see a certain GlusterFS
use-case/application,
After concluding that:
Replica 2 + quorum = low availability
Replica 2 - quorum = split-brain (learned the hard way)
I switched to replica 3 for my 2-host oVirt setup.
Since I have only 2 hosts under the control of oVirt (which wants a LOT of
control), I ended up with
Replica 3 + quorum
Brick
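For context, creating such a volume looks roughly like this (hostnames, brick paths, and the volume name are placeholders; with only two real hosts, the third replica has to live on a second brick of an existing host or on a separate box):

```shell
# Sketch: create a 3-way replicated volume and enable client quorum.
# server1/2/3 and /export/brick1 are placeholders.
gluster volume create myvol replica 3 \
    server1:/export/brick1 server2:/export/brick1 server3:/export/brick1
gluster volume set myvol cluster.quorum-type auto
gluster volume start myvol
```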
On 01/29/2014 08:13 PM, John Walker wrote:
I am unable to make it, because I have to leave for the airport shortly. I
understand this means I might get tasked with more to-dos than usual :-)
Of course, we paid special attention in the meeting to ensure that you
have more todos :-).
Hi all,
We are putting a lot of data into a distributed replicated Gluster volume,
about 600 to 640 GB a day.
I need to add more bricks, so can one rebalance and write data at the same
time?
Regards
William John van Jaarsveldt
Technology and Information Systems Operations Manager
Tel: 011 467 1677
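For what it's worth, expanding and rebalancing generally goes along these lines (volume name, hosts, and brick paths are placeholders; rebalance runs online, so clients can keep writing while it progresses):

```shell
# Sketch: grow a distributed-replicated volume, then rebalance.
# Bricks must be added in multiples of the replica count
# (here assumed to be replica 2).
gluster volume add-brick myvol \
    server3:/export/brick1 server4:/export/brick1

# Start the rebalance and watch its progress; the volume stays online.
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```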
On 01/29/2014 04:42 PM, Vijay Bellur wrote:
Agenda for the weekly community meeting has been updated at:
http://titanpad.com/gluster-community-meetings
Please update the agenda if you have items for discussion.
Meeting minutes available here:
I would be disappointed otherwise :-)
On Jan 29, 2014 11:13 AM, Vijay Bellur vbel...@redhat.com wrote:
On 01/29/2014 08:13 PM, John Walker wrote:
I am unable to make it, because I have to leave for the airport shortly. I
Hi Joe,
Sorry to take so long in responding, but we had another emergency
that took all my time...
The subsampled brick log file from gluster-0-1 is available at
http://mseas.mit.edu/download/phaley/GlusterUsers/gluster-0-1/bricks/mseas-data-0-1.log.1
The df results on gluster-0-1 are
Hi Anirban,
Thanks for taking the time to file the Bugzilla report. The fix
has been sent for review upstream (http://review.gluster.org/#/c/6862/).
Once it is merged, I will backport it to 3.4 as well.
Regards,
Ravi
On 01/28/2014 02:07 AM, Chalcogen wrote:
Hi,
I am working on a
When I tried to execute the statedump, the brick server of the BAD node
crashed.
On Wed, Jan 29, 2014 at 8:13 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
Could you take statedump of bricks and get that information please.
You can use
Could you give us the backtrace?
Pranith
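Assuming the crash left a core file, a backtrace can be pulled out along these lines (the binary path and core file location are placeholders, and the matching glusterfs debug symbols should be installed first):

```shell
# Sketch: extract a full backtrace from a brick-process core dump.
# Paths are placeholders; install the glusterfs debuginfo package first.
gdb /usr/sbin/glusterfsd /path/to/core \
    -batch -ex "thread apply all bt full" > backtrace.txt
```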
- Original Message -
From: Mingfan Lu mingfan...@gmail.com
To: Pranith Kumar Karampuri pkara...@redhat.com
Cc: haiwei.xie-soulinfo haiwei@soulinfo.com,
Gluster-users@gluster.org List gluster-users@gluster.org
Sent: Thursday, January 30,