We had a connectivity issue on a "tar+ssh" geo-rep link yesterday that
caused a lot of problems. When the link came back up it immediately went
into a "faulty" state, and the logs were showing "Operation not permitted"
and "File exists" errors in a loop.
We were finally able to get things back on
----- Original Message -----
> From: "Pranith Kumar Karampuri"
> To: "Ben Turner", "Humble Devassy Chirammal", "Atin Mukherjee"
> Cc: "gluster-users"
> Sent:
On 10/14/2015 07:02 PM, Игорь Бирюлин wrote:
Hello,
today in my 2-node replica set I've found a split-brain. The 'ls' command
started returning 'Input/output error'.
What does the mount log (/var/log/glusterfs/.log) say
when you get this error?
Can you run getfattr as root for the file from *both*
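For reference, the getfattr check suggested above is run against the brick path on each replica node, not the client mount. A sketch, with the brick root and file path as placeholders for this setup:

```shell
BRICK=/storage/gluster_brick_repofiles   # brick root on this node (placeholder)
F=xxx/keyrings/debian-keyring.gpg        # file path relative to the brick (placeholder)
# Dump all extended attributes in hex; the trusted.afr.* xattrs encode
# pending-operation counts. Non-zero counts blaming each other on both
# replicas for the same file indicate split-brain.
getfattr -d -m . -e hex "$BRICK/$F"
```

Comparing the trusted.afr.* values from both nodes side by side usually makes it clear which copy each brick considers stale.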
On 10/14/2015 10:05 PM, Игорь Бирюлин wrote:
Thanks for your reply.
If I do a listing in the mount point (/repo):
# ls /repo/xxx/keyrings/debian-keyring.gpg
ls: cannot access /repo/xxx/keyrings/debian-keyring.gpg: Input/output
error
#
In the log /var/log/glusterfs/repo.log I see:
[2015-10-14 16:27:36.006815] W [MSGID: 108008]
Hi Pranith,
Will this patch improve the heal performance on distributed disperse
volume? Currently we are getting 10MB/s heal performance on a 10G backend
network. The SHD daemon takes 5 days to complete the heal operation for a single
4TB (3.5 TB data) disk failure.
Regards,
Backer
On Wed, Oct 14,
Does anyone have any thoughts on this?
Sateesh
On Tue, Oct 13, 2015 at 5:44 PM, satish kondapalli
wrote:
> Hi,
>
> I want to mount gluster volume as a root file system for my node.
>
> Node will boot from network( only kernel and initrd images) but my root
> file system
Thanks for the detailed description.
Do you have plans to add resolution of GFID split-brain via 'gluster volume heal
VOLNAME split-brain ...' ?
What is the main difference between GFID split-brain and data split-brain? On the
nodes this file is completely different in data content and size, or isn't that
'data' in
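For data and metadata split-brain, the policy-based CLI referred to above lets you pick a winning copy per file; a sketch, with volume name, brick, and file path as placeholders (GFID split-brain has historically needed manual intervention on the bricks instead):

```shell
# Choose one replica's copy as the authoritative source for a given file.
gluster volume heal VOLNAME split-brain source-brick \
    server1:/bricks/brick1 /path/to/file

# Or resolve by size, keeping whichever copy is larger.
gluster volume heal VOLNAME split-brain bigger-file /path/to/file
```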
Expandable storage for small blobs that we don't want to store in a
database.
On Tue, Oct 13, 2015, 11:06 PM M.Tarkeshwar Rao
wrote:
> Hi,
>
> Can you please suggest uses of GlusterFS in production (companies)?
> Is it stable there?
>
> Regards
> Tarkeshwar
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Hi all,
I'm pleased to announce the release of GlusterFS-3.7.5. This release
includes 70 changes after 3.7.4. The list of fixed bugs is included
below.
Tarball and RPMs can be downloaded from
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.5/
Ubuntu debs are available from
On 14 October 2015 at 15:17, Pranith Kumar Karampuri
wrote:
> I didn't understand the reason for recreating the setup. Is upgrading
> rpms/debs not enough?
>
> Pranith
>
The distro I'm using (Proxmox/Debian) broke backward compatibility with
their latest major upgrade,
Hi All,
In 30 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
Hi Serkan,
On 13/10/15 15:53, Serkan Çoban wrote:
Hi Xavier and thanks for your answers.
Servers will have 26*8TB disks. I don't want to lose more than 2 disks
to RAID,
so my options are HW RAID6 24+2 or 2 * HW RAID5 12+1,
A RAID5 of more than 8-10 disks is normally considered unsafe because
On 10/15/2015 12:46 AM, Lindsay Mathieson wrote:
On 14 October 2015 at 15:17, Pranith Kumar Karampuri
> wrote:
I didn't understand the reason for recreating the setup. Is
upgrading rpms/debs not enough?
Pranith
The distro I'm
On 10/14/2015 10:43 PM, Mohamed Pakkeer wrote:
Hi Pranith,
Will this patch improve the heal performance on distributed disperse
volume? Currently we are getting 10MB/s heal performance on a 10G
backend network. The SHD daemon takes 5 days to complete the heal operation
for a single 4TB (3.5 TB
I'd recommend Proxmox as a virtualizing platform. In the new version (4) HA works
with a few clicks and no need for external fencing devices (all is done by
the watchdog from now on). Also it runs fine with GlusterFS as VM storage
(I'm running about 20 VMs / KVM / on gluster and thinking of moving 10 more
LS,
I recently reconfigured one of my gluster nodes and forgot to update the MTU
size on the switch, although I had configured the host with jumbo frames.
The result was that the complete cluster had communication issues.
All systems are part of a distributed striped volume with a replica size of 2
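An MTU mismatch like this can be caught with a do-not-fragment ping before putting a node back into service; for a 9000-byte MTU the maximum ICMP payload is the MTU minus 28 bytes of IP+ICMP headers (the peer hostname below is a placeholder):

```shell
MTU=9000
PAYLOAD=$((MTU - 28))   # 20-byte IP header + 8-byte ICMP header
echo "$PAYLOAD"         # 8972

# -M do sets the don't-fragment bit; this fails if any hop in between
# (e.g. the switch) is configured with a smaller MTU.
ping -M do -s "$PAYLOAD" -c 1 -W 1 peer-node || echo "host unreachable or path MTU too small"
```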
Hello,
today in my 2-node replica set I've found a split-brain. The 'ls' command
started returning 'Input/output error'.
But the command 'gluster v heal VOLNAME info split-brain' does not show the
problem files:
# gluster v heal repofiles info split-brain
Brick dist-int-master03.xxx:/storage/gluster_brick_repofiles
Hi Xavier,
>I'm not sure if I understand you. Are you saying you will create two separate
>gluster volumes or you will add both bricks to the same distributed-dispersed
>volume ?
Is adding more than one brick from the same host to a disperse gluster
volume recommended? I meant two different
Admittedly an odd case, but...
o I have a simple geo-replication setup: master -> slave.
o I've mounted the master's volume on the master host.
o I've also setup rsyncd server on the master:
[master-volume]
path = /mnt/master-volume
read only = false
o I now rsync
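With the rsyncd module above, a client push into the master volume would look roughly like this (hostname and source path are placeholders); the geo-replication session then carries the files on to the slave:

```shell
# Push local files into the gluster master volume via the rsync daemon.
# --inplace avoids rsync's write-to-temp-then-rename pattern, which tends
# to behave better on gluster-backed storage.
rsync -av --inplace ./data/ rsync://master-host/master-volume/
```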