Thank you very much for all the efforts. I have deployed a new cluster with 3
servers and used nfs-ganesha instead of the native NFS; so far it's working
fine. I also tried to reproduce this issue in a test environment, but I had no
luck and it just worked as it should. Do you think I should
Thanks Atin for your answer.
I just tried a "gluster v get cluster.op-version" on my Gluster volume and got:

Option                                  Value
------                                  -----
cluster.op-version
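For reference, a minimal sketch of how the op-version is usually inspected and,
after upgrading every node, raised cluster-wide. The volume name and the
version number below are placeholders, not recommendations:

    # read the running operating version from glusterd's local state file
    grep operating-version /var/lib/glusterd/glusterd.info

    # or query it through the CLI against an existing volume
    gluster volume get <VOLNAME> cluster.op-version

    # once all nodes run the newer release, bump it cluster-wide
    gluster volume set all cluster.op-version 30712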
On 08/08/2016 01:39 PM, David Gossage wrote:
So now that I have my cluster on 3.7.14 and sharded and working I am
of course looking for what to break next.
Currently each of 3 nodes is on a 6 disk (WD Red 1TB) raidz6 (zil on
mirrored ssd), which I am thinking is more protection than I may
On Mon, Aug 8, 2016 at 4:06 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9/08/2016 6:39 AM, David Gossage wrote:
>
>> Currently each of 3 nodes is on a 6 disk (WD Red 1TB) raidz6 (zil on
>> mirrored ssd), which I am thinking is more protection than I may need with
>> a 3 way
On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote:
>
>
> On 08/08/2016 01:39 PM, David Gossage wrote:
>
> So now that I have my cluster on 3.7.14 and sharded and working I am of
> course looking for what to break next.
>
> Currently each of 3 nodes is on a 6 disk (WD Red
On Mon, Aug 8, 2016 at 4:37 PM, David Gossage
wrote:
> On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote:
>
>>
>>
>> On 08/08/2016 01:39 PM, David Gossage wrote:
>>
>> So now that I have my cluster on 3.7.14 and sharded and working I am of
>>
On 9/08/2016 6:39 AM, David Gossage wrote:
Currently each of 3 nodes is on a 6 disk (WD Red 1TB) raidz6 (zil on
mirrored ssd), which I am thinking is more protection than I may need
with a 3 way replica. I was going to one by one change them to
basically raid10 letting it heal in between.
On 08/08/2016 02:37 PM, David Gossage wrote:
On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote:
On 08/08/2016 01:39 PM, David Gossage wrote:
So now that I have my cluster on 3.7.14 and sharded and working I
am of course
On 08/08/2016 02:56 PM, David Gossage wrote:
On Mon, Aug 8, 2016 at 4:37 PM, David Gossage wrote:
On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote:
On
On Mon, Aug 8, 2016 at 5:24 PM, Joe Julian wrote:
>
>
> On 08/08/2016 02:56 PM, David Gossage wrote:
>
> On Mon, Aug 8, 2016 at 4:37 PM, David Gossage wrote:
>
>> On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian wrote:
>>
On Mon, Aug 8, 2016 at 3:18 PM, Niels de Vos wrote:
> On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam wrote:
> >
> > On 08/07/2016 04:17 PM, ML mail wrote:
> > > Hi,
> > >
> > > Can someone explain me what is
So now that I have my cluster on 3.7.14 and sharded and working I am of
course looking for what to break next.
Currently each of 3 nodes is on a 6 disk (WD Red 1TB) raidz6 (zil on
mirrored ssd), which I am thinking is more protection than I may need with
a 3 way replica. I was going to one by
Hi,
I’m doing some benchmarking against our trial GlusterFS setup (distributed
replicated, 20 bricks configured as 10 pairs). I’m running 3.6.9 currently.
Our benchmarking load involves a large number of concurrent readers that
continuously pick random files/offsets to read. No writes are ever
On 9/08/2016 12:01 AM, Dan Lambright wrote:
By default, files are marked for promotion on the first I/O. So if you did a backup and
touched every file, they would all be marked for promotion. You ought to be able to
change that, so only files touched some number of times (2X?) are promoted.
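If it helps, a rough sketch of that tuning as I understand it; the option names
below come from the tiering feature and should be verified against your release
before relying on them:

    # record per-file access counters so the thresholds below take effect
    gluster volume set <VOLNAME> features.record-counters on

    # only promote files read or written at least twice within a cycle
    gluster volume set <VOLNAME> cluster.read-freq-threshold 2
    gluster volume set <VOLNAME> cluster.write-freq-threshold 2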
Hi all,
Currently, all the configuration related to NFS-Ganesha is stored individually
on each node belonging to the ganesha cluster at /etc/ganesha. The following
files are present in it:
- ganesha.conf - configuration file for the ganesha process
- ganesha-ha.conf - configuration file for high availability
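For context, a minimal example of the sort of EXPORT block ganesha.conf carries
for a Gluster-backed export; the volume name, path and Export_Id here are made
up, and the block your setup generates may differ:

    EXPORT {
        Export_Id = 1;
        Path = "/testvol";
        Pseudo = "/testvol";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "testvol";
        }
    }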
Hi,
I want to store 135TB of data, and the main requirement is data redundancy.
I can sacrifice some performance, but the data must not be lost.
Can you help me with a design?
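One possible starting point, sketched under the assumption of six servers and a
dispersed (erasure coded) volume that tolerates the loss of any two bricks;
hostnames and brick paths are placeholders:

    # 4+2 dispersed volume: ~135TB usable needs roughly 200TB of raw bricks
    gluster volume create bigvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/brick1/bigvol
    gluster volume start bigvol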
--
Regards,
SHAM P. ARSIWALA.
RHC{E-A-I}
M.: 9099099855
On 08/08/2016 08:59 PM, Atin Mukherjee wrote:
On Mon, Aug 8, 2016 at 3:18 PM, Niels de Vos wrote:
On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam
wrote:
>
> On 08/07/2016 04:17 PM, ML
On Mon, Aug 8, 2016 at 9:15 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9 August 2016 at 07:23, Joe Julian wrote:
> > Just kill (-15) the brick process. That'll close the TCP connections and the
> > clients will just go right on functioning off the
On 9 August 2016 at 07:23, Joe Julian wrote:
> Just kill (-15) the brick process. That'll close the TCP connections and the
> clients will just go right on functioning off the remaining replica. When
> you format and recreate your filesystem, it'll be missing the volume-id
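A rough outline of that procedure, assuming a replica volume whose brick is
being rebuilt in place; the volume name and brick path are placeholders, and
the xattr value must be your volume's actual UUID:

    # note the brick PID, then stop that brick process
    gluster volume status myvol
    kill -15 <brick-pid>

    # ...rebuild the filesystem, then restore the volume-id xattr on the brick root
    VOLID=$(gluster volume info myvol | awk '/^Volume ID/ {print $3}')
    setfattr -n trusted.glusterfs.volume-id \
        -v 0x$(echo "$VOLID" | tr -d '-') /bricks/brick1/myvol

    # bring the brick back and let the remaining replicas heal onto it
    gluster volume start myvol force
    gluster volume heal myvol full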
Well, I'm not entirely sure it is a setup-related issue. If you have the
steps to recreate the issue, along with the relevant
information about volume configuration, logs, core, version, etc., then it
would be good to track this issue through a bug report.
-Krutika
On Mon, Aug 8, 2016 at 8:56 PM,
Hi Sergei,
We will need some more information to proceed:
- Were the chown operations performed on the root directory?
- Was there any add-brick operation performed before you saw the ownership
getting reverted?
In the meantime, we will try to reproduce the issue and get back to you.
Regards,
- Original Message -
> From: "Danny Lee"
> To: gluster-users@gluster.org
> Sent: Thursday, August 4, 2016 1:25:43 AM
> Subject: [Gluster-users] Reconnecting Client to Brick
>
> Hi,
>
> I have a 3-node replicated cluster using the native glusterfs mount, and
>
Is reading the good copies to reconstruct the bad chunk a parallel or
sequential operation?
Should I revert my 16+4 ec cluster to 8+2 because it takes nearly 7
days to heal just one broken 8TB disk which has only 800GB of data?
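For what it's worth, a couple of commands that help gauge how far along a heal
actually is (volume name is a placeholder):

    # list entries still pending heal on each brick
    gluster volume heal myvol info

    # just the per-brick pending counts, which is lighter on a large volume
    gluster volume heal myvol statistics heal-count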
On Mon, Aug 8, 2016 at 1:56 PM, Ashish Pandey
Serkan,
Heal for two different files could be parallel, but not for a single file and
its different chunks.
I think you are referring to your previous mail in which you had to remove one
complete disk.
In this case heal starts automatically, but it scans through each and every
file/dir to decide if
On Mon, Aug 08, 2016 at 02:37:43PM +0530, Saravanakumar Arumugam wrote:
>
> On 08/07/2016 04:17 PM, ML mail wrote:
> > Hi,
> >
> > Can someone explain me what is the op-version everybody is speaking about
> > on the mailing list?
> op-version is a way to determine which gluster version you are
Hi,
Assume we have 8+2 and 16+4 EC configurations and we just replaced a
broken disk in each configuration which has 100GB of data. In which
case does heal complete faster? Does heal speed have anything to do with
the EC configuration?
Assume we are in a 16+4 EC configuration. When heal starts it reads
Hi,
Considering all the other factors the same for both configurations, yes, the
smaller configuration would take less time: there are fewer good copies to
read, so reads will take less time.
I think multi-threaded SHD is the only enhancement in the near future.
Ashish
- Original Message -
From: "Serkan Çoban"
On 08/07/2016 04:17 PM, ML mail wrote:
Hi,
Can someone explain to me what the op-version that everybody keeps mentioning
on the mailing list actually is?
The op-version is a way to determine which Gluster version you are running.
This is quite useful during the upgrade process, to check for backward
Hi,
Sorry I haven't had the chance to look into this issue last week. Do you
mind raising a bug in upstream with all
the relevant information and I'll take a look sometime this week?
-Krutika
On Fri, Aug 5, 2016 at 11:58 AM, Mahdi Adnan
wrote:
> Hi,
>
> Yes, i got
- Original Message -
> From: "Mohammed Rafi K C"
> To: "Lindsay Mathieson" , "gluster-users"
>
> Sent: Monday, August 8, 2016 1:03:19 AM
> Subject: Re: [Gluster-users] Tiered Volumes and Backups
>
>
>
> On