I was planning on updating RHV as well.
Just in the order of tasks needed to move some physical servers around, I wasn't
sure if I would get to that first.
Maybe I should just go ahead and do that now, though.
have experience with this recently?
Gluster
it?
CentOS 7 servers, Gluster 3.8.3 (yeah, I need to update again)
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Doesn't the init.d/systemd script kill gluster automatically on
> reboot/shutdown?
>
Sounds less like an issue with how it's shut down and more like an issue with how
it's mounted, perhaps. My gluster fuse
noticed any bugs or issues using
teaming.
On Sat, Jun 17, 2017 at 2:59 PM, wk <wkm...@bneit.com> wrote:
> I'm looking at tuning up a new site and the bonding issue came up
>
> A google
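For what it's worth, that kind of search usually turns up a recommendation for
balance-alb (mode 6) or LACP (mode 4); a hedged sketch of a CentOS 7 style bond,
with interface names and addressing as assumptions:

# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.11
PREFIX=24

# /etc/sysconfig/network-scripts/ifcfg-eth0 (one per slave NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes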
On Thu, Feb 23, 2017 at 3:57 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> How much RAM is suggested for gluster with ZFS (no dedup) ?
>
> Are 16GB enough with an SSD L2ARC?
>
> 8gb for the arc, 8gb for gluster and OS.
>
That's what my systems run, though each of my nodes
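For reference, capping the ARC at the 8 GB mentioned above is normally done with a
module parameter; a minimal sketch, assuming ZFS on Linux and an 8 GiB cap:

# /etc/modprobe.d/zfs.conf  (8 GiB = 8 * 1024^3 bytes)
options zfs zfs_arc_max=8589934592

# value currently in effect
cat /sys/module/zfs/parameters/zfs_arc_max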
On Wed, Feb 22, 2017 at 9:29 AM, Alessandro Briosi wrote:
> On 22/02/2017 13:54, Gandalf Corvotempesta wrote:
> > I don't think that would be possible because it is the client that writes to
> > all servers.
> > The replication is done by the client, not by the server
>
>
> I really
On Sat, Dec 24, 2016 at 5:58 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 24/12/2016 2:40 AM, Ivan Rossi wrote:
>
>> This is a RESILIENT system, in my book.
>>
>> Gluster people, despite the constant stream of problems and requests
>> for help that you see on the ML and IRC,
On Mon, Dec 5, 2016 at 4:53 AM, Momonth wrote:
> Hi All,
>
> I've just joined this list as I'm working on a project and looking for
> a persistent and shared storage for docker based infra. I'm entirely
> new to the GlusterFS project, however have been involved into "storage
>
On Thu, Nov 17, 2016 at 6:42 PM, Olivier Lambert
wrote:
> Okay, used the exact same config you provided, and adding an arbiter
> node (node3)
>
> After halting node2, VM continues to work after a small "lag"/freeze.
> I restarted node2 and it was back online: OK
>
>
On Mon, Nov 14, 2016 at 8:54 AM, Niels de Vos wrote:
> On Mon, Nov 14, 2016 at 04:50:44PM +0530, Pranith Kumar Karampuri wrote:
> > On Mon, Nov 14, 2016 at 4:38 PM, Gandalf Corvotempesta <
> > gandalf.corvotempe...@gmail.com> wrote:
> >
> > > 2016-11-14 11:50 GMT+01:00 Pranith
On Sun, Nov 13, 2016 at 6:35 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> As discussed recently, it is way too easy to make destructive changes
> to a volume, e.g. change shard size. This can corrupt the data with no
> warnings and it's all too easy to make a typo or access the wrong
On Sat, Nov 12, 2016 at 2:11 PM, Kevin Lemonnier
wrote:
> >
> > On the other hand at home, I tried to use GlusterFS for VM images in a
> > simple replica 2 setup with Pacemaker for HA. VMs were constantly
> > failing en masse even without making any changes. Very often the
On Sat, Nov 12, 2016 at 7:42 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 14:27, "Lindsay Mathieson"
> wrote:
> >
> > gluster volume reset *finger twitch*
> >
> >
> > And boom! volume gone.
> >
>
> There are too many
On Thu, Nov 10, 2016 at 1:24 PM, Alexandr Porunov <
alexandr.poru...@gmail.com> wrote:
> Hello,
>
> I am following the Quick Start Guide. I have installed epel-release-7.8 but I
> cannot install glusterfs. What shall I do?
>
> Here is the output:
> # yum install glusterfs-server
> Loaded plugins:
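For reference, the usual cause on CentOS 7 is that EPEL does not provide the server
packages; a hedged sketch of what typically resolves it, using the CentOS Storage SIG
repository:

yum install centos-release-gluster    # enables the Storage SIG glusterfs repository
yum install glusterfs-server
systemctl enable glusterd
systemctl start glusterd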
On Sun, Nov 6, 2016 at 3:24 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 06/11/2016 03:37, David Gossage wrote:
>
> The only thing you gain with raidz1 I think is maybe more usable space.
> Performance in general will not be as good, an
On Sat, Nov 5, 2016 at 6:20 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-11-05 12:06 GMT+01:00 Lindsay Mathieson >:
> > Yah, I get that. For me, willing to risk losing the entire gluster node
> and
> > having to resync it, I see the
On Tue, Nov 1, 2016 at 9:59 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Just an update - after resetting all the heal "optimisations" :) that I set,
> heals are in general much faster and back to normal. I've done several rolling
> upgrades with the servers since, rebooting each one
On Sun, Oct 30, 2016 at 8:24 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> I've added an ssd to a zpool to be used as zil and l2arc cache. (One
> partition for each service)
>
> Seems to be unused. I've tried with the arcstat.sh script and I don't see any
> increasing usage
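One way to confirm whether the log and cache devices are being used at all is the
per-vdev view from zpool iostat; a hedged example, pool name hypothetical:

zpool iostat -v tank 5    # log and cache devices show up as their own sections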
Off the top of my head, I think zfs get all will get all settings for all
pools/vols etc.
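For example (a hedged sketch, pool/dataset names hypothetical):

zfs get all tank/brick1               # every property on one dataset
zfs get xattr,compression tank        # only the properties you care about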
On Sun, Oct 30, 2016 at 8:46 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 30 Oct 2
On Sun, Oct 30, 2016 at 7:46 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Ok, I've killed the glusterfsd process, done some ZFS maintenance tasks
> and now I would like to re-add the same brick to the volume.
>
> How? How can I restart the glusterfsd brick process?
>
If
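For reference, the way this is usually handled is to have glusterd respawn any brick
processes that are not running; a hedged sketch, volume name hypothetical:

gluster volume start VOLNAME force    # respawns bricks that are not currently running
gluster volume status VOLNAME         # confirm the brick process is back online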
Sorry to resurrect an old email, but was any resolution reached for this, or a
cause found? I just see this as a potential task I may need to also run
through some day, and if there are pitfalls to watch for it would be good to
know.
On Wed, Oct 12, 2016 at 6:39 AM, Kevin Lemonnier
wrote:
> >
> > imho GlusterFS is not the best place for MySQL.
> >
> > Maybe you want to consider using Galera Cluster with Maxscale.
> >
>
> No, the point isn't to replicate MySQL. It's to have highly available
> VMs that
On Wed, Oct 5, 2016 at 2:14 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-10-05 20:50 GMT+02:00 David Gossage <dgoss...@carouselchecks.com>:
> > The mirrored slog will be useful. Depending on what you put on the pool
> > l2arc may not get
On Wed, Oct 5, 2016 at 1:29 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 30 Sep 2016 1:46 PM, "Gandalf Corvotempesta" <
> gandalf.corvotempe...@gmail.com> wrote:
>
> > I was thinking about creating one or more raidz2 to use as bricks, with
> 2 ssd. One small
On Mon, Sep 26, 2016 at 5:18 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-09-21 1:04 GMT+02:00 Pranith Kumar Karampuri :
> > I am not sure about this one, I was waiting for someone else to respond
> on
> > this point.
>
> No one ?
> And what
On Tue, Sep 6, 2016 at 11:41 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
>
>
> On Tue, Sep 6, 2016 at 7:27 PM, David Gossage <dgoss...@carouselchecks.com
> > wrote:
>
>> Going to top post with solution Krutika Dhananjay came up with. His
>> steps we
progress and drink beer while you wait and hope
nothing blows up
watch -n 10 gluster v heal VOLNAME statistics heal-count
10) unmount gluster network mount from server
umount /mnt-brick-test
11) Praise the developers for their efforts
On Tue, Sep 6, 2016 at 7:29 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> Anybody?
>
>
While I have not tested it yet, the two email chains I have seen from users
trying it suggest that the performance has been worse rather than showing any
increased benefit. Perhaps those using it
On Thu, Sep 1, 2016 at 12:09 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
>
>
> On Wed, Aug 31, 2016 at 8:13 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> Just as a test I did not shut down the one VM on the cluster as finding a
>>
=0x5889332e50ba441e8fa5cce3ae6f3a15
user.some-name=0x736f6d652d76616c7565
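Dumps like the one above typically come from getfattr in hex mode, run directly
against the file on the brick; a hedged example, path hypothetical:

getfattr -d -m . -e hex /gluster2/brick1/1/path/to/file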
On Wed, Aug 31, 2016 at 9:43 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> Just as a test I did not shut down the one VM on th
the values returned
by getfattr before, although I do know heal-count was returning 0 at the
time
Assuming I need to shut down the VMs and put the volume in maintenance from oVirt
to prevent any I/O: does that need to stay in place for the whole heal, or can I
re-activate at some point to bring the VMs back up?
brick and
it only occurs on the down node then no shard healing occurs.
>
> -Krutika
>
> On Wed, Aug 31, 2016 at 4:43 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> Same issue: brought glusterd up on the problem node, heal count still stuck at
>> 6330.
On Tue, Aug 30, 2016 at 10:02 AM, David Gossage <dgoss...@carouselchecks.com
> wrote:
> updated test server to 3.8.3
>
> Brick1: 192.168.71.10:/gluster2/brick1/1
> Brick2: 192.168.71.11:/gluster2/brick2/1
> Brick3: 192.168.71.12:/gluster2/brick3/1
> Options Reconfigured:
On Tue, Aug 30, 2016 at 9:29 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Tue, Aug 30, 2016 at 8:52 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Aug 30, 2016 at 8:01 AM, Krutika Dhananjay <kdhan...@redhat.com>
>> wrote:
On Tue, Aug 30, 2016 at 8:52 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Tue, Aug 30, 2016 at 8:01 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>>
>>
>> On Tue, Aug 30, 2016 at 6:20 PM, Krutika Dhananjay <kdhan...@redhat.com>
On Tue, Aug 30, 2016 at 8:52 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Tue, Aug 30, 2016 at 8:01 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>>
>>
>> On Tue, Aug 30, 2016 at 6:20 PM, Krutika Dhananjay <kdhan...@redhat.com>
On Tue, Aug 30, 2016 at 7:50 AM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
>
>
> On Tue, Aug 30, 2016 at 6:07 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay <kdhan...@redhat.com>
as
well. After hours at work it had added a total of 33 shards to be healed.
I sent those logs yesterday as well, though not the glustershd log.
Does the replace-brick command copy files in the same manner? For these purposes I
am contemplating just skipping the heal route.
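For reference, the replace-brick form that remains in current releases is the commit
force variant, after which self-heal populates the new brick; a hedged sketch, host
and brick paths hypothetical:

gluster volume replace-brick VOLNAME server1:/old/brick server1:/new/brick commit force
gluster volume heal VOLNAME full      # then monitor with heal-count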
> -Krutika
>
> On Tue, Aug
> I tried the same test and shd crashed with SIGABRT (well, that's because I
> compiled from src with -DDEBUG).
> In any case, this error would prevent full heal from proceeding further.
> I'm debugging the crash now. Will let you know when I have the RC.
>
> -Krutika
On Mon, Aug 29, 2016 at 7:01 AM, Anuradha Talur <ata...@redhat.com> wrote:
>
>
> ----- Original Message -----
> > From: "David Gossage" <dgoss...@carouselchecks.com>
> > To: "Anuradha Talur" <ata...@redhat.com>
> > Cc: "g
On Mon, Aug 29, 2016 at 7:14 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Mon, Aug 29, 2016 at 5:25 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Could you attach both client and brick logs? Meanwhile I will try these
>> steps out on my
On Mon, Aug 29, 2016 at 7:14 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Mon, Aug 29, 2016 at 5:25 AM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> Could you attach both client and brick logs? Meanwhile I will try these
>> steps out on my
On Mon, Aug 29, 2016 at 7:01 AM, Anuradha Talur <ata...@redhat.com> wrote:
>
>
> ----- Original Message -----
> > From: "David Gossage" <dgoss...@carouselchecks.com>
> > To: "Anuradha Talur" <ata...@redhat.com>
> > Cc: "g
On Mon, Aug 29, 2016 at 5:39 AM, Anuradha Talur <ata...@redhat.com> wrote:
> Response inline.
>
> ----- Original Message -----
> > From: "Krutika Dhananjay" <kdhan...@redhat.com>
> > To: "David Gossage" <dgoss...@carouselchecks.com&
Does the node I start it from determine which directory gets
crawled to determine heals?
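For reference, the commands in question (volume name hypothetical):

gluster volume heal VOLNAME full      # triggers the full crawl
gluster volume heal VOLNAME info      # lists entries currently queued for heal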
On Sat, Aug 27, 2016 at 11:01 PM, David Gossage <dgoss...@carouselchecks.com
> wrote:
> On Sat, Aug 27, 2016 at 9:55 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Sat, Aug 27, 2016 at 5:35 PM, David Gossage <
>> dgoss...@carouselchecks.com
On Sat, Aug 27, 2016 at 9:55 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Sat, Aug 27, 2016 at 5:35 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Aug 27, 2016 4:37 PM, "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
On Sat, Aug 27, 2016 at 5:35 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Aug 27, 2016 4:37 PM, "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
> wrote:
> >
> > On 28/08/2016 6:07 AM, David Gossage wrote:
> >>
> >> 7 h
On Aug 27, 2016 4:37 PM, "Lindsay Mathieson" <lindsay.mathie...@gmail.com>
wrote:
>
> On 28/08/2016 6:07 AM, David Gossage wrote:
>>
>> 7 hours after starting full heal shards still haven't started healing,
and count from heal statistics heal-count has onl
On Sat, Aug 27, 2016 at 9:58 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Fri, Aug 26, 2016 at 8:40 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> I was in process of redoing underlying disk layout for a brick.
>> triggered full h
On Fri, Aug 26, 2016 at 8:40 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> I was in the process of redoing the underlying disk layout for a brick. Triggered a
> full heal, then realized I had skipped the step of applying zfs set xattr=sa,
> which is kind of important running
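For reference, the setting mentioned, with a hypothetical dataset name; note it only
applies to newly written xattrs, which is why skipping it before a heal matters:

zfs set xattr=sa tank/brick1
zfs get xattr tank/brick1    # verify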
to cancel a heal begun by gluster volume heal GLUSTER1
full? If not, it won't be the end of the world, just a waste of time to wait and then
have to redo it after writing out a TB of data.
On Tue, Aug 23, 2016 at 10:12 AM, David Gossage <dgoss...@carouselchecks.com
> wrote:
>
> On Tue, Aug 23, 2016 at 6:42 AM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 23/08/2016 6:26 PM, Niels de Vos wrote:
>>
>>> Packages fo
On Tue, Aug 23, 2016 at 6:42 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 23/08/2016 6:26 PM, Niels de Vos wrote:
>
>> Packages for 3.8.3 are available on download.gluster.org for several
>> distributions. Repositories that are managed by the distributions should
>> see the
On Wed, Aug 17, 2016 at 6:21 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> Just as another data point - today I took one server down to add a network
> card. Heal Count got up to around 1500 while I was doing that.
>
> Once the server was back up, it started healing right away, in
errors making a user worry hehe
Is there a known bug filed against that, or should I maybe create one to see
if we can get that sent at an informational level instead?
> -Krutika
>
> On Tue, Aug 16, 2016 at 1:02 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
On Sat, Aug 13, 2016 at 6:37 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> Here is the reply again just in case. I got a quarantine message so I'm not sure if
> the first went through or will anytime soon. Brick logs weren't large so I'll
> just include them as text files this time
>
age node. Same stale file handle issues.
I'll probably put this node in maintenance later and reboot it. Other than
that I may re-clone those 2 recent VMs. Maybe the images just got corrupted,
though why it would only fail on one node of 3 if the image was bad, I'm not sure.
Dan
>
> On Thu, Aug 11, 2
On Mon, Aug 8, 2016 at 5:24 PM, Joe Julian <j...@julianfamily.org> wrote:
>
>
> On 08/08/2016 02:56 PM, David Gossage wrote:
>
> On Mon, Aug 8, 2016 at 4:37 PM, David Gossage <dgoss...@carouselchecks.com
> > wrote:
>
>> On Mon, Aug 8, 2016 at 4:23 PM, J
On Tue, Aug 9, 2016 at 2:18 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9 August 2016 at 12:23, David Gossage <dgoss...@carouselchecks.com>
> wrote:
> > Since my dev is now on 3.8 and has granular enabled I'm feeling too lazy
> to
> > roll
On Mon, Aug 8, 2016 at 9:15 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9 August 2016 at 07:23, Joe Julian wrote:
> > Just kill (-15) the brick process. That'll close the TCP connections and
> the
> > clients will just go right on functioning off the
On Mon, Aug 8, 2016 at 5:24 PM, Joe Julian <j...@julianfamily.org> wrote:
>
>
> On 08/08/2016 02:56 PM, David Gossage wrote:
>
> On Mon, Aug 8, 2016 at 4:37 PM, David Gossage <dgoss...@carouselchecks.com
> > wrote:
>
>> On Mon, Aug 8, 2016 at 4:23 PM, J
On Mon, Aug 8, 2016 at 4:37 PM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian <j...@julianfamily.org> wrote:
>
>>
>>
>> On 08/08/2016 01:39 PM, David Gossage wrote:
>>
>> So now that I have m
On Mon, Aug 8, 2016 at 4:23 PM, Joe Julian <j...@julianfamily.org> wrote:
>
>
> On 08/08/2016 01:39 PM, David Gossage wrote:
>
> So now that I have my cluster on 3.7.14 and sharded and working I am of
> course looking for what to break next.
>
> Currently each of
On Mon, Aug 8, 2016 at 4:06 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 9/08/2016 6:39 AM, David Gossage wrote:
>
>> Currently each of 3 nodes is on a 6 disk (WD Red 1TB) raidz6 (zil on
>> mirrored ssd), which I am thinking is more protection than I
will
doing a full heal after reboot, or restarting glusterd, take care of
everything if I recreate the expected brick path first?
Are the improvements in 3.8 for sharding significant enough that I should first
look at updating to 3.8.2 when it's released in a few days?
On Sat, Aug 6, 2016 at 9:58 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 7/08/2016 12:56 PM, David Gossage wrote:
>
>> My Dev server with one VM doing almost nothing handled update to 3.8.1
>> and op-version update just fine.
>>
>
>
On Wed, Aug 3, 2016 at 7:57 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 3/08/2016 10:45 PM, Lindsay Mathieson wrote:
>
> On 3/08/2016 2:26 PM, Krutika Dhananjay wrote:
>
> Once I deleted old content from test volume it mounted to oVirt via
> storage add when previously it
On Tue, Aug 2, 2016 at 6:01 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 2/08/2016 5:07 PM, Kaushal M wrote:
>
>> GlusterFS-3.7.14 has been released. This is a regular minor release.
>> The release-notes are available at
>>
>>
this.
Once I have the VM installed and running I will test for a few days and make
sure it doesn't have any freeze or locking issues, then will roll this out
to the working cluster.
On Wed, Jul 27, 2016 at 8:37 AM, David
On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay <kdhan...@redhat.com>
wrote:
> Yes please, could you file a bug against glusterfs for this issue?
>
https://bugzilla.redhat.com/show_bug.cgi?id=1360785
>
>
> -Krutika
>
> On Wed, Jul 27, 2016 at 1:39
Has a bug report been filed for this issue, or should I create one with
the logs and results provided so far?
On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <dgoss...@carouselchecks.com
> wrote:
>
>
On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur wrote:
> On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen
> wrote:
> > Here is a quick way to test this:
> > GlusterFS 3.7.13 volume with default settings with brick on ZFS dataset.
> gluster-test1 is
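One common way people reproduced the 3.7.13-on-ZFS failure was a direct-I/O write
against the fuse mount; a hedged sketch (mount path hypothetical), not necessarily
the exact test referenced above:

dd if=/dev/zero of=/mnt/gluster-test1/ddtest bs=1M count=100 oflag=direct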
On Fri, Jul 22, 2016 at 8:12 AM, Vijay Bellur wrote:
> 2016-07-22 1:54 GMT-04:00 Frank Rothenstein <
> f.rothenst...@bodden-kliniken.de>:
> > The point is that even if all other backend storage filesystems do
> correctly
> > until 3.7.11 there was no error on ZFS. Something
On Fri, Jul 22, 2016 at 8:23 AM, Samuli Heinonen <samp...@neutraali.net>
wrote:
>
> > On 21 Jul 2016, at 20:48, David Gossage <dgoss...@carouselchecks.com>
> wrote:
> >
> > Wonder if this may be related at all
> >
> > * #1347553: O_DIRECT supp
2016-07-22 2:32 GMT-05:00 Frank Rothenstein <
f.rothenst...@bodden-kliniken.de>:
> I can't tell myself, I'm using the ovirt-4.0-centos-gluster37 repo
> (from ovirt-release40). I have a second gluster-cluster as storage, I
> didn't dare to upgrade, as it simply works...not as an ovirt/vm storage.
On Thu, Jul 21, 2016 at 2:48 PM, Kaleb KEITHLEY wrote:
> On 07/21/2016 02:38 PM, Samuli Heinonen wrote:
> > Hi all,
> >
> > I’m running oVirt 3.6 and Gluster 3.7 with ZFS backend.
> > ...
> > Afaik ZFS on Linux doesn't support AIO. Have there been any changes to
> GlusterFS
On Thu, Jul 21, 2016 at 12:48 PM, David Gossage <dgoss...@carouselchecks.com
> wrote:
> On Thu, Jul 21, 2016 at 9:58 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos <nde...@redhat.com> wrote:
>>
On Thu, Jul 21, 2016 at 9:58 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos <nde...@redhat.com> wrote:
>
>> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
>> > Did a quick test thi
On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos wrote:
> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
> > Did a quick test this morning - 3.7.13 is now working with libgfapi -
> yay!
> >
> >
> > However I do have to enable write-back or write-through
On Thu, Jul 21, 2016 at 9:33 AM, Kaleb KEITHLEY <kkeit...@redhat.com> wrote:
> On 07/21/2016 10:19 AM, David Gossage wrote:
> > Have there been any release notes or bug reports indicating that the removal
> > of AIO support was intentional?
>
> Build logs of 3.7.13 on Fedor
it. I'd probably end up
creating a 2nd gluster volume and have to migrate disk by disk.
Just trying to figure out what the roadmap for this is and what resolution I
should ultimately be heading for.
On Sat, Jul 9
On Thu, Jul 21, 2016 at 5:49 AM, Ravishankar N <ravishan...@redhat.com>
wrote:
> On 07/21/2016 03:51 PM, David Gossage wrote:
>
> In the case of rolling upgrades across a cluster if I have my storage and
> vm hosts on separate machines what order would I want to update?
On Thu, Jul 21, 2016 at 5:19 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
>
>
> On Thu, Jul 21, 2016 at 4:54 AM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 21/07/2016 7:10 PM, David Gossage wrote:
>>
>>> Biggest
On Thu, Jul 21, 2016 at 5:21 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> In the case of rolling upgrades across a cluster if I have my storage and
> vm hosts on separate machines what order would I want to update?
>
> Clients first or storage nodes first?
>
>
In the case of rolling upgrades across a cluster, if I have my storage and
VM hosts on separate machines, what order would I want to update in?
Clients first or storage nodes first?
On Thu, Jul 21, 2016 at 4:54 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 21/07/2016 7:10 PM, David Gossage wrote:
>
>> The biggest issue will be changing the cache mode of the disks, which oVirt
>> doesn't expose easily, that you had mentioned would now be needed
On Wed, Jul 20, 2016 at 11:07 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On 21/07/2016 1:32 PM, David Gossage wrote:
>
> Guess next adventure is to troubleshoot if I still have issues with
> 3.7.12/13
>
>
> 3.7.12 has serious bugs in libgfapi, I'd sk
.
split-brain entries disappeared for .shard directory once that was done as
well.
Guess next adventure is to troubleshoot if I still have issues with
3.7.12/13
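A hedged way to confirm nothing is still flagged (volume name hypothetical):

gluster volume heal VOLNAME info split-brain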
On Wed, Jul 20, 2016 at 8:49 AM, David Gossage <dg
node
or do I need to kill old heal process somehow? Or at this point would that
be too late?
On Wed, Jul 20, 2016 at 4:53 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> If I read this correctly
to have the correct data. Or will this still involve powering
off the VM and resetting the trusted.afr.GLUSTER1-client-# values? That method
leaves me puzzled, unless I am looking at old docs, as I would expect to see
3 lines of trusted.afr.GLUSTER1-client-#
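For what it's worth, 3.7 also exposes a CLI for picking the good copy, which can avoid
editing the trusted.afr xattrs by hand; a hedged sketch, brick and file paths
hypothetical:

gluster volume heal GLUSTER1 split-brain source-brick server1:/gluster1/brick1 /path/within/volume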
=0x00010001
trusted.afr.GLUSTER1-client-1=0x00010001
trusted.gfid=0xbc2b3e908efe4d75acda9c4cc9cf2800
On Wed, Jul 20, 2016 at 4:13 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
Number of entries in split-brain: 0
On Wed, Jul 20, 2016 at 1:29 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
> So I have enabled sharding on 3.7.11, moved all VM images off and on and
>
let it keep going?
the VMs themselves still see storage, the engine would likely keep pausing them
thinking there is a storage issue.
>
> On Thu, Jul 14, 2016 at 2:40 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>> On Thu, Jul 14, 2016 at 4:07 AM, David Gossage <
>> dgo
On Thu, Jul 14, 2016 at 4:07 AM, David Gossage <dgoss...@carouselchecks.com>
wrote:
>
>
> On Thu, Jul 14, 2016 at 3:33 AM, Manikandan Selvaganesh <
> mselv...@redhat.com> wrote:
>
>> Hi David,
>>
>> Which version are you using? Though the error seems
ne you find matching.
>
https://bugzilla.redhat.com/show_bug.cgi?id=1325810
This is one I found while searching a portion of my error message.
> On Thu, Jul 14, 2016 at 1:51 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>>
>>
>>
>> On Wed,
On Wed, Jul 13, 2016 at 11:02 PM, Atin Mukherjee <amukh...@redhat.com>
wrote:
>
>
> On Thu, Jul 14, 2016 at 8:02 AM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> M, David Gossage <dgoss...@carouselchecks.com> wrote:
>>
>>>