>> ... on genuine RHEL? ... It seems like I need to install
>> centos-release-gluster9-1.0-1.el8.noarch.rpm,
>> centos-release-storage-common-2-2.el8.noarch.rpm, and maybe
>> centos-release?
>
> Péter Károly JUHÁSZ wrote:
> > I don't know what is ...
>> On Mon, Jul 18, 2022 at 6:34 PM Thomas Cameron <
>> thomas.came...@camerontech.com> wrote:
>>
>>> On 7/18/22 09:18, Péter Károly JUHÁSZ wrote:
>>> > The best would be officially pre-built rpms for RHEL.
>>>
>>> Where are there official Red Hat Gl...
Hi,

I don't know the correct way, but here is what I did on my RHEL 7 (I assume
8 and 9 are more or less the same):

* Added this repo:
  http://mirror.centos.org/centos/7/storage/x86_64/gluster-9/
* Then: yum install glusterfs-server

It works for me.

Regards,
Stone
Thomas Cameron wrote on ...
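For reference, that repo can be wired up by hand with a plain yum repo file.
A minimal sketch, assuming an illustrative file name, and with GPG checking
left out for brevity (enable it with the Storage SIG key in real use):

    # /etc/yum.repos.d/centos-gluster9.repo (hypothetical file name)
    [centos-gluster9]
    name=CentOS Storage SIG - GlusterFS 9
    baseurl=http://mirror.centos.org/centos/7/storage/x86_64/gluster-9/
    enabled=1
    gpgcheck=0

    # Then install the server packages:
    yum install glusterfs-server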
Hi Karl,

I think you should check out Syncthing too.

Karl Kleinpaste wrote on Wed, Aug 24, 2022 at 20:21:
> Apologies for the previous incomplete message. It seems an unintended
> Alt-Ret told Thunderbird to send prematurely. So this time I'm composing
> outside Tbird so that it doesn't get that ...
... the glusterfsd processes on all nodes to a value of -10.
> The problem just occurred, so it seems nicing the processes didn't help.
>
> On 18.08.2022 09:54, Péter Károly JUHÁSZ wrote:
> > What if you renice the gluster processes to some negative value?
> >
> > ... wrote on Thu, Aug 18, 2022 at 09:45:
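For context, "renicing" the daemons usually comes down to a one-liner like
the sketch below; the -10 value and the glusterfsd process name come from
the thread, the rest is assumption (run as root):

    # Raise the CPU scheduling priority of all running glusterfsd brick
    # processes (more negative nice = higher priority):
    renice -n -10 -p $(pgrep glusterfsd)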
You can also add the mount option backupvolfile-server to let the client
know about the other nodes.

Matthew J Black wrote on Wed, Aug 31, 2022 at 17:21:

> Ah, it all now falls into place: I was unaware that the client receives
> that file upon initial contact with the cluster, and thus has that
> information at ...
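On the client side that option goes into the mount command (or the fstab
options field). A minimal sketch, with hypothetical host names, volume name,
and mount point:

    # Fetch the volfile from server2 if server1 is unreachable at mount time:
    mount -t glusterfs -o backupvolfile-server=server2.example.com \
        server1.example.com:/myvol /mnt/gluster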
This always helped me in this kind of situation:
http://docs.gluster.org/Troubleshooting/resolving-splitbrain/

Joe Julian wrote on Fri, Aug 12, 2022 at 18:33:

> It could work, but I never imagined, back then, that *directories* could
> get into split-brain.
>
> The most likely reason for that split is ...
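The linked guide centres on the gluster CLI; the usual starting point looks
roughly like this (VOLNAME and the file path are placeholders, and
latest-mtime is just one of several source-selection policies):

    # List files/directories currently in split-brain on the volume:
    gluster volume heal VOLNAME info split-brain

    # Resolve one entry by keeping the most recently modified copy:
    gluster volume heal VOLNAME split-brain latest-mtime /path/in/volume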
What if you renice the gluster processes to some negative value?

... wrote on Thu, Aug 18, 2022 at 09:45:

> Hi folks,
>
> I am running multiple GlusterFS servers in multiple datacenters. Every
> datacenter is basically the same setup: 3x storage nodes, 3x kvm
> hypervisors (oVirt), and 2x HPE switches which are ...
... errors are highly problematic.
>
> Kind Regards,
> Jaco Kroon
>
> On 2022/12/14 14:04, Péter Károly JUHÁSZ wrote:
>
> When we used glusterfs for websites, we copied the web dir from gluster to
> local disk when the frontends booted, then served it from there.
>
> Ja...
When we used glusterfs for websites, we copied the web dir from gluster to
local disk when the frontends booted, then served it from there.

Jaco Kroon wrote on Wed, Dec 14, 2022 at 12:49:

> Hi All,
>
> We've got a glusterfs cluster that houses some php web sites.
>
> This is generally considered a bad idea and we can ...
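A copy-to-local setup like that usually reduces to one sync step at boot or
deploy time. A minimal sketch, with invented paths for the gluster mount and
the web docroot:

    # Snapshot the site out of gluster onto fast local disk, then serve
    # from the local copy (paths are hypothetical):
    rsync -a --delete /mnt/gluster/webroot/ /var/www/webroot/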