Re: [Gluster-users] Brick offline after upgrade

2021-03-22 Thread David Cunningham
Hello,

We ended up restoring the backup since it was easy on a test system.

Does anyone know whether you need to upgrade through multiple major versions
sequentially, or whether you can jump straight to the latest? For example, to
go from GlusterFS 5 to 8, can you upgrade to 8 directly, or must you step
through 6 and 7 in between?

Thanks in advance.
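
For reference, here are the steps I understand to matter, sketched with the
stock gluster CLI (the op-version value 80000 for release 8 is my assumption
from the usual numbering scheme; verify it against the release notes):

    # On each node, with glusterd stopped after installing the new packages,
    # regenerate the volfiles -- the step an in-place upgrade skips:
    glusterd --xlator-option '*.upgrade=on' -N

    # Once every node runs the new binaries, check and bump the cluster
    # op-version so version-gated features become available:
    gluster volume get all cluster.op-version
    gluster volume set all cluster.op-version 80000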


On Sat, 20 Mar 2021 at 09:58, David Cunningham wrote:

> Hi Strahil,
>
> It's as follows. Do you see anything unusual? Thanks.
>
> root@caes8:~# ls -al /var/lib/glusterd/vols/gvol0/
> total 52
> drwxr-xr-x 3 root root 4096 Mar 18 17:06 .
> drwxr-xr-x 3 root root 4096 Jul 17  2018 ..
> drwxr-xr-x 2 root root 4096 Mar 18 17:06 bricks
> -rw------- 1 root root   16 Mar 18 17:06 cksum
> -rw------- 1 root root 3848 Mar 18 16:52 gvol0.caes8.nodirectwritedata-gluster-gvol0.vol
> -rw------- 1 root root 2270 Feb 14  2020 gvol0.gfproxyd.vol
> -rw------- 1 root root 1715 Mar 18 16:52 gvol0.tcp-fuse.vol
> -rw------- 1 root root  729 Mar 18 17:06 info
> -rw------- 1 root root    0 Feb 14  2020 marker.tstamp
> -rw------- 1 root root  168 Mar 18 17:06 node_state.info
> -rw------- 1 root root   18 Mar 18 17:06 quota.cksum
> -rw------- 1 root root    0 Jul 17  2018 quota.conf
> -rw------- 1 root root   13 Mar 18 17:06 snapd.info
> -rw------- 1 root root 1829 Mar 18 16:52 trusted-gvol0.tcp-fuse.vol
> -rw------- 1 root root  896 Feb 14  2020 trusted-gvol0.tcp-gfproxy-fuse.vol
>
>
> On Fri, 19 Mar 2021 at 17:51, Strahil Nikolov wrote:
>
>> [2021-03-18 23:52:52.084754] E [MSGID: 101019] [xlator.c:715:xlator_init] 
>> 0-gvol0-server: Initialization of volume 'gvol0-server' failed, review your 
>> volfile again
>>
>> What is the content of:
>>
>> /var/lib/glusterd/vols/gvol0 ?
>>
>>
>> Best Regards,
>>
>> Strahil Nikolov
>>
>> On Fri, Mar 19, 2021 at 3:02, David Cunningham wrote:
>> Hello,
>>
>> We have a single node/brick GlusterFS test system which unfortunately had
>> GlusterFS upgraded from version 5 to 6 while the GlusterFS processes were
>> still running. I know this is not what the "Generic Upgrade procedure"
>> recommends.
>>
>> Following a restart the brick is not online, and we can't see any error
>> message explaining exactly why. Would anyone have an idea of where to look?
>>
>> Since the logs from the time of the upgrade and reboot are a bit lengthy
>> I've attached them in a text file.
>>
>> Thank you in advance for any advice!
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>> 
>>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>


-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782






Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-22 Thread Erik Jacobson
> > The stuff I work on doesn't use containers much (unlike a different
> > system also at HPE).
> By "pods" I meant "glusterd instance", a server hosting a collection of
> bricks.

Oh ok. The term is overloaded in my world.

> > I don't have a recipe, they've just always been beefy enough for
> > gluster. Sorry I don't have a more scientific answer.
> Seems that 64GB of RAM is not enough for a pod with 26 glusterfsd
> instances and no other services (except sshd for management). What do
> you mean by "beefy enough"? 128GB of RAM, or 1TB?

We are currently using replica-3 but may also support replica-5 in the
future.

So if you had 24 leaders like HLRS, there would be 8 replica-3 subvolumes at
the bottom layer, distributed across (a distributed-replicated volume).
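
A sketch of that shape with the plain CLI (hostnames and paths here are
invented for illustration; our real tooling drives this differently):

    # 24 bricks listed in threes: 8 replica-3 subvolumes, distributed.
    gluster volume create gvol replica 3 \
        leader{01..24}:/data/glusterfs/gvol/brick
    gluster volume start gvol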

So we would have 24 leader nodes; each leader would have a disk serving
4 bricks (one of which is simply a lock FS for CTDB, one is sharded,
one is for logs, and one is heavily optimized for non-object expanded
tree NFS). The term "disk" is loose.
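
For the sharded volume, the stock options would look roughly like this (the
volume name and block size are illustrative, not our production values):

    gluster volume set gvol-images features.shard on
    gluster volume set gvol-images features.shard-block-size 64MB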

Each SU Leader (or gluster server) serving the 4 volumes in the 8x3
configuration differs in CPU type, memory, and storage depending on
order timing and customer preferences (things always move forward).

On an SU Leader, we typically do 2 RAID10 volumes with a RAID
controller including cache. However, we have moved to RAID1 in some cases with
better disks. Leaders store a lot of non-gluster stuff on "root" and
then gluster has a dedicated disk/LUN. We have been trying to improve
our helper tools to completely wheel out a bad leader (say it melted into
the floor) and replace it. Once we have that solid, and because our
monitoring data on the "root" drive is already redundant, we plan to
move newer servers to two NVMe drives without RAID: one for gluster and
one for the OS. If a leader melts into the floor, we have a procedure to
discover a new node for that, install the base OS including
gluster/CTDB/etc, and then run a tool to re-integrate it into the
cluster as an SU Leader node again and do the healing. Separately,
monitoring data outside of gluster will heal.
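
The gluster-side core of that replacement, assuming the rebuilt leader comes
back with the same hostname and brick path (names invented here), would be
roughly:

    # Re-seed the brick on the reinstalled node and trigger healing.
    # (The documented procedure pairs this with a reset-brick 'start'
    # step taken before the node is rebuilt.)
    gluster volume reset-brick gvol leader07:/data/glusterfs/gvol/brick \
        leader07:/data/glusterfs/gvol/brick commit force
    gluster volume heal gvol full
    gluster volume heal gvol info summary   # watch pending heals drain to zero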

PS: I will note that I have a mini-SU-leader cluster on my desktop
(qemu/libvirt) for development. It is a 1x3 set of SU Leaders, one head node,
and one compute node. I make an adjustment to reduce the gluster cache to fit
in the available memory. It works fine: not real fast, but good enough for
development.


Specs of a leader node at a customer site:
 * 256G RAM
 * Storage: 
   - MR9361-8i controller
   - 7681GB root LUN (RAID1)
   - 15.4 TB for gluster bricks (RAID10)
   - 6 SATA SSD MZ7LH7T6HMLA-5
 * AMD EPYC 7702 64-Core Processor
   - CPU(s):  128
   - On-line CPU(s) list: 0-127
   - Thread(s) per core:  2
   - Core(s) per socket:  64
   - Socket(s):   1
   - NUMA node(s):4
 * Management Ethernet
   - Gluster and cluster management co-mingled
   - 2x40G (but 2x10G would be fine)






Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread David Spisla
Thanks for the clarification.


David Spisla
Software Engineer
david.spi...@iternity.com
+49 761 59034852
iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg
Germany
See our privacy policy if you want us to delete your personal data.
iTernity GmbH. Managing Director: Ralf Steinemann.
Registered at the District Court Freiburg: HRB-Nr. 701332.
USt.Id DE242664311. [v01.023]
From: Kaleb Keithley
Sent: Monday, 22 March 2021 15:52
To: David Spisla
Cc: gluster-users@gluster.org List; Gluster Devel
Subject: Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

I was wrong:  nfs-ganesha-2.8's fsal_gluster calls glfs_ftruncate() and 
glfs_fsync(), which appeared in glusterfs-6.0.

Sorry for any confusion.

--

Kaleb




On Mon, Mar 22, 2021 at 10:07 AM Kaleb Keithley wrote:

GFAPI_6.0 is a reference to a set of versioned symbols in gluster's libgfapi.

As the version implies, you need at least glusterfs-6.0 to run 
nfs-ganesha-2.8.x.

Although it's not clear — without further investigation — why the rpm has 
derived that dependency. I'm not seeing that the gluster FSAL in ganesha-2.8.x 
calls any of the GFAPI_6.0 apis. Or any of the later GFAPI_6.x apis.

It seems to me like nfs-ganesha-2.8.x could be compiled with glusterfs-5 and 
would work fine.

--

Kaleb

On Mon, Mar 22, 2021 at 8:15 AM David Spisla wrote:
Dear Gluster Community and Devels,
at the moment we are using Ganesha 2.7.6 with Gluster v5.11.

Now we want to update Ganesha from 2.7.6 to 2.8.4. I just tried to update
Ganesha on a 2-node SLES15SP1 cluster with the above-mentioned versions. I got
the packages from here:
https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/

But I got the following dependency error:
fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm 
nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm 
nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
Loading repository data...
Reading installed packages...
Resolving package dependencies...

Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by 
nfs-ganesha-gluster-2.8.4-5.2.x86_64
 Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
 Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some of its 
dependencies

Choose from above solutions by number or cancel [1/2/c/d/?] (c): c

Does anybody know which Gluster version GFAPI_6.0 refers to?
Is it possible at all to run Ganesha 2.8.4 with Gluster 5.11?
Regards
David Spisla






Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread Kaleb Keithley
I was wrong:  nfs-ganesha-2.8's fsal_gluster calls glfs_ftruncate() and
glfs_fsync(), which appeared in glusterfs-6.0.

Sorry for any confusion.

--

Kaleb




On Mon, Mar 22, 2021 at 10:07 AM Kaleb Keithley  wrote:

>
> GFAPI_6.0 is a reference to a set of versioned symbols in
> gluster's libgfapi.
>
> As the version implies, you need at least glusterfs-6.0 to run
> nfs-ganesha-2.8.x.
>
> Although it's not clear — without further investigation — why the rpm has
> derived that dependency. I'm not seeing that the gluster FSAL in
> ganesha-2.8.x calls any of the GFAPI_6.0 apis. Or any of the later
> GFAPI_6.x apis.
>
> It seems to me like nfs-ganesha-2.8.x could be compiled with glusterfs-5
> and would work fine.
>
> --
>
> Kaleb
>
> On Mon, Mar 22, 2021 at 8:15 AM David Spisla  wrote:
>
>> Dear Gluster Community and Devels,
>> at the moment we are using Ganesha 2.7.6 with Gluster v5.11.
>>
>> Now we want to update Ganesha from 2.7.6 to 2.8.4. I just tried to
>> update Ganesha on a 2-node SLES15SP1 cluster with the above-mentioned
>> versions. I got the packages from here:
>>
>> https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/
>>
>> But I got the following dependency error:
>>
>>> fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm
>>> nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm
>>> nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
>>> Loading repository data...
>>> Reading installed packages...
>>> Resolving package dependencies...
>>>
>>> Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by
>>> nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>>  Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>>  Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some
>>> of its dependencies
>>>
>>> Choose from above solutions by number or cancel [1/2/c/d/?] (c): c
>>>
>>
>> Does anybody know which Gluster version GFAPI_6.0 refers to?
>> Is it possible at all to run Ganesha 2.8.4 with Gluster 5.11?
>> Regards
>> David Spisla
>> 
>>
>>
>>
>>
>






Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread David Spisla
Thanks for the answer. It's a pity that Ganesha 2.8.4 doesn't run out of the
box with Gluster 5.11.


David Spisla
Software Engineer
david.spi...@iternity.com
+49 761 59034852
iTernity GmbH
Heinrich-von-Stephan-Str. 21
79100 Freiburg
Germany
See our privacy policy if you want us to delete your personal data.
iTernity GmbH. Managing Director: Ralf Steinemann.
Registered at the District Court Freiburg: HRB-Nr. 701332.
USt.Id DE242664311. [v01.023]
From: Kaleb Keithley
Sent: Monday, 22 March 2021 15:08
To: David Spisla
Cc: gluster-users@gluster.org List; Gluster Devel
Subject: Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???


GFAPI_6.0 is a reference to a set of versioned symbols in gluster's libgfapi.

As the version implies, you need at least glusterfs-6.0 to run 
nfs-ganesha-2.8.x.

Although it's not clear — without further investigation — why the rpm has 
derived that dependency. I'm not seeing that the gluster FSAL in ganesha-2.8.x 
calls any of the GFAPI_6.0 apis. Or any of the later GFAPI_6.x apis.

It seems to me like nfs-ganesha-2.8.x could be compiled with glusterfs-5 and 
would work fine.

--

Kaleb

On Mon, Mar 22, 2021 at 8:15 AM David Spisla wrote:
Dear Gluster Community and Devels,
at the moment we are using Ganesha 2.7.6 with Gluster v5.11.

Now we want to update Ganesha from 2.7.6 to 2.8.4. I just tried to update
Ganesha on a 2-node SLES15SP1 cluster with the above-mentioned versions. I got
the packages from here:
https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/

But I got the following dependency error:
fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm 
nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm 
nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
Loading repository data...
Reading installed packages...
Resolving package dependencies...

Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by 
nfs-ganesha-gluster-2.8.4-5.2.x86_64
 Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
 Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some of its 
dependencies

Choose from above solutions by number or cancel [1/2/c/d/?] (c): c

Does anybody know which Gluster version GFAPI_6.0 refers to?
Is it possible at all to run Ganesha 2.8.4 with Gluster 5.11?
Regards
David Spisla






Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread Kaleb Keithley
GFAPI_6.0 is a reference to a set of versioned symbols in
gluster's libgfapi.

As the version implies, you need at least glusterfs-6.0 to run
nfs-ganesha-2.8.x.

Although it's not clear — without further investigation — why the rpm has
derived that dependency. I'm not seeing that the gluster FSAL in
ganesha-2.8.x calls any of the GFAPI_6.0 apis. Or any of the later
GFAPI_6.x apis.
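
For what it's worth, both sides of the dependency can be inspected with
standard tooling; a quick sketch (the FSAL library path is an assumption and
distro-dependent):

    # What the rpm declares it needs:
    rpm -qp --requires nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm | grep GFAPI

    # Which versioned symbols the gluster FSAL actually references:
    objdump -T /usr/lib64/ganesha/libfsalgluster.so | grep GFAPI_6

    # Which symbol versions an installed libgfapi provides:
    objdump -T /usr/lib64/libgfapi.so.0 | grep -o 'GFAPI_[0-9.]*' | sort -u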

It seems to me like nfs-ganesha-2.8.x could be compiled with glusterfs-5
and would work fine.

--

Kaleb

On Mon, Mar 22, 2021 at 8:15 AM David Spisla  wrote:

> Dear Gluster Community and Devels,
> at the moment we are using Ganesha 2.7.6 with Gluster v5.11.
>
> Now we want to update Ganesha from 2.7.6 to 2.8.4. I just tried to update
> Ganesha on a 2-node SLES15SP1 cluster with the above-mentioned versions. I
> got the packages from here:
>
> https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/
>
> But I got the following dependency error:
>
>> fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm
>> nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm
>> nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
>> Loading repository data...
>> Reading installed packages...
>> Resolving package dependencies...
>>
>> Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by
>> nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>  Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>  Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some
>> of its dependencies
>>
>> Choose from above solutions by number or cancel [1/2/c/d/?] (c): c
>>
>
> Does anybody know which Gluster version GFAPI_6.0 refers to?
> Is it possible at all to run Ganesha 2.8.4 with Gluster 5.11?
> Regards
> David Spisla
> 
>
>
>
>






Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-22 Thread Gionatan Danti

On 2021-03-19 16:03, Erik Jacobson wrote:

> A while back I was asked to make a blog or something similar to discuss
> the use cases of the team I work on (HPCM cluster management) at HPE.
>
> If you are not interested in reading about what I'm up to, just delete
> this and move on.
>
> I really don't have a public blogging mechanism so I'll just describe
> what we're up to here. Some of this was posted in some form in the
> past.
>
> Since this contains the raw materials, I could make a wiki-ized version
> if there were a public place to put it.


Very interesting post, thank you so much for sharing!

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8






Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-22 Thread Diego Zuccato
On 22/03/21 14:45, Erik Jacobson wrote:

> The stuff I work on doesn't use containers much (unlike a different
> system also at HPE).
By "pods" I meant "glusterd instance", a server hosting a collection of
bricks.

> I don't have a recipe, they've just always been beefy enough for
> gluster. Sorry I don't have a more scientific answer.
Seems that 64GB of RAM is not enough for a pod with 26 glusterfsd
instances and no other services (except sshd for management). What do
you mean by "beefy enough"? 128GB of RAM, or 1TB?

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786






Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-22 Thread Erik Jacobson
The stuff I work on doesn't use containers much (unlike a different
system also at HPE).

Leaders are over-sized, but the sizing is largely driven by all the
other stuff leaders do, not just gluster. That said, my gluster
settings for the expanded NFS tree method (as opposed to squashfs image
files on NFS) use heavy caching; I believe the max was 8G.
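
The knobs in question, sketched with stock option names (the volume name is
invented and the values are from memory, not a tuning recommendation):

    # read/io cache sized up for the expanded-tree NFS case:
    gluster volume set gvol performance.cache-size 8GB
    # metadata caching, which matters a lot for large tree walks:
    gluster volume set gvol performance.md-cache-timeout 600
    gluster volume set gvol network.inode-lru-limit 200000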

I don't have a recipe, they've just always been beefy enough for
gluster. Sorry I don't have a more scientific answer.

On Mon, Mar 22, 2021 at 02:24:17PM +0100, Diego Zuccato wrote:
> On 19/03/2021 16:03, Erik Jacobson wrote:
> 
> > A while back I was asked to make a blog or something similar to discuss
> > the use cases of the team I work on (HPCM cluster management) at HPE.
> Tks for the article.
> 
> I'm just missing one bit of information: how are you sizing CPU/RAM for pods?
> 
> -- 
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
> 
> 
> 
> 


Re: [Gluster-users] Gluster usage scenarios in HPC cluster management

2021-03-22 Thread Diego Zuccato

On 19/03/2021 16:03, Erik Jacobson wrote:

> A while back I was asked to make a blog or something similar to discuss
> the use cases of the team I work on (HPCM cluster management) at HPE.

Tks for the article.

I'm just missing one bit of information: how are you sizing CPU/RAM for pods?

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786






[Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread David Spisla
Dear Gluster Community and Devels,
at the moment we are using Ganesha 2.7.6 with Gluster v5.11.

Now we want to update Ganesha from 2.7.6 to 2.8.4. I just tried to update
Ganesha on a 2-node SLES15SP1 cluster with the above-mentioned versions. I
got the packages from here:
https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/

But I got the following dependency error:

> fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm
> nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm
> nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
> Loading repository data...
> Reading installed packages...
> Resolving package dependencies...
>
> Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by
> nfs-ganesha-gluster-2.8.4-5.2.x86_64
>  Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
>  Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some
> of its dependencies
>
> Choose from above solutions by number or cancel [1/2/c/d/?] (c): c
>

Does anybody know which Gluster version GFAPI_6.0 refers to?
Is it possible at all to run Ganesha 2.8.4 with Gluster 5.11?
Regards
David Spisla






Re: [Gluster-users] Volume not healing

2021-03-22 Thread Diego Zuccato
On 19/03/21 18:06, Strahil Nikolov wrote:

> Are you running it against the fuse mountpoint ?
Yup.

> You are not supposed to see 'no such file or directory' ... Maybe
> something more serious is going on.
Between that and the duplicated files, that's for sure. But I don't know
where to look to at least diagnose (if not fix) this :( As I said,
part of the issue is probably due to the multiple OOM failures and the
multiple attempts to remove a brick.
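
A couple of stock starting points for that diagnosis (nothing exotic, just
the standard heal queries against the volume):

    # Entries gluster itself considers in split-brain:
    gluster volume heal BigVol info split-brain
    # Per-brick list of entries still pending heal:
    gluster volume heal BigVol info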

I'm currently emptying the volume then I'll recreate it from scratch,
hoping for the best.

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786






Re: [Gluster-users] Volume not healing

2021-03-22 Thread Diego Zuccato
On 20/03/21 15:21, Zenon Panoussis wrote:

> When you have 0 files that need healing,
>   gluster volume heal BigVol granular-entry-heal enable
> I have tested with and without granular and, empirically,
> without any hard statistics, I find granular considerably
> faster.
Tks for the hint, but it's already set. I usually do it as soon as I
create the volume :) I don't understand why it's not the default :)
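
For anyone checking their own volumes, the setting and the heal state can be
queried like this (volume name as used earlier in the thread):

    gluster volume get BigVol cluster.granular-entry-heal
    gluster volume heal BigVol info summary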

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users