Hi Jonathan
today's hardware is so powerful that IMHO it might make sense to split a CEC
into more "pieces". For example, the IBM S822L has up to 2x12 cores and 9 PCIe
Gen3 slots (4 x16 lanes & 5 x8 lanes).
I think that such a server is a bit too big to be just a single NSD
server.
Note that I use for
Thx a lot Perry
I never thought about outbound or inbound cluster access.
Wish you all the best
Hajo
--
Unix Systems Engineer
MetaModul GmbH
+49 177 4393994
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/l
I have been missing such a framework for ages.
I had my own simple one, developed on the GPFS callbacks, which allowed me to have a
centralized (HA) cron, up to Oracle also highly available, and HA NFS on AIX.
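A callback-based HA cron along these lines could be sketched with `mmaddcallback`; the callback name and the script path are placeholders, and `clusterManagerTakeover` is one of the documented GPFS callback events:

```shell
# Register a callback that fires when a node takes over as cluster manager,
# so exactly one node in the cluster runs the "HA cron" at any time.
# /usr/local/bin/ha-cron.sh is a placeholder for your own handover script.
mmaddcallback haCron \
    --command /usr/local/bin/ha-cron.sh \
    --event clusterManagerTakeover \
    --parms "%clusterManager"
```

The script itself would then start the cron jobs locally and stop them everywhere else.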
Hajo
Universal Inventor
Why not use a GPFS user extended attribute for that?
In a certain way I see GPFS as a database. ^_^
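As a sketch of that idea (the file path and attribute name are placeholders): GPFS files accept user-defined extended attributes through the standard Linux xattr tools, and the GPFS policy language can match on them with `XATTR()`:

```shell
# Tag a file with a custom attribute (requires the attr package).
setfattr -n user.tier -v "archive" /gpfs/fs1/data/report.dat
# Read it back
getfattr -n user.tier /gpfs/fs1/data/report.dat

# A GPFS policy rule can then select on the attribute, e.g.:
#   RULE 'toTape' MIGRATE ... WHERE XATTR('user.tier') = 'archive'
```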
Hajo
Sent from Samsung Mobile
Original message From: Jez Tucker
Date: 2016.09.13 11:10 (GMT+01:00)
To: gpfsug-discuss@spectrumscale.org Subject: Re:
[gpfsug-discus
Have you thought about the use of a submount?
Meaning you link your fileset to the new directory and mount that dir on the
old dir, or you do not unlink at all but submount the old directory at the new
directory.
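Both variants could be sketched like this (filesystem, fileset, and directory names are placeholders; `mmlinkfileset -J` sets the junction path):

```shell
# Variant 1: link the fileset at the new path, then bind-mount it
# back over the old path so existing clients keep working.
mmlinkfileset fs1 myfileset -J /gpfs/fs1/newdir
mount --bind /gpfs/fs1/newdir /gpfs/fs1/olddir

# Variant 2: leave the fileset linked where it is and bind-mount the
# old directory at the new location instead.
mount --bind /gpfs/fs1/olddir /gpfs/fs1/newdir
```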
Sent from Samsung Mobile
Original message From: "Sobey, Ric
Hi Robert,
I referred to your posting, I assume ^_^
Note the following is from what I know; since I did not have any chance to work
with GPFS in the last 2 years, my knowledge may be outdated.
The current GPFS tiering options depend on DMAPI, which I am not a big fan of,
since it had in the past
Hi
I saw somebody in the GPFS forum mentioning IBM Spectrum Scale Transparent Cloud
Tiering:
http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html
Hence the question: does somebody know how that - the tiering into cloud
services - is technically done, and what limitations exist?
...
last week, you are in for one wild ride. I would also point out that the
flapping did not stop until we resolved connectivity for *all* of the clients,
so remember that even having one single half-connected client is poisonous to
your stability.
...
In this context I think GPFS should provide
I think you are talking about something like the Novell copy inhibit attribute:
https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html.
With the current GPFS it is IMHO not possible. It might become possible in case
lightweight callbacks get introduced, together with self-defined user att
Hi Scott,
>
> It is probably not what you are looking for, but I did implement a two node
> HA solution using callbacks for SNMP.
...
I knew about it, and even wrote my own generic HA API for GPFS based on the very old
GPFS callbacks (preUnmount).
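A minimal registration for that kind of failover hook might look like this (the callback name and script path are placeholders; `preUnmount` is a documented GPFS callback event):

```shell
# Before the filesystem is unmounted on this node, hand services over
# to a surviving node (stop-services.sh is a placeholder for your script).
mmaddcallback haFailover \
    --command /usr/local/bin/stop-services.sh \
    --event preUnmount \
    --parms "%eventNode %fsName"
```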
I am trying to make IBM aware that they have
@IBM
GPFS and HA
GPFS now has the so-called protocol nodes, which provide an HA environment for
NFS and Samba.
I assume it is based on CTDB, since CTDB already supports a few
protocols.
What I would like to see is a generic HA interface using GPFS. It could be based
on t
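For reference, the protocol-node (CES) state mentioned above can be inspected with the `mmces` command; a quick look, assuming a cluster with CES configured:

```shell
# List the protocol services enabled on the CES nodes
mmces service list -a
# Show the floating CES IP addresses and which node currently hosts them
mmces address list
```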