On 2020-04-02 07:13, R C wrote:
> my 2 cents;
> I work in/with HPC, and run into that stuff all the time, and it is
> unavoidable.
What stuff? This thread would be much easier to read if related lines
were kept together.
> Since HPC nodes run diskless, and boot from the network, we simply
> build a complete new image (and keep the older ones around). We never
> even update an image, we simply build a new one from scratch, since
And you stop services while you do it, when SaltStack, Fabric, or mussh
could be used to update software without much downtime... Nonstop
rebuilds are not a solution to everything.
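For example, a minimal Fabric 2.x sketch of that kind of rolling,
in-place update (the host, service, and package names are all made up):

    # Update nodes one at a time, so the cluster as a whole stays up.
    from fabric import Connection

    NODES = ["node01.example.com", "node02.example.com"]

    for host in NODES:
        c = Connection(host)
        c.sudo("systemctl stop myservice")   # drain this one node
        c.sudo("yum -y update mypackage")    # update it in place
        c.sudo("systemctl start myservice")  # bring it back
        c.close()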
> an update on an existing system never works, and it is easier to
> rebuild a repo (at least on RHEL it is). Libraries etc. specific to
> applications either get relocated, or are merged with the OS ones on
> a virtual file system.
> Of course that is pretty much not doable, impractical, and
> unaffordable at home, so what I do: I use different drives with
That's been possible since the 1990s, when we could buy the first
removable drives for PCs. Some were IDE, others SCSI-based. Same idea as
on IBM, DEC, HP, and other mainframe computers.
> separate installs (I use these now very inexpensive CRU data trays to
> swap drives, and SSDs are really inexpensive now)
And indeed, let's not even get started on "rolling back" within an image.
> containers; that's one of those things that doesn't seem to work
> consistently yet. I know people (at work) who are working
Ever heard of Google? See more below.
> with it, developing in it, but I have not seen it work reliably/stably
> yet. It will definitely get there, but as of yet, at least
??? It's being used in so many places it would make your head spin.
https://www.sdxcentral.com/articles/news/t-mobile-to-slash-30m-in-cloud-costs-with-kubernetes/2020/04/
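And a rolling update is exactly what these orchestrators automate. A
hedged sketch, assuming the official kubernetes Python client (all
names here are made up); Kubernetes swaps pods gradually, so the
service stays up during the update:

    # Patch a Deployment's container image and let the default
    # RollingUpdate strategy roll it out pod by pod.
    from kubernetes import client, config

    config.load_kube_config()   # reads ~/.kube/config
    apps = client.AppsV1Api()

    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "myapp", "image": "registry.example.com/myapp:2.0"},
    ]}}}}

    apps.patch_namespaced_deployment(
        name="myapp", namespace="default", body=patch)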
> at scale, it is not working. (There are lots of issues that come down
> to latency/timing and RDMA issues, and we don't even use real-time
> kernels, etc. Most of what I do is based on RHEL and
> application-specific RHEL 'flavors'.)
> as I said, just my 2 cents,
> Ron
I study technologies while you watch sports...
--
Rafael