On Sat, 15 Dec 2018 14:08:55 +0100 Jonathan Aquilina <[email protected]>
said:

> Again to all mentioned below. What can I do to help?

right now i need to get osuosl and you hooked up with vpn access. you then
need to set that up, check it works and check ipmi works for you... and for me
to share the creds with you, you need to update your pgp key. :)
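
once the vpn is up, a quick sanity check from your end would be something
like this (host, user and key id below are placeholders, not the real
details):

  # can we reach the bmc? (lanplus = ipmi v2.0 over lan, via the vpn)
  ipmitool -I lanplus -H 10.0.0.2 -U youruser -P yourpass chassis power status

and for the pgp key, bumping the expiry and re-publishing is roughly:

  gpg --quick-set-expire YOURKEYID 1y
  gpg --keyserver hkps://keyserver.ubuntu.com --send-keys YOURKEYID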

> On 15/12/2018, 13:58, "Carsten Haitzler   (The Rasterman)"
> <[email protected]> wrote:
> 
>     On Thu, 13 Dec 2018 11:36:55 -0600 Stephen Houston
> <[email protected]> said: 
>     > A few months ago Mike set up  a test instance of gitlab on a server for
>     > us... After testing and the like, I set up a slowvote on Phab that
>     > covered 4 options:
>     > 
>     > Gitlab with new infra
>     > Gitlab with current infra
>     > Phab with new infra
>     > Phab with current infra
>     > 
>     > The overwhelming support was for new infra.  As in every single vote
>     > except for one wanted new infra.  The majority also wanted gitlab, but
>     > for the sake of this email, that is irrelevant.
>     > 
>     > The arguments against new infra (having it sponsored, cloud, etc.) keep
>     > being that if someone leaves the project, or the owners of the servers
>     > change, or policies change, etc., we might lose access.  To me this
>     > seems like an incredibly poor argument right now, especially
>     > considering that we have been experiencing this very thing, and even
>     > worse, with our own current infra.  The problems I have seen are that we:
>     
>     there was an offer of new corporate-sponsored infra. you have no idea how
>     close that infra came to just vanishing a few weeks ago. if we had been
>     using it, we'd be back to begging for someone else to provide hosting, or
>     everyone having to pony up and pay for it, having given up osuosl who
>     have served us well (when you know the details).
>     
>     > A. Failed at maintaining the physical aspects of our server.
>     
>     i personally ordered 2 replacement drives for our server recently(ish),
>     and i care. i had hoped people physically closer would handle things
>     first, but that didn't happen, so i did. there are other issues i'm still
>     sorting through, and i have been blocked on those by configuration
>     problems.
>     
>     > B. Keep having continued downtime over and over and over again.
>     
>     actually we don't. we had downtime for years because of a software
>     configuration issue involving qemu and logging. that would have happened
>     anywhere, on any infra, if we used vm's with the same setup.
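>     
>     (to give a flavour of how mundane that class of fix is: if, say, qemu
>     logs were filling the disk, a stock logrotate rule would have been
>     enough. a hypothetical sketch, not our actual config:
>     
>       # /etc/logrotate.d/qemu
>       /var/log/libvirt/qemu/*.log {
>           daily
>           rotate 7
>           compress
>           missingok
>           notifempty
>           copytruncate
>       }
>     )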
>     
>     > C. Can never get in touch with or get a response from our server admin
>     > in any kind of remotely adequate timeframe.
>     
>     admin, server+hosting and what runs on them are different matters.
> conflating them all is a bad idea.
>     
>     this is why our infra needs multiple hands from multiple people involved,
>     so there is always a backup. that is what i want to happen with e5 once
>     it's back up. it has to be easier to manage remotely for people who are
>     not looking at the system full-time and don't know it backwards. so the
>     system has to be as boring and "standard" as possible. it may be less
>     secure as a result, but that's better than not having many hands making
>     light work.
>     
>     > Insanity is often said to be defined as doing the same thing over and
>     > over again expecting a different result.  It is time to have an open
>     > mind to the needs/wants of this community and make a change.
>     
>     we just had a near-fatal miss above, AND osuosl have done a great job
>     over the years. more recently they have been locked out of helping much
>     (the ipmi thing, and they were never given working OS-level access to the
>     server, like an account with root).
>     
>     the current e.org is not even running inside osuosl. it's a temporary
>     server meant for "getting containers set up on our original server inside
>     osuosl". that has not happened after 1.5 years. i'm not interested in
>     going into blame, or what should have been done when or by whom. i do
>     want to say that i consider beber a friend; he has done a lot of work
>     over the years and invested his time, effort and more, and i totally
>     respect that.
>     
>     that temporary server currently runs somewhere only beber knows about,
>     and since it seems the host had physical problems, only he can do
>     anything about that. i couldn't ssh in and do anything - no access was
>     possible for me. this temporary machine is e6.
>     
>     the osuosl machine (e5) is up and working, but 1 drive isn't responding.
>     fixing this has been delayed because ipmi access has not worked for me
>     since the day this machine was set up, nor has it worked for osuosl -
>     they have been unable to access the console or do basic power up/down
>     etc. without physically walking into the server room.
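>     
>     for reference, the basics they need are just the following (bmc ip, user
>     and pass being placeholders for a working admin account):
>     
>       ipmitool -I lanplus -H <bmc-ip> -U <user> -P <pass> chassis power status
>       ipmitool -I lanplus -H <bmc-ip> -U <user> -P <pass> chassis power cycle
>       ipmitool -I lanplus -H <bmc-ip> -U <user> -P <pass> sol activate
>     
>     the last one being the serial-over-lan console. none of this worked
>     without a usable account.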
>     
>     i actually figured out why it doesn't work just today... it was a very
>     simple thing that never should/would have happened if there had been a
>     little less paranoia :). i spent a lot of time today getting ipmicfg
>     installed and working so i could update the user config. suffice to say
>     gentoo made this an absolute chore. i learned a lot more about how gentoo
>     works, and my opinion of it has gone downhill, not uphill, as a result. i
>     won't go into details as they are not relevant. what is relevant is that
>     gentoo is not the kind of OS to use as a server hosting environment IMHO.
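>     
>     for whoever touches this next: the plain ipmitool equivalent of what i
>     did with ipmicfg (run locally on the host; user id 2 is just an example
>     slot) is roughly:
>     
>       ipmitool user list 1                  # users on lan channel 1
>       ipmitool user set password 2 '<newpass>'
>       ipmitool user enable 2
>       ipmitool channel setaccess 1 2 callin=on ipmi=on link=on privilege=4
>     
>     (privilege=4 is ADMINISTRATOR.)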
>     
>     so e5 is currently not in service, but it runs as a machine - just with 1
>     drive of the raid array gone missing thanks to the sata
>     controller/connection. i have asked osuosl to try the other drive bays
>     (and other connectors etc.) with the replacement hdd i ordered, to find
>     one that hopefully works. when one (hopefully) does, i can bring up the
>     full array again.
>     at this point i think it's time to just re-install e5 with an
>     easier-to-maintain OS as host, and then set things up more simply within
>     it. i'll get to this as soon as i know the status of this hdd/sata thing,
>     which was pending the ipmi access i just fixed. we could just use 2 of
>     the 4 drives we have as a redundant raid (raid1) rather than raid10 and
>     we'd be fine.
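>     
>     a sketch of that with mdadm (device names are placeholders for whichever
>     2 drives turn out healthy after the bay/connector shuffle):
>     
>       cat /proc/mdstat                # current state of the arrays
>       mdadm --detail /dev/md0         # shows which member went missing
>       # the 2-drive redundant option is a plain mirror (raid1):
>       mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1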
>     
>     all of this is separate from, and not related to, phab or gitlab or what
>     things run on top. i am not interested in changing any of that at all
>     until the lower layers are put back into good shape. we need some kind of
>     vm-like thing to run, as things like e's screenshot service rely on it.
>     that means getting e5 back with a simpler, easier-to-manage setup so we
>     can get our existing services running there again. THEN we can consider
>     what to change.
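>     
>     (the "vm-like thing" doesn't have to be fancy - as one hypothetical
>     option, a minimal systemd-nspawn container would do; "shots" is just a
>     made-up name here, not a decision:
>     
>       debootstrap stable /var/lib/machines/shots
>       systemd-nspawn -D /var/lib/machines/shots --boot
>     )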
>     
>     we had e1 and e2 running for many years without issues. no vm's. no
>     fancy setups. just ubuntu that got a dist upgrade every now and again. it
>     worked well. time to go back to something more akin to that.
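>     
>     (the entire maintenance routine there was more or less just:
>     
>       apt-get update && apt-get dist-upgrade
>       do-release-upgrade        # every year or two
>     )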
>     
>     > Stephen
>     > 
>     
>     
>     -- 
>     ------------- Codito, ergo sum - "I code, therefore I am" --------------
>     Carsten Haitzler - [email protected]
>     
>     
>     
> 
> 
> 
> 


-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
Carsten Haitzler - [email protected]



_______________________________________________
enlightenment-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/enlightenment-devel
