Tom Wijsman posted on Tue, 25 Jun 2013 01:18:07 +0200 as excerpted:

> On Mon, 24 Jun 2013 15:27:19 +0000 (UTC)
> Duncan <1i5t5.dun...@cox.net> wrote:
> 
>> Throwing hardware at the problem is usable now.
> 
> If you have the money; yes, that's an option.
> 
> Though I think a lot of people see Linux as something you don't need to
> throw a lot of money at; it should run on low end systems, and that's
> kind of the type of users we shouldn't just neglect going forward.

Well, let's be honest.  Anyone building packages on Gentoo isn't likely 
to be doing it on a truly low-end system.  For general Linux, yes, 
agreed, but that's what Puppy Linux and the like are for.  True, there 
are the masochistic types that build natively on embedded or decade-plus-
old (and mid-level or lower even then!) systems, but most folks with that 
sort of system either have a reasonable build server to build on, or use 
a pre-built binary distro.  And the masochistic types... well, if it 
takes an hour to get the prompt in an emerge --ask and another day or two 
to actually complete, that's simply more masochism for them to revel in. =:^P

Tho you /do/ have a point.

OTOH, some of us used to do MS or Apple or whatever and split our money 
between hardware and software.  Now we pay less for the software, but 
that doesn't mean we /spend/ significantly less overall; now it's 
mostly/all hardware.

I've often wondered why the hardware folks aren't all over Linux, given 
the extra money it can free up for hardware, as it certainly /does/ here.

>> Truth is, I used to run a plain make -j (no number and no -l at all) on
>> my kernel builds, just to watch the system stress and then so elegantly
>> recover.  It's an amazing thing to watch, this Linux kernel thing and
>> how it deals with cpu oversaturation.  =:^)
> 
> If you have the memory to pull it off, which involves money again.

What was interesting was doing it without the (real) memory -- letting it 
go into swap and just queue up hundreds and hundreds of jobs as make kept 
generating more and more of them, faster than they could even fully 
initialize, particularly since they were getting packed off into swap 
before they even had that chance.

And then, with 500-600 jobs or more (custom kernel build, not an all-yes/
all-mod config, or it'd likely have been 1200...) stacked up and gigs 
into swap, I'd watch the system finally start to slowly unwind the 
tangle.  Obviously the system wasn't usable for anything else during the 
worst of it, but it still rather fascinates me that the kernel scheduling 
and code quality in general are such that it can take that and 
successfully unwind it all, without crashing or whatever.  And the kernel 
build is one of the few projects that's /that/ incredibly parallel, 
without requiring /too/ much memory per individual job, so it's one of 
the few where you can even try it in the first place.
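
For anyone who wants to reproduce (or avoid) the effect, the whole 
difference is the job-count argument to make; a quick sketch, assuming a 
typical kernel tree and a 6-core box like mine:

    # Unbounded: make spawns a job for every ready target, hundreds deep.
    make -j

    # Bounded: at most 6 parallel jobs, and hold off starting new ones
    # whenever the load average is already above 6.
    make -j6 -l6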

Actually, that's probably the flip side of my getting more conservative.  
The reason I /can/ get more conservative now is that I've enough cores 
and memory that it's actually reasonably practical to do so.  When you're 
always dumping cache and/or swapping anyway, it's no big deal to do so a 
bit more.  When you have a system big enough to avoid that while still 
getting reasonably large chunks of real work done, and you're no longer 
used to the compromise of /having/ to dump cache, suddenly you're a lot 
more sensitive to doing so at all!

>> Needlessly oversaturating the CPU (and RAM) only slows things down and
>> forces cache dump and swappage.
> 
> The trick is to set it a bit before the point of oversaturating; low
> enough so most packages don't oversaturate.  It could be put more
> precisely for every package, but that time is better spent elsewhere.

Indeed. =:^)
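
On this box that works out to something like the following in 
/etc/portage/make.conf -- the numbers are just illustrative, the usual 
rule of thumb being about one job per core with a matching load limit:

    # Cap the parallel compile jobs per package, and back off when the
    # load average climbs past the core count.
    MAKEOPTS="-j6 -l6"

    # Optionally do the same for parallel emerges themselves.
    EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=6"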

> Not everyone is a sysadmin with a server; I'm just a student running a
> laptop bought some years ago, and I'm kind of the type that doesn't
> replace it while it still works fine otherwise. Maybe when I graduate...

Actually, I use "sysadmin" in the literal sense: the person taking the 
practical responsibility for deciding what goes on a system, and when/if/
what to upgrade (or not), with particular emphasis on RESPONSIBILITY, 
both for security and for keeping the system running and getting it back 
running again when it breaks.  Nothing in that says it has to be 
commercial, or part of some huge farm of systems.  For me, the person 
taking responsibility (or failing to take it) for updating that third-
generation hand-me-down castoff system is as much a sysadmin for that 
system as the guy/gal with 100 or 1000 systems (s)he's responsible for.

My perspective has always been that if all those folks running virus-
infested junk out there actually took the sysadmin responsibility for the 
systems they're running seriously, the virus/malware issue would cease to 
be an issue at all.

Meanwhile, I'll admit my last system was rather better than average when 
I first set it up (a dual-socket original three-digit Opteron -- that 
whole spending-on-hardware-what-I-used-to-spend-on-software thing -- my 
first 64-bit machine, and my first and likely last real dual-CPU... er, 
dual-socket box).  In fact, compared to its peers at the time it may well 
be the best system I'll ever own, but that thing lasted me 8+ years.  My 
goal was a decade, but I didn't make it; the caps on the mobo were 
bulging and finally popping by the time I got rid of it.  (For the last 
month or so I ran it, last summer here in Phoenix, it would only run if I 
kept it cold enough, basically 15C or lower, so I was dressing up in a 
winter jacket with long underwear and a knit hat, with the AC running to 
keep it cold enough to run the computer inside, while outside it was 40C+!)

But OTOH, that was originally a $400 mobo alone, and for quite some time 
the whole box was probably worth 2-3 grand total as I kept upgrading bits 
and pieces of it as I had the money.  FTR, though, I /am/ quite happy 
with the 6-core Bulldozer-1 that replaced it, when I finally really had 
no other choice.  And the replacement was *MUCH* cheaper!

But anyway, yeah, I do know a bit about running old hardware myself, and 
how to make those dollars strreeettcchh. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

