On Wed, Dec 21, 2011 at 12:33 AM, Rugxulo <rugx...@gmail.com> wrote:
> On Tue, Dec 20, 2011 at 4:12 PM, dmccunney <dennis.mccun...@gmail.com> wrote:
>> On Tue, Dec 20, 2011 at 4:28 PM, Rugxulo <rugx...@gmail.com> wrote:
>> From my point of view, the main factor is
>> that it grabs 16MB of RAM, and I haven't a lot to begin with.
> 16 MB isn't much by today's standards. And 256 MB *should* be plenty
> for web browsing, but sadly it's not.
It's not, and there are fairly good reasons why. Advances in web
technologies bring with them demands for more power. Things like
HTML5 and CSS3 can do more than the predecessors, and require more
power to properly interpret and render.
> I know requirements could be shrunk a ton, but nobody's done it. It's
> easy to blame GCC, but who knows. It's just silly, people don't get
> it, you shouldn't need that much RAM just for code. Data? Maybe, but
> code? No.
Which requirements? Linux? Firefox?
I think a current Linux kernel is about 30MB on disk, but actual RAM
usage will be greater, as drivers will load in the kernel's address
space.
> Last I checked, 15 bytes was the longest x86 instruction, and you
> certainly don't need thousands of those to get the job done. I dunno,
> I also blame the C libs.
It depends on how low level you feel like coding. And there are
inescapable tradeoffs between size and speed. The fastest code is
inline, where the next instruction is right after the one just
executed in the pipeline. But that means code duplication and large
executable size. The smallest code isolates everything into modules
and avoids code duplication, but adds the overhead of having to go
outside the pipeline to fetch routines. Function calls are simply
more expensive than inline code.
>> The machine came to me with XP SP2, and the donor said it was "slow
>> slow slow". XP SP2 in 256MB? No surprise. It wants 512MB minimum.
>> It shipped from Fujitsu with SP1, I think, and the SP2 was a user upgrade.
> It's fine until it has to swap, which of course is a lot these days
> because GCC, Firefox, etc. dudes (apparently) do not test with "old
> legacy" machines. It's really a shame. I caught GCC using 400 MB for
> optimizations. (The "compiler" wants that much??? Why???) Firefox
> ain't too great either, but "they say" it's getting better.
The compiler wants that much because it can do better optimization by
looking at larger amounts of code, which takes RAM to store the code
and the tables it builds while it's determining how to best optimize.
Firefox is better as of FF 9. FF's architecture is based on Gecko.
Gecko understands and renders HTML, CSS, and XUL, and interprets
JavaScript to perform actions when you select a menu choice or click
an icon. The browser is simply an instance of what Gecko is rendering.
An awful lot of FF's memory usage depends on what Gecko is currently
dealing with. FF itself is written in C++, but the JavaScript side
relies on garbage collection intended to identify and release unused
memory, which is performed automatically. One of the issues was that
some previous programming had the unintended side-effect of getting in
the way of GC, so FF could continually use more RAM just sitting there
doing nothing. It's been fixed in FF9, and further memory
optimizations are planned.
Google Chrome is lauded as using less RAM, and it does, as long as you
run it bare bones. Start adding extensions and watch what happens.
(And not only does Chrome handle each tab as a process - each
extension is too, so you can watch the effect as you add them.)
But meanwhile, all of the big boys assume you have a relatively
current machine with sufficient RAM. That doesn't bother me. The
stuff everyone is adding takes more RAM and CPU. And hardware is
cheap and getting cheaper. It simply isn't worth the time and trouble
to make sure things run on older, slower machines.
>> I went looking for things that would use less RAM, which is why 2K is
>> there. A clean re-install, even after upgrading to SP4, is slow but
>> usable for the limited stuff I need Windows for.
> Well, 2k should indeed be better than XP for you, but like I said, I
> ran XP vaguely comfortably in 128 MB. But I didn't "do" anything fancy
> (Eclipse, definitely not Firefox) or it would swap to death (virtual
> memory is dirt slow, something like 50x worse).
I think you have a different idea of vaguely comfortable than I do.
And I suspect you were running original XP, before service packs. I
suspect XP SP1 more or less ran on the Lifebook when it was released -
it's the only explanation I can see for the reasonably positive
reviews the box got. XP SP2 technically runs, but at a frozen snail's
pace.
>>> Firefox has its own issues, so a lot of that loading is its fault, not
>>> yours, though admittedly lack of RAM has a part.
Firefox is moving toward "There is only XUL", with a small FF loader,
and most code in libxul, but it still uses more libraries than Opera.
Lack of RAM affects runtime performance - there are perceptible delays
in performing some actions - but load time is mostly a matter of slow
HD with anemic transfer rate coupled with lots of seeks needed to link
against libs. One thing I use under Puppy is a third party static
build of SeaMonkey 1.1x to work around the library issues.
>> Opera loads about twice as fast. The difference seems to be that FF
>> is a small executable linking to a number of shared libs, while Opera
>> is a big executable linking to one small shared lib, so I get killed
>> by I/O. Opera can load mostly in one continuing read, while FF must
>> do lots of seeks. It's one reason I went ext4.
> Firefox has some weird algorithm for doing something weird that I
> don't understand. (XUL?) I know that 3.5 "fastest ever" had a huge
> regression in bootup time. I also know that Firefox has lots of issues
> due to complexity, and it's still suffering under its own weight these days.
If you have the hardware to run it, FF does just fine. Current IE,
Chrome, Opera and Safari are all roughly comparable in terms of what
they want to perform well.
>> Another issue with the box is that browsing is slow, even with a wired
>> connection using IE 6 in 2K. I'm using the Win drivers all around,
>> but the limitation seems to be on a machine level. To the extent that
>> I browse from it, I use Midori as a decent compromise on size vs. features.
> Well, I don't know what you mean here, crappy drivers? Unavoidable.
> You only get what you get. Yes, it's painful.
I use the Windows network drivers in Linux, too. But I think it's HW
related, and drivers aren't the problem. Oddly, download speeds
aren't bad, but web browsing can be painfully slow. When it's slow in
IE6 under 2K as well as Chrome, FF, Midori, or Opera under Linux, I
don't think it's a browser issue.
>> I successfully used DOSEmu to run a DOS app or two under Linux, like
>> VDE, but found going native and booting DOS to begin with on the box
>> in question a better bet.
> Depends on what you want to do. A lot of stuff runs, but some doesn't.
> If you can live without it, that's the path of least resistance. But
> of course that's the cheap, dumb solution, not a real fix for
> anything. Sadly, though, sometimes that's all we've got. And sometimes
> we're stuck because nobody cares to fix things, only divert attention.
I want to have a full DOS environment, and run all DOS programs, not a
partial DOS environment running *some* DOS programs.
Otherwise I wouldn't have bothered installing FreeDOS.
>> There was a chap on the Puppy forums detailing how he got a working
>> Puppy image that would run in *16MB* of RAM. He basically had to
>> strip out everything that *could* be stripped out, then build the
>> image on a bigger machine with more RAM and transplant the drive to
>> the target. He was using it as a dedicated controller doing one
>> thing, so it worked.
> I'm sure you can run "some" Linux in low RAM, just not "most", nor
> esp. 2.6.x or higher. I know console only can be lean(ish). But adding
> X11 and Firefox etc. is just asking for trouble.
The CLI environment from the MinimalCD performed pretty well. When I
selected XFCE4 in apt-get, it brought Xorg and the other needed bits
of X11 with it. Like I said, performance was bearable. Since I have
limited RAM, I want the minimum needed to support a GUI, and a
lightweight desktop environment. When I upgraded to the
Ubuntu-before-last release, it tried to give me their Unity interface
as part of the package, but bluntly informed me I couldn't run it
after the install was finished. It wanted better graphics hardware
than the box has. GNOME or KDE would theoretically run, but I don't
even try. I'd grow old and grey waiting for operations to complete.
>> The bugs didn't for the most part bite me, but the quirks were a bit much.
> I don't know, it had some horrible Firefox bugs and (at least one old
> old version) wouldn't even run DOSEMU, so I chucked it. It was only a
> liveUSB install anyways, so it was easy to upgrade.
The Firefox bugs depend on what version of FF, not Puppy. Might have
been easier in some respects if I used the Frugal install, but I
prefer a full install. And live USB isn't an option here: the
Lifebook can't boot from a USB stick. (It can boot from a USB
floppy drive, though.)
> But upgrading an entire OS just because of unfixable bugs is NOT a
> solution, so people should not suggest things like that. Similarly
> upgrading or buying new hardware is not a good answer to anything.
Sometimes it *is* a good answer, as the simplest and cheapest fix for
problems. Hardware is cheap. Development is expensive. The cost
effective solution in most cases when performance is an issue is
"Throw more hardware at it.", not "Painstakingly optimize." The
exceptions are in the embedded space, where throwing more hardware at
it may not be an option.
>> Once I recalled that MS-DOS and Windows through Vista had the "logged
>> in user is administrator with all powers" model, it got a bit easier
>> to take.
> Yes, because it's hard to control ten bazillion variables like who /
> when / what across a billion files.
I could argue that Windows *should* have taken the "The default user
is a power user, not an administrator" direction much earlier, like
beginning with NT, because NTFS introduced file system support for the
notion that there *were* different users with different permissions on
what they could do.
>> But I started using *nix with AT&T System V before Linux was
>> a gleam in Linus's eye, and I've been an admin on multi-user machines
>> where I've spent time locking things down so people *couldn't* become
>> root and step on someone else's toes, so I never became comfortable
>> with the idea. I *prefer* to run as a normal user and use sudo when I
>> must do something that requires root powers.
> Keep files in separate directories. Have partial backups (of any
> kind). Don't delete rashly. Stick to safe, known working software.
> Etc. etc. I know I'm naive here, I don't grok the whole chmod / chown
> bullcrap very well, but reasonable caution is good enough for average
> use (for hobbyists like me).
In commercial settings, you must often deal with environments that
don't make that possible. A co-worker and I spent some time a few
years back trying to lock things down because someone elsewhere in the
company was worried that people working on different projects
could see data for projects they weren't working on, and the
SVP/Operations expressed concern. It turned out we needn't have
bothered, because that was simply the way the software we were using
worked, and in practice, it wasn't an issue. When the original person
expressing the concern understood how it worked, the response was "Oh,
never mind, then." Had he spoken directly to me I could have told him
that and side-stepped the problem, but his query was filtered through a
non-technical manager and critical details got lost.
> Multi-user is fine, if you need it and can do it and understand and
> tolerate it. Otherwise, it's overkill.
But Linux is based on the design of Unix, and is inherently
multi-user. Running as admin is not a good idea under Linux because
you can shoot yourself in the foot. Running as Admin in Windows is
not a good idea because bad guys can shoot you in the foot. Most
exploits require admin privileges to do the dirty work and bounce off
when they can't get it, which is why the default in Vista and Win7 is
to *not* run as admin.
>> [Ubuntu] was roughly the same level of Linux kernel. Stripping out the
>> Gnome stuff helped a *lot*. It wasn't *quite* as sprightly as Puppy,
>> where small size is a priority, but it was quick enough to be usable.
> It's pretty obvious that Fedora and Ubuntu are very very bleeding
> edge, trendy, too ambitious, so while good, they often accidentally
> break things. They just aren't perfect. I'm not complaining, just
> saying, it can be frustrating. :-/
The same applies to *any* Linux distro. Ubuntu is better than many,
because Canonical makes its living selling supported commercial
versions of Ubuntu. They have an interest in seeing that it's stable
and works as expected. Red Hat does the same with Red Hat Enterprise Linux.
>>> Yeah, Ubuntu has better support, esp. since it's so close to Debian
>>> anyways. This is why Lucid Puppy(s) are meant to be compatible, but
>>> even they aren't very slim anymore (or at least, not as much as I'd
>>> like in RAM footprint).
>> What sort of RAM do they take?
> With Firefox running? More than 256 MB, that's for sure. Even without
> it seems to take almost that much. I don't know, it's frustrating, and
> some of the bigger balls of wax I've never tried recompiling
> (something always fails).
Puppy concentrates on small apps in the default distribution, so take
FF out of the equation. What would you guess is the minimum hardware
needed to install a current Puppy using only what is bundled with it
and have a satisfactory experience?
>> I've *been* a Windows dude remoting into user's boxes to fix PCs.
>> What bothers me is mostly that it *used* to work and now doesn't, and
>> I don't know *why*. I did an upgrade to Ubuntu 11.10, which I think
>> may be the root problem.
> Welcome to modern computers, ten bazillion things can go wrong. That's
> what we get for wanting everything and the kitchen sink. :-/ And
> people still wonder why we use DOS ... it's not easy, but at least
> it's manageable.
I saw lots of BTSOM (Beats The Shit Outta Me!) moments back in the DOS
days, and consider manageability a function of knowledge. You want
complex? Try an IBM mainframe.
>> Yeah, saw that. Looks like it's Grub 1 specific. In particular
>> title FreeDOS
>> uuid 1abf-24ac
>> chainloader +1
>> makeactive is not supported in Grub2, boot does nothing, and the
>> syntax using uuid to locate the device is different.
> Oops, thought the uuid part was GRUB 2, guess not. Ick, partitions are
No, it predates Grub2, but got incorporated into it.
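For the record, a Grub 2 stanza along these lines should do the equivalent of that Grub 1 entry (a sketch I haven't tested on the Lifebook; `search --fs-uuid` replaces the old `uuid` line, and `parttool ... boot+` stands in for `makeactive`):

```
menuentry "FreeDOS" {
    # find the partition by filesystem UUID (Grub 1's "uuid 1abf-24ac")
    search --no-floppy --fs-uuid --set=root 1abf-24ac
    # set the partition's active/boot flag (Grub 1's "makeactive")
    parttool ${root} boot+
    # hand off to the partition boot sector, as before
    chainloader +1
}
```

Grub 2 issues the boot itself when the menuentry finishes, so no explicit `boot` command is needed.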
> so arcane. I can't stress this enough, I know virtualization (and
> emulation) aren't perfect, but it's 100x easier to setup, at least.
> Slow? So what, at least it'll finish in ten hours instead of you (or
> me) wasting 100 hours trying to set up natively, ugh.
Slow would not *begin* to describe it if I tried to do a VM solution
on the Lifebook. It would be a high-tech doorstop.
>>> Computers are so dumb. It shouldn't be this complicated, but
>>> admittedly, you (and I) are going "beyond" average use by
>> It's rapidly becoming less complicated. But at the cost of faster,
>> more powerful, and more expensive hardware.
>> I knew what I was in for when I first started playing with the old box.
> I never knew computers would become this confusing and insane.
I did, but that's perspective, since I began on mainframes in the
late 70's and worked my way across and down.
> Part of the problem is the ten bazillion machine configurations, distros,
> license wars, etc. It's a complicated world.
Freedos-user mailing list