On 24/03/12 20:14, Jay A. Kreibich wrote:
> Not to nit-pick, but an OS doesn't make it, or not make it, difficult 
> to compile something.  That's usually due to crappy tools and poor 
> support.

Yes, but the tools primarily come from the OS vendor.  After all, they have
to be able to compile and debug the code that makes up the operating system
and the base applications.

> Then again, the last time I compiled SQLite on Windows, I was using a 
> non-Visual Studio, command line compiler.

Compiling 32-bit isn't a big deal and numerous compilers can do it.  MinGW
is the most popular non-commercial compiler and is fine for 32-bit,
although there are some CRT support issues that I and others ignore.  Its
release history has tended to be stable releases in the distant past
followed by numerous unstable updates; it wasn't that long ago that GCC 4
became stable!  However, to my knowledge there is currently no stable
64-bit Windows support.
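
For what it's worth, building the SQLite amalgamation with MinGW is a
one-liner (a sketch, assuming sqlite3.c and shell.c from the amalgamation
download are in the current directory):

  gcc -O2 shell.c sqlite3.c -o sqlite3.exe

That gets you a 32-bit command line shell; no extra libraries are needed
on Windows.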

Microsoft has various free "Express" products as well as variously priced
editions of Visual Studio.  One of the differentiators is that the free
products only produce 32-bit binaries.

Separately, they do have things like the Windows 7 SDK, which also happens
to include compiler support.  You can whack together the Express C/C++
edition and the SDK and actually produce 64-bit binaries.  That is what I
do, and it is command line only.  But it is not a pleasant experience.
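
The compile itself is the easy part once the environment is set up (again
a sketch, using the amalgamation's sqlite3.c and shell.c):

  cl /O2 shell.c sqlite3.c /Fesqlite3.exe

Run that from the SDK command prompt configured for x64 and you get a
64-bit sqlite3.exe; run it from a 32-bit environment and you get a 32-bit
one.  Getting the right command prompt and environment variables set up is
the unpleasant part.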

> True, but the OS doesn't typically cache filesystems on a ramdisk.

Huh?  Nobody claimed that the operating system uses ramdisks as a
filesystem cache.  The claim was that memory dedicated to a ramdisk is not
available for use as a cache.

> Caching slows things down and wastes memory.

Really?  Caches as a rule do not slow anything down.  For sequentially
accessed files they hold read-ahead data, which improves performance, and
for randomly accessed files they hold previously read data in case it is
accessed again.  Both of these things improve performance(*).

Modern operating systems dynamically alter system memory allocation based
on workload so that memory is used in whatever way is most effective.  I'd
hardly call that a waste!
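
You can see the cache at work on any Unix-ish system (bigfile below stands
in for any file that is large but still fits in RAM):

  $ time cat bigfile > /dev/null   # first read: data comes off the disk
  $ time cat bigfile > /dev/null   # second read: served from the page cache

The second run typically completes an order of magnitude or more faster
because no I/O happens at all; the kernel just copies the data out of
memory.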

(*) Nit pickers' corner:  if your random access pattern exceeds the cache
size and you never reread previously read data, then the cache could end
up with few or no hits.  In any event the read data had to be put in
memory somewhere, so that isn't inherently a waste.  The only waste would
be the CPU time spent managing a cache that gets no hits, but since the
CPU is so many orders of magnitude faster than I/O it doesn't really
matter.

Roger