Linux-Development-System Digest #992, Volume #6 Mon, 26 Jul 99 03:14:01 EDT
Contents:
Re: when will Linux support > 2GB file size??? ("Colin R. Day")
Re: Control speaker without /dev/mixer (Frank v Waveren)
Re: when will Linux support > 2GB file size??? (Graffiti)
Re: Bug of GCC (Martin Maney)
Re: Bug of GCC (Martin Maney)
Re: when will Linux support > 2GB file size??? ("Chad Mulligan")
UP and SMP (difference in atomic operation and spinlock function?) ("robert_c")
Re: Bug of GCC ("Cute Panda")
Re: HP CD-RW Supported by RH 6.0? (Stephen Lee - Post replies please)
Re: when will Linux support > 2GB file size??? (Bloody Viking)
Re: dual Celeron MB blows up constantly! (Rich)
generating linux GCC as ARM cross compiler ("Lim, Sung-taek")
Strange compilation problems under Redhat 6.0 (Zewei Chen)
Re: USB, IrDA, PnP in Linux (Helmut Artmeier)
Re: Strange compilation problems under Redhat 6.0 (Mumit Khan)
Re: Why not C++ (David Schwartz)
Re: Problem with glibc compilation (Primary libc)! (Ilya Bassine)
Re: NT to Linux port questions (David Schwartz)
----------------------------------------------------------------------------
From: "Colin R. Day" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: Sun, 25 Jul 1999 22:15:03 +0000
Chad Mulligan wrote:
> >
> >This may have changed, but the last time I looked, Read-Write support for
> >NTFS was termed "experimental" and "dangerous". Or has this been changed?
> >
> >
> Slackware 4 doesn't call it that. And thus far it appears to work OK; they've
> even included security workarounds for ACLs. I don't have enough disk to attempt
> this sort of test, though.
Oops, it just said "dangerous". This was on the 2.2.5 kernel. Which kernel
are you using?
>
>
> >
> >--
> >Colin R. Day [EMAIL PROTECTED] alt.atheist #1500
> >
> >
> >
--
Colin R. Day [EMAIL PROTECTED] alt.atheist #1500
------------------------------
From: [EMAIL PROTECTED] (Frank v Waveren)
Subject: Re: Control speaker without /dev/mixer
Date: Sun, 25 Jul 1999 23:26:18 GMT
/dev/mixer does not work for the PC speaker anyway.
Under X you can set the speaker volume with xset, I believe. Many window
managers also let you control this in their settings/control centres/etc.
I don't know about the console.
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Arun Sharma) writes:
> Hi,
>
> I have a machine which has no sound card and hence no /dev/mixer. But
> I want to be able to control the volume on my PC speaker. Is there
> any Linux utility which lets me do this ?
>
> Thanks!
>
> -Arun
>
--
Frank v Waveren
[EMAIL PROTECTED]
ICQ# 10074100
------------------------------
From: Graffiti <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 25 Jul 1999 15:50:17 -0700
In article <[EMAIL PROTECTED]>,
Thomas Boroske <[EMAIL PROTECTED]> wrote:
[snip]
>It's clear to everyone that you can't mmap the whole of a 5 GB file on
>a system where the logical address range only spans 4 GB. This is not
>what the discussion is about. The discussion is about whether the
Okay, I admit, I wasn't too clear on that point. What I'm saying is that
if you allow 64-bit files, with the current MM system, you can't use
mmap() at all on 64-bit files. Period. Whether you map only 1 byte
or 100000000000*10e^1235142365.5 bytes.
If you want mmap to work, you can do ugly hacks to:
1) allow mmap() on only the part of the file that can be represented
with the current 32-bit file sizes.
2) Forget about binary compatibility and let the user handle all the
breakage, and provide stuff like mmap64() (or whatever the LFS has
come up with for 64-bit mmap(), if anything).
This means you can't mmap() the first 100k of a 100G file into memory.
I'm not talking about tossing the whole 100G into memory.
>What's different under Linux is that it would also require quite a
>lot of work in the kernel/fs area - and what's probably the worst,
>this work would replace a part of the kernel that is usually
>considered elegant and aesthetically pleasing.
Well, I don't think this is *different* for Linux. It's just that
since the source is out there, we can *see* the horrid issues it
brings up. I'm sure most other vendors didn't have 64-bit
clean/capable routines running on 32-bit hardware. :-)
>No wonder Linus says "use an Alpha" then ...
>
>I think, however, that this is not good enough. Limiting the filesize
>your system can deal with to the amount your processor can address is
>not clever.
Sure it is. It simplifies many things and allows you to write some
pretty fast code. Speed can be elegant. :-)
>> Of course, this means you break binary compatibility all over the
>> place and have to emulate things like mmap() with code that will
>> slow it down by quite a bit (speed is one reason mmap() is used over
>> lseek()/read()).
>
>Hmmm, you can't really mmap large files on a 32 bit system, and you
>can't really emulate it either. Of course, you can map parts of it and
>re-map (on the application level), but then you may as well read a part
>into a buffer and work with that.
Sure you can. You make mmap() a library call instead of a system call.
Then when someone dereferences a pointer to an mmap()'ed region, you
have it go through a run-time address translation routine, à la some
of C++'s run-time support. This means you will have to munmap32() part
of the file, mmap32() in the requested chunk, etc. Not pretty.
And this will probably imply re-compiling all the static binaries, at
the very least, to use the new library routines.
>Yes, this sucks. And it is a good reason why the present Linux
>behaviour shouldn't be changed (you would effectively end up with
>two classes of files).
Yup.
-- DN
------------------------------
From: Martin Maney <[EMAIL PROTECTED]>
Subject: Re: Bug of GCC
Date: 25 Jul 1999 22:25:05 GMT
Christopher Browne <[EMAIL PROTECTED]> wrote:
> Furthermore, it is not simply a pedantic question to ask which
> standard you're referring to. I think that ISO and ANSI have both
> declared standards; there have been revisions to those standards;
They are the same standard, however. Literally - ANSI formally adopted the
ISO version (which was substantially identical to the original ANSI standard
modulo some reorganization - more reformatting IIRC) as the official ANSI
standard.
[followup to comp.lang.c or other appropriate venue, please - this should
never have been in colds!]
------------------------------
From: Martin Maney <[EMAIL PROTECTED]>
Subject: Re: Bug of GCC
Date: 25 Jul 1999 22:18:34 GMT
Scott Lanning <[EMAIL PROTECTED]> wrote:
> Anyway, I don't think the C standard has any business specifying
> preprocessor behavior. But who cares what I think...? <sigh>
So sorry, Scott, you're about two decades late to the arguing... and three
decades too late to affect the not officially standardised tradition that
led to C having a pre-processor that would need to have its behavior
specified as part of the language standard.
BTW, whatever caused you to place this in colds, anyway? Utterly
inappropriate... but you probably don't know any more about that than you do
about C. <sigh>
--
Those who don't know history are doomed to whine about it... perpetually.
------------------------------
From: "Chad Mulligan" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: Sun, 25 Jul 1999 18:28:25 -0700
Colin R. Day wrote in message <[EMAIL PROTECTED]>...
>Chad Mulligan wrote:
>
>> >
>> >This may have changed, but the last time I looked, Read-Write support for
>> >NTFS was termed "experimental" and "dangerous". Or has this been changed?
>> >
>> >
>> Slackware 4 doesn't call it that. And thus far it appears to work OK; they've
>> even included security workarounds for ACLs. Don't have enough disk to attempt
>> this sort of test though.
>
>Oops, it just said "dangerous". This was on the 2.2.5 kernel. Which kernel
>are you using?
2.2.6 with about 47 patches. No problems so far, but I'm only copying files from
NT to the Linux partition; I'm not even gonna push the envelope on this one.
Is it just me or, from reading this thread, are these kids having trouble
understanding the concept of streams? It looks like they only handle files by
reading the whole bloody thing into memory. OK for some apps, but impractical for
2GB+ files when you've got a 1GB memory limit, what?
>
>>
>>
>> >
>> >--
>> >Colin R. Day [EMAIL PROTECTED] alt.atheist #1500
>> >
>> >
>> >
>
>--
>Colin R. Day [EMAIL PROTECTED] alt.atheist #1500
>
>
>
------------------------------
From: "robert_c" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.hardware
Subject: UP and SMP (difference in atomic operation and spinlock function?)
Date: 26 Jul 1999 02:42:53 GMT
Hi:
Could someone give me advice about race conditions under SMP?
In my opinion, to avoid race conditions we can use three methods in Linux:
1. Bit operations (just for a binary mutex): set_bit, test_and_set_bit, ...
2. Atomic integer operations: atomic_inc, atomic_dec_and_test, ...
3. Spinlocks: spin_lock, spin_unlock, spin_lock_irqsave, ...
But in which situations should each of the three methods above be used?
My thinking is as follows:
1. [bit operations]: used on UP, for binary locks.
2. [atomic integer operations]: used on UP, for integer counters.
3. [spinlocks]: bit and atomic operations are a subset of what spinlocks
can protect, so a spinlock can do anything they can do. Besides, spinlocks
are mainly used on SMP, so I can use spin_lock on both UP and SMP
(though on UP the performance will be a little worse).
Best Regards,
Robert
------------------------------
From: "Cute Panda" <[EMAIL PROTECTED]>
Subject: Re: Bug of GCC
Date: 26 Jul 1999 02:37:25 GMT
Dear all,
I never expected my posting would raise so much discussion. I have read all
of the replies, and I now agree that code enclosed by #if 0 .... #endif must be
"tokenizable", and I also agree that we should not use #if 0 ... #endif for
commenting out text.
Thanks to all for giving your points of view.
Best Regards,
Murphy Chen
Cute Panda wrote in message <7mlves$nod$[EMAIL PROTECTED]>...
>Hi Friend,
>
> I'm using RedHat Linux 6.0 (4CD Full Package), and I encounter a bug of gcc
>as follows:
>
>
>
>#if 0
>
>It's a bug
>
>
>#endif
>
>
> gcc will parse the lines inside #if 0 .... #endif and then complains
>about "not end of string" on "It's a bug". I wonder why gcc parses the
>lines inside "#if 0 .... #endif"; anybody knows why? Please help! Thanks!
>
>
>
>
>Murphy
>
>
>
>
------------------------------
From: Stephen Lee - Post replies please <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc
Subject: Re: HP CD-RW Supported by RH 6.0?
Date: 26 Jul 1999 01:28:59 GMT
In article <7ncqs4$cg0$[EMAIL PROTECTED]>,
Joerg Schilling <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>David T. Blake <[EMAIL PROTECTED]> wrote:
>>Jack Steen <[EMAIL PROTECTED]> wrote:
>>> Does anyone know of a
>>> reference to HP CD-RW support under Linux?
>>
>>If cdrecord does not support it, it is unsupported.
>>
>>However, the maintainers of cdrecord are completely insane
>>when it comes to supporting everything they can get specs
>>for. To their credit.
>
>Why insane? If I get specs, the drive is usually supported some time later.
He is giving you credit for spending a lot of time supporting
everything you can, which is such a difficult task that a sane person
would not attempt it (and given all the buggy hardware out there, I'd agree).
Stephen
------------------------------
From: Bloody Viking <[EMAIL PROTECTED]>
Subject: Re: when will Linux support > 2GB file size???
Crossposted-To: comp.os.linux.advocacy
Date: Mon, 26 Jul 1999 03:16:27 GMT
In comp.os.linux.advocacy Donovan Rebbechi <[EMAIL PROTECTED]> wrote:
: What about the Merced ? There is definitely "push", but not momentum yet.
: Given that it's intel who are "pushing", I'd expect to see momentum in
: due course.
Trouble is, Intel hasn't started mass-producing 64-bit CPUs yet. Once they
do, there'll be lots of affordable 64-bit stuff. And the nice thing
is that a 64-bit time_t will be plenty Y2K+38-compatible (Y2K+38-compliant
for the purists). I can't wait for affordable 64-bit gigahertz
machines! A Cray in every home! But Windows will slow it down to the point
that it will be as slow as a Commodore 64...
--
CAUTION: Email Spam Killer in use. Leave this line in your reply! 152680
First Law of Economics: You can't sell product to people without money.
4321261 bytes of spam mail deleted. http://www.wwa.com/~nospam/
------------------------------
From: [EMAIL PROTECTED] (Rich)
Subject: Re: dual Celeron MB blows up constantly!
Reply-To: [EMAIL PROTECTED]
Date: Mon, 26 Jul 1999 03:27:53 GMT
On Tue, 20 Jul 1999 00:36:04 -0400, Brian Gilman <[EMAIL PROTECTED]> wrote:
>Hello all!
> Well, I got my Abit dual Celeron board today and have had nothing but
>problems with different kernels.....I wanted to use this board to learn
>about SMP and programming threads with SMP but, it's just not stable
>enough.....Sigh......Does anyone know what kernel version is considered
>the most *stable* for SMP? Thanks in advance!
Well, I've had my Abit BP-6 running 2.2.10 (466MHz o/c to 525) for
two weeks straight, without problems.
- Rich
------------------------------
From: "Lim, Sung-taek" <[EMAIL PROTECTED]>
Crossposted-To: gnu.gcc.help
Subject: generating linux GCC as ARM cross compiler
Date: Mon, 26 Jul 1999 14:15:02 +0900
Hi, I'm trying to build Intel Linux gcc-2.8.1 as an ARM cross compiler as
follows:
./configure --target=arm-unknown-aout
/* configuring with anything but arm-*-aout didn't work, so I chose the above */
make LANGUAGES=c
Then it complains that some assembly instructions are not i386
instructions. When I look into the makefile, it seems that ./xgcc tried to
compile libgcc1.c, and the i386 as (not an ARM as) tried to assemble the ARM code.
So I thought that if I installed binutils configured for ARM there would be no
problem, but it didn't make any difference. Is there anyone who can help me?
Thanks in advance.
--
Lim, Sung-taek
[EMAIL PROTECTED]
http://poppy.snu.ac.kr/~totohero/
------------------------------
From: Zewei Chen <[EMAIL PROTECTED]>
Subject: Strange compilation problems under Redhat 6.0
Date: Mon, 26 Jul 1999 04:52:16 GMT
Here's a simple program that does not compile under the
stock installation of Redhat 6.0.
#include <stdio.h>
FILE *f = stdout;
The error message I get is:
foo.c:3: initializer element is not constant
I've reproduced this problem on 5 different machines,
all with Redhat 6.0, installed by different people. However,
I've not seen this problem discussed anywhere in the newsgroups.
I suspect an include file problem. Has it been reported,
or has a bug fix been posted?
BTW, the relevant packages these machines have are:
$ rpm -qf /usr/bin/gcc
egcs-1.1.2-12 (or egcs-1.1.2-16)
$ rpm -qf /usr/include/stdio.h
glibc-devel-2.1.1-6
$ rpm -qf /lib/libc.so.6
glibc-2.1.1-6
Thanks for any info or pointers.
Zewei
------------------------------
From: Helmut Artmeier <[EMAIL PROTECTED]>
Subject: Re: USB, IrDA, PnP in Linux
Date: Mon, 26 Jul 1999 07:39:33 +0200
Please try these links about IrDA:
http://www.cs.utexas.edu/users/kharker/linux-laptop/
http://www.snafu.de/~wehe/IR-HOWTO.html
http://www.cs.uit.no/linux-irda/index.html
I am working on IrDA problems and it would be nice if Mr.
'[EMAIL PROTECTED]' :-)) could contact me:
Helmut dot Artmeier at JUMPtec dot de
> Hi,
>
> Any site could I find out the latest info on Linux with USB and IrDA
> Driver/Support/Development ?
> ( as well as PnP, if any ) ?
>
> Thank
>
> --
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> :< [EMAIL PROTECTED]
> :< [EMAIL PROTECTED]
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
------------------------------
From: [EMAIL PROTECTED] (Mumit Khan)
Subject: Re: Strange compilation problems under Redhat 6.0
Date: 26 Jul 1999 05:14:01 GMT
In article <[EMAIL PROTECTED]>,
Zewei Chen <[EMAIL PROTECTED]> wrote:
>Here's a simple program that does not compile under the
>stock installation of Redhat 6.0.
>
>#include <stdio.h>
>
>FILE *f = stdout;
>
>The error message I get is:
>
>foo.c:3: initializer element is not constant
That's because your code is ill-formed. stdin/stdout/stderr are not guaranteed
to be manifest constants by the C standard, and hence you can't do what
you're trying to do. Just initialize it at function scope. (It does
work in C++, of course.)
One culprit that produces code like this is older versions of lex.
Regards,
Mumit
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: Why not C++
Date: Sun, 25 Jul 1999 23:47:51 -0700
> > It is not the language's place to decide when you need to break the
> >rules. That's your job.
>
> Except when, like millions of C and C++ programmers, you don't know the better
> portion of the rules, so that you don't know that you are breaking them.
> Constructs along the schema of
I do not believe it is possible to create a useful tool that cannot be
misused.
DS
------------------------------
Date: Mon, 26 Jul 1999 09:49:30 +0400
From: Ilya Bassine <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Problem with glibc compilation (Primary libc)!
Andreas Jaeger wrote:
> >>>>> Ilya Bassine writes:
> > [.../
> > (All works fine till the moment when test-installation.pl.sh
> > starts)
>
> Have a look at the script itself and see what exactly goes wrong. The
> script is scripts/test-installation.pl and compiles a simple "hello
> world" program linked against all libraries. Do it yourself:
>
> - Compile it, check ldd output.
> - Compile it against all libraries (see what the script does). Does
> the script still work ?
>
> Andreas
> --
> Andreas Jaeger [EMAIL PROTECTED] [EMAIL PROTECTED]
> for pgp-key finger [EMAIL PROTECTED]
Thanks!
I've found the bug. It was a symbolic link (not listed in the HOWTO) in /lib:
libdl.so was linked to libdl.so.1 (libdl.so.1.9.9), and one of the tests uses
it. That library was compiled with libc5, and it gives an error message.
Friday night I successfully installed glibc-2.1.1, and the installation was
checked.
After that, pgcc, binutils, and make were recompiled.
I built mysql-3.22.22, samba-2.0.5a, apache-1.3.26, and php-3.11 with the new libc6.
They work!
Ilya
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: NT to Linux port questions
Date: Mon, 26 Jul 1999 00:07:08 -0700
Another thing you can't 'select' on is a non-blocking file read from a
slow NFS server. You just have to keep calling 'read'.
DS
Robert Krawitz wrote:
>
> Emile van bergen <[EMAIL PROTECTED]> writes:
>
> > Yes. But my point is, your application _will_ (need to) block at some
> > point, otherwise it would consume 100% cpu, which is
> > probably unnecessary. With select() you can wait till _anything_
> > interesting happens. As I said before, there are _no_ events in unix
> > (apart from signals, but they can be converged to file io using a simple
> > pipe write) that cannot be waited for using select().
>
> Unfortunately, that isn't true; the System V IPC (shared memory,
> semaphores, and message queues) can't be. Fortunately, it's not too
> hard to do the same thing with more traditional means.
>
> --
> Robert Krawitz <[EMAIL PROTECTED]> http://www.tiac.net/users/rlk/
>
> Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
> Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]
>
> "Linux doesn't dictate how I work, I dictate how Linux works."
> --Eric Crampton
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************