Linux-Misc Digest #68, Volume #28                 Sat, 9 Jun 01 17:13:01 EDT

Contents:
  Re: How to get bigger files? (Dances With Crows)
  Using SONY Clie with Linux (David Fenyes)
  test news ([EMAIL PROTECTED])
  Re: How to get bigger files? (Betastar)
  Re: Apache (Michael Heiming)
  Re: The movie Swordfish and Linus Torvalds ("Martin")
  Re: How to get bigger files? (Byron A Jeff)
  Piping output of "time" command ("Steve D. Perkins")
  Re: Floppy format confusion,HELP!!! (Jeff Pierce)
  Re: Piping output of "time" command (Stu)
  Re: Piping output of "time" command (Stu)
  Re: [Q] serial ports (Ken Mankoff)
  Re: See a man file (Colin Watson)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Dances With Crows)
Subject: Re: How to get bigger files?
Date: 9 Jun 2001 18:28:55 GMT
Reply-To: [EMAIL PROTECTED]

On Sat, 9 Jun 2001 19:52:04 +0200, Jonas Diemer staggered into the Black
Sun and said:
>On Sat, 09 Jun 2001 16:44:54 GMT [EMAIL PROTECTED] (Betastar) wrote:
>> I'm honestly sorry, because I'm sure this has been asked 1,000,000
>> times before... but I've done a Google-search and can't really find an
>> answer that makes sense to me.
>> 
>> I'm running RedHat 7.1 with kernel-2.4.2-2, and I need to have a
>> database file on my system that is larger than 2.0GB
>>
>> (Best case scenario is to tell me how to change my system
>> configuration to take care of it all once-and-for-all.... I've got
>> lots of space, so that's not a problem - 5 36GB hard-drives in a RAID
>
>well, as far as i am concerned, this is a limitation in the ext2
>filesystem, which can't handle files that are larger than 2 GB. I
>recommend changing your filesystem to reiserfs. the easiest way to do
>so is to create a new partition. see www.namesys.com for further
>reference.

How many times does it have to be said?!  The 2G file size limitation is
*NOT* a limitation of the ext2 filesystem, and never has been!  Or was
that 17G partition dump I did several months ago on an ext2 filesystem
me hallucinating?

Grr.  Uninformed people making this mistake are missing a number of
points that would allow them to fix things:

0.  The limitation is twofold and is not filesystem dependent.
1.  The limitation arises from the decision to store some things in
processor-native data types in kernel 2.2 and 2.0.
2.  The x86 architecture uses a 32-bit int for many things.
3.  Therefore, on older kernels on the x86 architecture, file size is
limited to the values you can store in a 32-bit signed int.
4.  Older glibc and all the programs linked against it inherit this
32-bit signed int limitation, leading to the 2G problem.  It never was a
problem on 64-bit architectures like Alpha and Sparc.

So, what can you do?  Upgrade the kernel to 2.4.5, recompile your glibc
against the new kernel, and recompile the applications you use against
this new glibc.  Then everything will use 64 bits for file sizes and
file position offsets, and you can have 2T files.
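
For the application step, the usual recipe (just a sketch -- myapp.c is a
placeholder, and it assumes gcc plus an LFS-capable glibc are already in
place) is to build with the large-file feature-test macros, which getconf
will hand you:

getconf LFS_CFLAGS                          # typically prints -D_FILE_OFFSET_BITS=64
gcc `getconf LFS_CFLAGS` -o myapp myapp.c   # open()/lseek()/off_t become 64-bit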

The fact that you're using RH 7.1 and having this problem surprises me.
We have a server running a stock RedHat 7.0 install, and just yesterday,
I created a 5G file on its disk.  No problems at all.  Check RedHat's
website, search for "large file", and see what you find.  It would not
surprise me if there were a few RPMs you could download for large file
support.
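
If you want to check whether a given box already has large file support
working end to end, a quick test (assuming roughly 3G free under /tmp) is:

# ~3G of zeros; if any layer lacks LFS, dd bails out near the 2G mark
dd if=/dev/zero of=/tmp/bigfile bs=1024k count=3000
ls -l /tmp/bigfile   # otherwise this shows a file well past 2G
rm /tmp/bigfile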

(BTW, Jonas, ReiserFS has its own filesize limitation, but it's 4G.
ext2's limit is 2T.  Next time, make sure you know what you're talking
about before posting, OK?)

-- 
Matt G|There is no Darkness in Eternity/But only Light too dim for us to see
Brainbench MVP for Linux Admin /  Outside of a dog, a book is a man's best
http://www.brainbench.com     /   friend.  Inside of a dog, it's too dark
=============================/    to read.  ==Groucho Marx

------------------------------

Crossposted-To: comp.sys.handhelds
Subject: Using SONY Clie with Linux
From: David Fenyes <[EMAIL PROTECTED]>
Date: 09 Jun 2001 13:31:36 -0500

Hello,

Does anybody know of any work to sync the Clie with Linux via USB?  Is
there any overlap with the work done for the Visor?

Thanks,

David.
-- 
David Fenyes  --  [EMAIL PROTECTED]



------------------------------

From: [EMAIL PROTECTED]
Subject: test news
Date: Sat, 9 Jun 2001 18:02:12 +0000 (UTC)

test news
--
Gloria
http://www.orravan.com

------------------------------

From: [EMAIL PROTECTED] (Betastar)
Subject: Re: How to get bigger files?
Date: Sat, 09 Jun 2001 19:02:13 GMT

On 9 Jun 2001 18:28:55 GMT, [EMAIL PROTECTED] (Dances With Crows)
wrote:
>How many times does it have to be said?!  The 2G file size limitation is
>*NOT* a limitation of the ext2 filesystem, and never has been!  Or was
>that 17G partition dump I did several months ago on an ext2 filesystem
>me hallucinating?

Thank you.  I knew I had read that ext2 can handle larger files, so I
was waiting for another reply ;)



>So, what can you do?  Upgrade the kernel to 2.4.5, recompile your glibc
>against the new kernel, and recompile the applications you use against
>this new glibc.  Then everything will use 64 bits for file sizes and
>file position offsets, and you can have 2T files.

I will try this, thank you.  Fortunately we have just recently
installed Linux, so there's not a whole lot on there right now.  
Do I need to recompile things like MySQL server and stuff?  Or just
things I've added since the initial install?

>The fact that you're using RH 7.1 and having this problem surprises me.

It surprised me too.  I thought these problems had been fixed, but
apparently not in the kernel I've got.

>Check RedHat's website, search for "large file", see what you find?  It would not
>surprise me if there were a few RPMs you could download for large file
>support.

Thanks.  I've been searching on combinations of words like "file size",
"limit", "limitations", etc.  Didn't think of "large file".

Betastar


------------------------------

Date: Sat, 09 Jun 2001 21:28:31 +0200
From: Michael Heiming <[EMAIL PROTECTED]>
Subject: Re: Apache

Multi User wrote:
> 
> Does anyone know how to enable directory browsing on an Apache Web server?
> I'm running RHL 7.0.
> 
> TIA

Options Indexes
IndexOptions FancyIndexing
DirectoryIndex index.html 

are the options to check in your httpd.conf.  Check the docs that came
with your Apache and/or the docs on apache.org for more info.
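
If you want listings only for a particular directory, the directives usually
sit in a <Directory> block; a minimal sketch (the path is only an example --
RH 7.0's default DocumentRoot is /var/www/html):

<Directory /var/www/html/downloads>
    Options Indexes
    IndexOptions FancyIndexing
    # with no index.html present, Apache generates the listing itself
</Directory>

Restart Apache afterwards (e.g. "apachectl restart") so it rereads httpd.conf.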

Good luck

Michael Heiming

------------------------------

From: "Martin" <[EMAIL PROTECTED]>
Subject: Re: The movie Swordfish and Linus Torvalds
Date: Sat, 9 Jun 2001 20:36:28 +0100

Aw jeez - I was gonna go and see that film, but hey - you've spoiled it now!

Martin

"Arctic Storm" <[EMAIL PROTECTED]> wrote in
message news:[EMAIL PROTECTED]...
> I just saw the movie Swordfish, and the movie has a computer hacker from
> Finland.  I missed the character's last name, but it starts with the
> letter T.  I guess it was an inside joke regarding Linus Torvalds.
>
>
>



------------------------------

From: [EMAIL PROTECTED] (Byron A Jeff)
Subject: Re: How to get bigger files?
Date: 9 Jun 2001 15:35:40 -0400

In article <[EMAIL PROTECTED]>,
Jonas Diemer  <[EMAIL PROTECTED]> wrote:
-On Sat, 09 Jun 2001 16:44:54 GMT
[EMAIL PROTECTED] (Betastar) wrote:
-
-> I'm honestly sorry, because I'm sure this has been asked 1,000,000
-> times before... but I've done a Google-search and can't really find an
-> answer that makes sense to me.
-> 
-> I'm running RedHat 7.1 with kernel-2.4.2-2, and I need to have a
-> database file on my system that is larger than 2.0GB
-> 
-> I need to know how I can either ftp this file onto my system without
-> it stopping at 2.0GB, or else (for now) I can get a compressed
-> version, but I need to uncompress the whole thing.  (When I use zcat
-> it stops at 2.0GB)
-> 
-> I have no objections to splitting the file into many pieces if need
-> be, but I don't know how to do that through ftp or zcat or uncompress.
-> 
-> I'm really quite new to the whole Linux thing, so if someone can tell
-> me how to do this in simple words ;) I'd really appreciate it.
-> 
-> (Best case scenario is to tell me how to change my system
-> configuration to take care of it all once-and-for-all.... I've got
-> lots of space, so that's not a problem - 5 36GB hard-drives in a RAID
-> 5)
-> 
-> 
-> Thanks very much!
-> 
-> Betastar
-> 
-
-well, as far as i am concerned, this is a limitation in the ext2
-filesystem, which can't handle files that are larger than 2 GB. I recommend
-changing your filesystem to reiserfs. the easiest way to do so is to create
-a new partition. see www.namesys.com for further reference.

Please stop spreading disinformation. This is simply not true. 

The 2GB file limit was always a function of the Linux Virtual File System (VFS),
which prior to the 2.4 kernel was limited to 32-bit file offsets on 32-bit
machines. This means that any filesystem, regardless of its actual ability to
contain large files, was limited.

As counterexamples, ext2 on Alphas and UltraSparcs doesn't have the 2GB limit
under Linux 2.2. Also, ext2 under 2.4 kernels with proper library support
uses 64-bit offsets.

But you need 4 different layers to get this to work right. And if any layer
is missing, then you don't get large file support:

1) Bottom Layer: Actual file system. ext2 is ready for the task.
2) Next: VFS. Either a 2.2 kernel with an LFS patch or a 2.4 kernel.
3) Libraries: The latest glibcs have large file support.
4) Applications: The apps must be compiled with large file support so that
they make the right library calls.

So your suggestion to change to reiserfs probably will not solve the problem,
simply because it's one of the layers above the physical filesystem that's
causing it. ext2 supports large files, period. However, if any of the layers
above it is missing large file support, it doesn't work.

A quick look at the Red Hat web site doesn't point to the culprit, but it
does point out that the 2.4 kernel's VFS has large file support enabled. So
it's either a library or an application issue. Look there.
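
A couple of quick checks from a shell prompt (the nm test is only a crude
heuristic, and /path/to/your/app is a placeholder for whichever binary is
refusing to go past 2GB):

uname -r        # layer 2: a 2.4.x kernel has the 64-bit VFS built in
rpm -q glibc    # layer 3: the glibc shipped with RH 7.x has the LFS entry points
# layer 4: the app must be built with -D_FILE_OFFSET_BITS=64 (or call the *64
# interfaces itself); one crude check is to look for the 64-bit symbols:
nm -D /path/to/your/app | grep open64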

BAJ

------------------------------

From: "Steve D. Perkins" <[EMAIL PROTECTED]>
Subject: Piping output of "time" command
Date: Sat, 9 Jun 2001 15:00:04 -0400

    I am trying to write a "stress test" script for an application I'm developing...
the script spawns multiple instances of the application as simultaneously-running
background processes.  One of the functions this test script should perform is to
report the time elapsed during each instance's cycle of execution.  The easiest way
to approach this is obviously using the "time" command... something like this:

#!/bin/sh
index=0
while [ $index -lt $1 ]
do
   time ./myApp &
   index=`expr $index + 1`
done

    Now, the problem is that this becomes rather unmanageable when I run tests
spawning more than a few background processes.  What I need is for the output
from all the "time" commands to be stored in a file, so that I can use a simple Perl
script or something to parse out the data I'm interested in.  I tried altering the
process-spawning line like this:

...
time ./myApp >> output.log &
...

    ... but it doesn't work (whether the redirection is placed before or after the
background-process ampersand).  Instead of performing the action I intended
(redirecting the output of the "time" command), it redirects the output of the
application itself.

    Is there any sort of Unix equivalent to parentheses in mathematical operations...
or some means by which I can redirect the output of the "time" command rather than
that of the application it takes as its argument?



------------------------------

Date: Sat, 09 Jun 2001 16:07:58 -0400
From: Jeff Pierce <[EMAIL PROTECTED]>
Subject: Re: Floppy format confusion,HELP!!!

Excellent info about the minor numbers and naming (fd?u* vs. fd?H*).  Now I
need to find info about the "new" formats (1.68 Meg, etc.) and what is
required to format/use them.

Thanks again. Neat site.

George Dau wrote:
> 
> Jeff Pierce <[EMAIL PROTECTED]> wrote:
> 
> ]I used to think I knew about floppies, but now I find a ton of different
> ]formats and /dev/fd*'s.
> ]
> ]What is the difference between /dev/fd0h* and /dev/fd0u* ?
> ]How do you go about formating a floppy now?
> ]Say I wanted to format a floppy 1.68 Meg, how?
> ]
> ]FAQ makes mention of going beyond 80 track to 83. How do you know if
> ]your drive is capable of this?
> 
> http://members.ozemail.com.au/~gedau/resources.html#FLOP
> --
>  ,-,_|\  George Dau.                                                      __
> /    * \ gedau at isa dot mim dot com                                    / |\
> \_,--\_/ I live in .au, you need to add that                            |--+ |
>       v  to the end of my e-mail address above.                          \_|/

-- 
Jeff Pierce
[EMAIL PROTECTED]
http://pages.preferred.com/~piercej



------------------------------

From: Stu <[EMAIL PROTECTED]>
Subject: Re: Piping output of "time" command
Date: Sat, 09 Jun 2001 20:26:07 GMT

"Steve D. Perkins" wrote:

>     I am trying to write a "stress test" script for an application I'm developing...
> the script spawns multiple instances of the application as simultaneously-running
> background processes.  One of the functions this test script should perform is report
> the time elapsed during each instances cycle of execution.  The easiest way to
> approach this is obviously using the "time" command... something like this:
>
> #!/bin/sh
> index=0
> while [ $index -lt $1 ]
> do
>    time ./myApp &
>    index=`expr $index + 1`
> done
>
>     Now, the problem is that this becomes rather unmanageable when I run tests
> spawning more than a small few background processes.  What I need is for the output
> from all the "time" commands to be stored in a file, so that I can use a simple Perl
> script or something to parse out the data I'm interested in.  I try to alter the
> process-spawning line like this:
>
> ...
> time ./myApp >> output.log &
> ...
>
>     ... but it doesn't work (whether the pipe operation is placed before or after the
> background-process ampersand).  Instead of performing the action I intended
> (redirecting output of the "time" command), it redirects the output of the
> application itself.
>
>     Is there any sort of unix equivalent to parenthesis in mathematical operations...
> or some means by which I can redirect the output of the "time" command rather than
> the application it is taking as its argument?

Try this instead:


#!/bin/bash
# (( )) and let are bash features, so use bash rather than plain sh
let index='0'
while (( index < $1 )) ; do
   time ./myApp &
   let index='index+1'
done


Stu


------------------------------

From: Stu <[EMAIL PROTECTED]>
Subject: Re: Piping output of "time" command
Date: Sat, 09 Jun 2001 20:36:44 GMT

Stu wrote:

> "Steve D. Perkins" wrote:
>
> >     I am trying to write a "stress test" script for an application I'm developing...
> > the script spawns multiple instances of the application as simultaneously-running
> > background processes.  One of the functions this test script should perform is report
> > the time elapsed during each instances cycle of execution.  The easiest way to
> > approach this is obviously using the "time" command... something like this:
> >
> > #!/bin/sh
> > index=0
> > while [ $index -lt $1 ]
> > do
> >    time ./myApp &
> >    index=`expr $index + 1`
> > done
> >
> >     Now, the problem is that this becomes rather unmanageable when I run tests
> > spawning more than a small few background processes.  What I need is for the output
> > from all the "time" commands to be stored in a file, so that I can use a simple Perl
> > script or something to parse out the data I'm interested in.  I try to alter the
> > process-spawning line like this:
> >
> > ...
> > time ./myApp >> output.log &
> > ...
> >
> >     ... but it doesn't work (whether the pipe operation is placed before or after the
> > background-process ampersand).  Instead of performing the action I intended
> > (redirecting output of the "time" command), it redirects the output of the
> > application itself.
> >
> >     Is there any sort of unix equivalent to parenthesis in mathematical operations...
> > or some means by which I can redirect the output of the "time" command rather than
> > the application it is taking as its argument?
>
> Try this instead:
>
> #!/bin/sh
> let index='0'
> while (( index < $1 )) ; do
>    time ./myApp &
>    let index='index+1'
> done
>
> Stu

I forgot the first part of the question. There are a couple of ways to do this, but the
easiest is to run the command through nohup; it will log the output to nohup.out.
Example:

nohup time ./myApp &
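
Another sketch that keeps the report in a file of your choosing: the timing
report comes out on stderr, so grouping the command and redirecting stderr
works too (output.log is just the name from the original post):

( time ./myApp ) 2>> output.log &
# or, with the external command, whose stderr can be redirected directly:
/usr/bin/time ./myApp 2>> output.log &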

Stu


------------------------------

From: Ken Mankoff <[EMAIL PROTECTED]>
Subject: Re: [Q] serial ports
Date: Sat, 9 Jun 2001 14:41:06 -0600
Reply-To: Ken Mankoff <[EMAIL PROTECTED]>

On 9 Jun 2001, Dances With Crows wrote:

> On Sat, 9 Jun 2001 10:45:19 -0600, Ken Mankoff staggered into the Black
> Sun and said:
> >On 9 Jun 2001, Dances With Crows wrote:
> >> The serial ports that are attached to the motherboard are usually set to
> >> be ttyS0 and ttyS1 (COM1, COM2.)  If you have an internal modem, it is a
> >> good idea to set that internal modem to another port (COM4/ttyS3?) and
> >> possibly a different IRQ if you have a free IRQ sitting around.  If you
> >> don't have a free IRQ, you must make sure that the serial driver is
> >> compiled with the SHARE_IRQ option turned on to be able to use more than
> >> 2 serial ports at once.  You might also be able to set the second
> >> motherboard serial port to a different COM port from within the BIOS
> >> Setup.
> >
> >I switched some jumpers on my modem. It is now using COM3 (according to
> >the modem) and ttyS2 according to Linux.
> >
> >The Wacom is still ttyS0
> >
> >But i get this:
> >% pilot-xfer -p /dev/ttyS1 -l
> >  Unable to bind to port /dev/ttyS1
> >  pi_bind: invalid argument
> >
> >if i use ttyS0 or ttyS2 as the port, it asks me to "press the hotsync
> >button now..."
> >
> >How do i find out what IRQs i have available, and what IRQs i'm
> >currently using?
>
> OK, the fact that you can use ttyS0 and ttyS2 at the same time is a good
> sign.  If ttyS1 is still inaccessible, something is funny.  ttyS1
> usually uses IRQ 3 and I/O ports 2f8-2ff.  Make sure that nothing else
> is using IRQ 3, that COM2 is enabled within the BIOS Setup, and that the
> permissions on /dev/ttyS1 are reasonable.  Usually, ttyS1 is set to
> root.uucp and chmodded 660, meaning normal users can't access ttyS1
> directly.  I don't know whether pilot-xfer is SUID root by default or
> what (I don't own a Palm....)
>
> The commands for checking IRQs and I/O ports are "cat /proc/interrupts"
> and "cat /proc/ioports".  This information can be misleading, though, as
> it only reports usage for modules that are in use.  My modem is on IRQ
> 5, but that IRQ is reported as free except when I have a PPP connection
> running.
>

Hi,

I fixed it. I don't know why this worked, but I simply swapped my Wacom
tablet (ttyS0->ttyS1) with the Palm Cradle (? -> ttyS0) and it works now.

Thanks for helping anyways!

-k.


-- 
Ken Mankoff
LASP://303.492.3264
http://lasp.colorado.edu/~mankoff/



------------------------------

From: [EMAIL PROTECTED] (Colin Watson)
Crossposted-To: comp.unix.questions
Subject: Re: See a man file
Date: 9 Jun 2001 21:00:20 GMT

drsquare <[EMAIL PROTECTED]> wrote:
>On 8 Jun 2001 22:44:19 GMT, in comp.unix.questions,
> ([EMAIL PROTECTED] (Colin Watson)) wrote:
>>drsquare <[EMAIL PROTECTED]> wrote:
>>>On a slightly different subject, how do you convert a man file into a
>>>plain text file?
>>
>>'man foo.1 > foo.txt' will often do. Otherwise, 'groff -mandoc -Tascii
>>foo.1 > foo.txt'.
>>
>>Cheers,
>
>That leaves with a load of repeated characters and control characters.

So either read it with a good pager, such as less, or pipe it through
'col -b'.
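
For a one-shot conversion to plain text, the two commands can be combined
(assuming the page is foo.1 in the current directory):

groff -mandoc -Tascii foo.1 | col -b > foo.txt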

-- 
Colin Watson                                     [[EMAIL PROTECTED]]
"... a good part of the remainder of my life was going to be
 spent in finding errors in my own programs." - Maurice Wilkes

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to comp.os.linux.misc.

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Misc Digest
******************************
