Linux-Misc Digest #733, Volume #24                Tue, 6 Jun 00 22:13:04 EDT

Contents:
  Re: Opening files (Juergen Heinzl)
  Re: MS Word in Linux (Bob Tennent)
  On mounting disks ("Chew GH")
  Re: Hanging problem in Linux (Michel Catudal)
  Re: Serious fragmentation under Linux (MH)
  Re: Serious fragmentation under Linux (MH)
  Re: Serious fragmentation under Linux (MH)
  Writing CD problem ("Eddy")
  Re: Serious fragmentation under Linux (MH)
  Apache SSI config question ("Eddy")
  Re: large disks - partitions problem with RH 6.1 (Leonard Evens)
  Re: Opening files (Leonard Evens)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Juergen Heinzl)
Subject: Re: Opening files
Date: Tue, 06 Jun 2000 23:12:28 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>I am new to Linux and I was trying to open a file I had on a CD. What 
>command should I use to open the file?
>Also, when I open a new terminal window and type cd /mnt/cdrom and type 
>the name of the file, I get bash: Permission denied. What should I do to 
>open the file? Do I have to copy the files from my CD to my hard drive? If 
>so, what command should I use?
[...]

Try an ls -ld /mnt /mnt/cdrom first. I use /cdrom rather than /mnt/cdrom
here; for /cdrom it looks like this ...
drwxr-xr-x   2 root     root         1024 Sep 17  1997 /cdrom
... 

If yours do not look the same, then do, as root, a ...

chmod 0755 /mnt

... and a ...

chmod 0555 /mnt/cdrom

... write permission does not make much sense
for a CD-ROM; I am just too lazy to change it here ;)

If there is an entry for /mnt/cdrom in /etc/fstab similar to this one ...

/dev/sr0 /cdrom  iso9660 ro,noauto,user,exec            0       0

... and there is a mode=<something> to be seen ...

/dev/sr0 /cdrom  iso9660 ro,noauto,user,exec,mode=0123  0       0
                                            ----------
... then just remove it for the time being (the mode part, that is).

Please ignore the rest, the /dev/sr0 and so on - that is valid here,
for a SCSI drive, and may well be different on your machine. See man
mount for more on the various filesystem-specific options, but in any
case copy the original /etc/fstab to /tmp or so first.

Not really a requirement, but since you are new: if you happen to
mess that file up, you are in big trouble.
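
Just as an example, the whole cycle might look like this - the file
name on the CD is made up, of course ...

cp /etc/fstab /tmp/fstab.orig   # keep a backup first, as said above
vi /etc/fstab                   # remove the mode=... option if present
mount /mnt/cdrom                # mount the CD
ls -l /mnt/cdrom                # see what is on it
more /mnt/cdrom/README.TXT      # view a text file, one screen at a time
umount /mnt/cdrom               # unmount before ejecting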

Cheers,
Juergen

-- 
\ Real name     : Jürgen Heinzl                 \       no flames      /
 \ EMail Private : [EMAIL PROTECTED] \ send money instead /

------------------------------

From: [EMAIL PROTECTED] (Bob Tennent)
Subject: Re: MS Word in Linux
Date: 6 Jun 2000 23:49:02 GMT
Reply-To: [EMAIL PROTECTED]

On Tue, 06 Jun 2000 14:35:11 -0700, JCA wrote:
 >
 >    I wonder if there is out there a tool that allows me to read, but
 >not edit, MS Word
 >documents under Linux. 

http://wheel.compose.cs.cmu.edu:8001/cgi-bin/browse/objweb

Bob T.

------------------------------

From: "Chew GH" <[EMAIL PROTECTED]>
Subject: On mounting disks
Date: Wed, 7 Jun 2000 08:13:13 +0800

When changing floppy disks, I find that I need to unmount and then mount
again before the system starts to recognize that the disk has changed. This
is especially so when I am untarring a multi-volume archive and need to
drop to a subshell to carry out the umount/mount routine before tar can
process the next volume. Is there a way to get around that tedious routine?
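
Would something like the following work instead?  This assumes GNU tar
and a standard floppy device; the archive is read from the raw device,
so nothing needs mounting at all:

tar -cMf /dev/fd0 mydir/   # write a multi-volume archive straight to disk
tar -xMf /dev/fd0          # extract it; tar prompts for each next volume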

Is there support for FAT32 partitions on the 2.0.xx kernels I am currently
using?



------------------------------

From: Michel Catudal <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.setup,comp.os.linux.x
Subject: Re: Hanging problem in Linux
Date: 6 Jun 2000 19:51:03 -0500

Rafael wrote:
> 
> My Linux (Red Hat 6.2 with kernel 2.2.14, and also 6.1) hangs.  I run
> Windows 98 on the same computer and it works without hanging. I would
> like to use only Linux on this computer but I can't. It hangs (freezes);
> the reset button cannot restart the computer (black screen). I have to
> turn the power off. Please help me. What could be the reason?
> I have:
> AMD K6-3 400 MHz running on a 100 MHz bus (4x100)
> S3 868 (2 MB) PCI graphics card
> 128 MB RAM (2x64 MB)
> HD IBM GXP 27 GB (Linux on hda2 (boot - below cylinder 1024) and hda6,
> swap on hda7)
> Screen: Nokia 447M
> 
> When I changed the bus speed to 95 MHz (4x95) it stopped hanging. But the
> next day I added an additional PCI network card and it started hanging
> again. Then I went down to an 83 MHz bus speed and it seems not to hang.
> But what is the problem? With Windows 95, 98 and NT I can run even
> overclocked up to 450 MHz with a 112 MHz bus speed.
> What kind of problem could it be: is it related to Linux, to the
> hardware, or something else? Somebody should know this.
> Please help!
> 
> Additional information:
> I had Linux with the same hardware, but with another motherboard and a
> 486 at 120 MHz, and it worked perfectly.
> 
> Please send answer to my email too
> 
> Rafael

What is the voltage on the micro and what is your setting on the motherboard?
On my micro it said 2.3V and the two available choices were 2.2V and 2.4V.
Setting it at 2.2V resulted in tons of crashes; it works beautifully at 2.4V.

-- 
Fed up with crashes under Ti-Mou?
It's time to try Linux
http://www.netonecom.net/~bbcat/
We have software, food, music, news, search,
history, electronics and genealogy pages.

------------------------------

From: MH <[EMAIL PROTECTED]>
Subject: Re: Serious fragmentation under Linux
Date: Tue, 06 Jun 2000 17:53:31 -0700
Reply-To: [EMAIL PROTECTED]

Dries van Oosten wrote:
> 
> On Mon, 5 Jun 2000, MH wrote:
> 
> > I'm not sure your explanation makes any sense, unless "non-contiguous"
> > means something else in the Linux world than it does in the DOS world,
> > or in the English-speaking world for that matter.  I understand
> > "non-contiguous" to mean bits of a single file located on blocks
> > separated by other empty blocks or blocks containing bits of other
> > files.  Since files (blocks) are read sequentially, "non-contiguous"
> > necessarily implies a degradation in performance since more blocks have
> > to be traversed to read (or write) a given file.
> 
> You're right. It does imply a relative degradation in performance. But it
> depends a lot on how the files are used and what the nature is of the
> non-contiguous-ness. Do you notice a degradation in performance as the
> number increases?
> 
> Groeten,
> Dries

No--but my concern was that I had reached a relatively high level of
fragmentation in such a short period of time under very light load/usage
conditions.  It seemed that the level of fragmentation could easily
reach a point where performance was significantly impacted under
"production" circumstances, i.e. with a large number of users and high
load/usage.  One of the other posts to this thread suggested some
reasons why fragmentation might be less relevant in a multiuser
situation, particularly where caching is employed.
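
For anyone who wants to check the figure without waiting for a boot-time
fsck, something like this should work - run it only on an unmounted (or
read-only) filesystem, and the device name here is just an example:

umount /dev/hda6       # or remount it read-only first
e2fsck -fn /dev/hda6   # -f forces the check, -n keeps it read-only
# the summary line reports something like:
# /dev/hda6: 34567/123648 files (12.3% non-contiguous), ...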

------------------------------

From: MH <[EMAIL PROTECTED]>
Subject: Re: Serious fragmentation under Linux
Date: Tue, 06 Jun 2000 17:59:39 -0700
Reply-To: [EMAIL PROTECTED]

"Art S. Kagel" wrote:
> 
> MH wrote:
> >
> > Dances With Crows wrote:
> > >
> > > On Mon, 05 Jun 2000 21:27:59 -0700, MH
> > > <<[EMAIL PROTECTED]>> shouted forth into the ether:
> > > >I've made more than a few posts regarding fragmentation under Linux.  Most of
> > > >the responses have been to the effect that "Linux doesn't have a
> > > >fragmentation problem".  I beg to differ.
> > > >On a recent reboot, I noticed that I had 11.1%, 15.4%, and 19.8%
> > > >"non-contiguous" files.
> > >
> > > As I'm sure N+1 others will point out, this isn't a problem unless you're
> > > suffering horrible filesystem performance.
> > >
> > > Anyway, the "non-contiguous" report can be somewhat misleading.  Linux
> > > (and Unix in general) manages disk space differently from DOS.  Linux
> > > tries to keep all the blocks of a file near each other on the disk.  When
> > > a new file is created, at least 8 blocks are pre-allocated for it, even if
> > > the file is only 1 byte in size.  If you looked at the raw disk, you'd
> > > probably see files spread out pretty evenly across the disk, with some
> > > buffer space between files.
> > >
> > > Of course, files can get split up, but the ext2fs driver tries to keep
> > > things relatively sane.  So you might have 32K of file1, then 64K of
> > > file2, then the next 32K of file1... etc.  Files are kept within the same
> > > "block group" if at all possible, where a block group is generally 8192
> > > contiguous blocks on the disk.  A block under Linux is at least 1K and
> > > often 4K.
> > >
> > > In contrast, DOS filesystems use the first available block they find on
> > > the disk.  So if you have a DOS filesystem like so:
> > > FILE1.TXT 8K -- FILE2.TXT 4K -- FILE3.TXT 16K
> > > and you delete FILE2 and then create a 32K FILE4, you'd have:
> > > FILE1.TXT 8K -- FILE4.TXT 4K -- FILE3.TXT 16K -- FILE4.TXT 28K
> > > whereas under Linux, FILE4 would be placed right after FILE3, where
> > > there's more free space.  That way, FILE4 could be internally contiguous,
> > > though there'd be a 4K section of free space between FILE1 and FILE3.
> > >
> > > I'm sure that people who are more familiar with ext2 internals could
> > > explain this better, but the practical upshot is, "Don't worry about
> > > non-contiguous files unless performance starts suffering!"
> > >
> > > --
> > > Matt G / Dances With Crows              \###| You have me mixed up with more
> > > There is no Darkness in Eternity         \##| creative ways of being stupid?
> > > But only Light too dim for us to see      \#| Beer is a vegetable.  WinNT
> > > (Unless, of course, you're working with NT)\| is the study of cool. --MegaHAL
> >
> > I'm not sure your explanation makes any sense, unless "non-contiguous"
> > means something else in the Linux world than it does in the DOS world,
> > or in the English-speaking world for that matter.  I understand
> > "non-contiguous" to mean bits of a single file located on blocks
> > separated by other empty blocks or blocks containing bits of other
> > files.  Since files (blocks) are read sequentially, "non-contiguous"
> > necessarily implies a degradation in performance since more blocks have
> > to be traversed to read (or write) a given file.
> 
> There are several reasons why file fragmentation is less of a problem under
> Linux ext2fs and other UNIX filesystems than it was under DOS (and windoze
> IS better about this than DOS for similar reasons).  Let me try to explain.
> On a FAT partition each file is represented by a linked list of block
> pointers in the FAT and a fragmented file can mean jumping around the FAT
> table and around the disk quite a bit.  Because native DOS does not cache,
> SmartDrive aside, and because DOS is single user, and so is Windoze, there
> is a big impact on system performance if the drive heads have to be moved
> about a lot.
> 
> On the other hand Linux and other UNIX filesystems represent a file with
> an inode which, unless the file is very large, contains pointers to
> contiguous blocks of disk assigned to the file/inode.  All of the blocks
> pointed to by each inode entry, the 8K referred to, are contiguous.   Now
> any file larger than 8K will have multiple entries to other contiguous
> groups of blocks and each entire group is read into the system cache in
> one operation using read-ahead.  Dances explains how Linux tries to keep
> each group contiguous with the last group in a file or at least close.
> This sometimes helps by reducing head movement.  However, since Linux is
> a multi-user operating system, head position after a read is mostly
> irrelevant as other tasks and users will also be reading and writing that
> drive and the buffer cache will be flushing older data out also moving the
> drives' heads.  This is why one does not notice as quickly any performance
> impact from 'fragmented' files.  The normal head movement that is part of
> a multi-tasking multi-user system masks most performance impact and the
> intelligent caching and inode design improve performance by normally
> fetching the next block from disk to the cache before it is called for
> so that applications do not notice any slowdown.  Intelligent controllers
> and disks (which also mitigate this somewhat for Windoze), elevator sorting of
> requests, out-of-order retrieval, etc all go to make fragmentation of all
> but the worst kind irrelevant.
> 
> Art S. Kagel

Thanks for the excellent response.  I now see that my understanding of
this subject was wholly inadequate to the real complexity of the
situation.  It would be enlightening to see some of the caching
algorithms explained--I'm not asking you, or anyone else in this
newsgroup, to do this, merely making an observation.  A much more
interesting subject than I ever expected!

------------------------------

From: MH <[EMAIL PROTECTED]>
Subject: Re: Serious fragmentation under Linux
Date: Tue, 06 Jun 2000 18:01:56 -0700
Reply-To: [EMAIL PROTECTED]

brian moore wrote:
> 
> On Mon, 05 Jun 2000 22:28:21 -0700,
>  MH <[EMAIL PROTECTED]> wrote:
> > Dances With Crows wrote:
> > >
> > > On Mon, 05 Jun 2000 21:27:59 -0700, MH
> > > <<[EMAIL PROTECTED]>> shouted forth into the ether:
> > > >I've made more than a few posts regarding fragmentation under Linux.  Most of
> > > >the responses have been to the effect that "Linux doesn't have a
> > > >fragmentation problem".  I beg to differ.
> > > >On a recent reboot, I noticed that I had 11.1%, 15.4%, and 19.8%
> > > >"non-contiguous" files.
> > >
> > > As I'm sure N+1 others will point out, this isn't a problem unless you're
> > > suffering horrible filesystem performance.
> > >
> > > Anyway, the "non-contiguous" report can be somewhat misleading.  Linux
> > > (and Unix in general) manages disk space differently from DOS.  Linux
> > > tries to keep all the blocks of a file near each other on the disk.  When
> > > a new file is created, at least 8 blocks are pre-allocated for it, even if
> > > the file is only 1 byte in size.  If you looked at the raw disk, you'd
> > > probably see files spread out pretty evenly across the disk, with some
> > > buffer space between files.
> > >
> > > Of course, files can get split up, but the ext2fs driver tries to keep
> > > things relatively sane.  So you might have 32K of file1, then 64K of
> > > file2, then the next 32K of file1... etc.  Files are kept within the same
> > > "block group" if at all possible, where a block group is generally 8192
> > > contiguous blocks on the disk.  A block under Linux is at least 1K and
> > > often 4K.
> > >
> > > In contrast, DOS filesystems use the first available block they find on
> > > the disk.  So if you have a DOS filesystem like so:
> > > FILE1.TXT 8K -- FILE2.TXT 4K -- FILE3.TXT 16K
> > > and you delete FILE2 and then create a 32K FILE4, you'd have:
> > > FILE1.TXT 8K -- FILE4.TXT 4K -- FILE3.TXT 16K -- FILE4.TXT 28K
> > > whereas under Linux, FILE4 would be placed right after FILE3, where
> > > there's more free space.  That way, FILE4 could be internally contiguous,
> > > though there'd be a 4K section of free space between FILE1 and FILE3.
> > >
> > > I'm sure that people who are more familiar with ext2 internals could
> > > explain this better, but the practical upshot is, "Don't worry about
> > > non-contiguous files unless performance starts suffering!"
> > >
> > > --
> > > Matt G / Dances With Crows              \###| You have me mixed up with more
> > > There is no Darkness in Eternity         \##| creative ways of being stupid?
> > > But only Light too dim for us to see      \#| Beer is a vegetable.  WinNT
> > > (Unless, of course, you're working with NT)\| is the study of cool. --MegaHAL
> >
> > I'm not sure your explanation makes any sense, unless "non-contiguous"
> > means something else in the Linux world than it does in the DOS world,
> 
> It does.
> 
> > or in the English-speaking world for that matter.  I understand
> > "non-contiguous" to mean bits of a single file located on blocks
> > separated by other empty blocks or blocks containing bits of other
> > files.
> 
> No.  That's normal and acceptable for Unix.  Unix just tries to keep the
> pieces of a file close to each other.  They don't have to be
> consecutive and on the same track/head -- they should just be within a
> couple of tracks.
> 
> > Since files (blocks) are read sequentially, "non-contiguous"
> > necessarily implies a degradation in performance since more blocks have
> > to be traversed to read (or write) a given file.
> 
> Only on a single tasking machine where all of its time was reading
> sequential data.  In the real world, having your data like that would be
> painfully slow.  I.e., interleaving access between two files will give
> you the same effect as "fragmentation".
> 
> The Linux (and other modern Unix-like systems since the days of the
> Berkeley Fast File System) file system is designed for the more common
> case of multiple files accessed at the same time.
> 
> --
As Art explained in somewhat greater detail above.  Thanks for the
response.

------------------------------

From: "Eddy" <[EMAIL PROTECTED]>
Subject: Writing CD problem
Date: Fri, 2 Jun 2000 21:19:15 +0800

I want to set up an IDE/ATAPI CD-RW on Red Hat 6.1.  According to the
CD-Writing HOWTO, I need to run the following script to make my IDE CD-RW
behave as if it were a SCSI CD-RW.

The script is as follows:
cd /dev/
umask -S u=rwx,g=rwx,o=rwx
./MAKEDEV loop || for i in 0 1 2 3 4 5 6 7; do mknod loop$i b 7 $i; done
./MAKEDEV sg || for i in 0 1 2 3 4 5 6 7; do mknod sg$i c 21 $i; done
for i in ide-scsi scsi_mod sg sr_mod loop
do
    modprobe $i || grep $i /proc/modules || echo "Module $i missing"
done
cdrecord -scanbus

But the script tells me it is missing some SCSI modules, scsi_mod and
sr_mod. Can anyone tell me where to get sr_mod.o and scsi_mod.o?
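
From what I understand, these modules are built from the kernel source
itself when SCSI support is configured as modules - a sketch, assuming
the source tree is in /usr/src/linux:

cd /usr/src/linux
make menuconfig     # under "SCSI support": set SCSI support, SCSI
                    # CD-ROM support and SCSI generic support to "M"
make dep
make modules modules_install   # produces scsi_mod.o, sr_mod.o, sg.o, ...
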
Thanks

Eddy





------------------------------

From: MH <[EMAIL PROTECTED]>
Subject: Re: Serious fragmentation under Linux
Date: Tue, 06 Jun 2000 18:12:13 -0700
Reply-To: [EMAIL PROTECTED]

Floyd Davidson wrote:
> 
> MH <[EMAIL PROTECTED]> wrote:
> >"Peter T. Breuer" wrote:
> >>
> >> MH <[EMAIL PROTECTED]> wrote:
> >> : On a recent reboot, I noticed that I had 11.1%, 15.4%, and 19.8%
> >> : "non-contiguous" files.  Since this workstation is NOT used as a server,
> >> : and since it has only been up for a few weeks, I found this level of
> >> : fragmentation more than a little surprising.  Even more so, given the
> >>
> >> Well (a) who cares and (b) why do you think it's bad?
> >>
> >> Clue: it isn't. You should recall that you have about 100 processes
> >> running in your system at one time, thus the heads are running between
> >> what those processes want to see. They're not waiting for you to scan
> >> linearly over one file.
> >>
> >> It sounds like your partitions are in rw mode and are too small for
> >> their task.
> >>
> >> Peter
> >
> >Actually, maybe a few dozen processes, most of which have nothing to do
> >with reading files from disk.  See response to Dances with Crows--for a
> >clue.
> 
> Peter has a bag full of clues.  He's a very knowledgeable person,
> and you should listen closely to what he has said.
> 
> It simply does not make any great difference.  Can you actually
> figure out when and where it would cause some difference???
> 
> On a multitasking multiuser system (whether it is running just a
> few dozens of processes or the 100 number that includes those
> not permanently in memory and thus not shown by the ps command)
> the heads are randomly jumping all over the disk anyway.  Each
> time slice that a process gets is very small compared to the
> time for disk io, and hence context switching multiple processes
> that do file io causes the exact same effect as disk
> fragmentation to exist all of the time, even with 0% disk
> fragmentation.  Hence even 50% fragmentation would have little
> *added* effect.
> 
> All of the binaries and configuration files that you installed
> to start with are totally unfragmented.  Only files written
> after that can possibly be fragmented.  Hence 99% of what you
> want to read from disk quickly and repeatedly in a sequential
> pattern is unaffected by whatever happens to files later on.
> Additionally, many files such as logs and configuration files
> are not read sequentially anyway, hence fragmentation once again
> is of no consequence.
> 
> But regardless of what I've pointed out, what Peter has pointed
> out, and the extensive answers of others who gave loads of
> information...  even if none of that existed, it still would
> make little difference because Linux uses disk caching.  Any
> file that is read often is read from RAM, not from disk.
> Fragmentation has absolutely no effect once the file is cached,
> and any file reads that are repeated often enough to affect
> system performance are read from disk exactly one time only.
> 
> As to your partitions and how to reduce the fragmentation, there
> are some things you can do (just be warned that this affects
> only fragmentation, not system performance!).  One is to isolate
> all directories that will be fragmented.  That would be spool
> and tmp dirs, for example.  Put them all on one partition and
> use symbolic links to access them.  For example you have, if I
> remember right, /var and /tmp partitions.  Who cares if /var is
> fragmented??? or /tmp.  Those two partitions are where you want
> to put other directories that might have a great deal of
> write/read/remove activity.  Combine the two of them into one
> partition, mount it as /var and have /tmp and /usr/tmp be
> symlinks to /var/tmp.  Any spool files you find in /usr (though
> in modern systems there should not be any) should also be
> symlinked to a directory in /var.  Likewise any src directories,
> such as /usr/src, should be symlinked and physically on the
> fragmented partition.
> 
> Then make /home a separate partition too, or if space is tight
> put it on the same partition as /var.  One way to do this is on
> a small disk is to mount boot and root partitions (and a /usr
> too if you choose) plus one other on something like /u, /u1,
> /usr1 or whatever.  Then symlink /var and /home and everything
> else mentioned above to that partition.  Basically there are
> non-fragmenting partitions and one fragmenting partition.  That
> means installing a new binary in /sbin or whatever will not be a
> fragmented file.
> 
> It is an interesting exercise to set up the above, but it is
> also a grand waste of time on a modern system. (Years ago it
> made a great deal of sense. And before symlinks were available
> it was an art to conceive of a partitioning scheme that would
> last for as long as possible before a new scheme had to be designed
> and implemented.)
> 
> There are a couple of partitioning tricks that might actually
> help system performance, though I suspect only so slightly as
> to be measurable but not noticeable.  Either use one large root
> partition, or put /usr and /var partitions right next to the
> root partition.  For example, on a 30Gb disk, don't put root
> first, /usr in the middle of the disk, and /var at the end of
> the disk.  That will cause the worst case head seek times for
> most of the disk activity.  Swapping is something you obviously
> want to avoid if at all possible, but if it is not totally
> impossible (a 20Mb system where you actually want to run X),
> put the swap partition next to root too.
> 
> Or best of all, use several disks and put all of these things
> on different disks.  Put the most active two partitions on
> the two different IDE channels too.
> 
> And when all is said and done you'll feel good about it, but
> you won't be able to measure the difference.  :-)
> 
>   Floyd
> 
> --
> Floyd L. Davidson                          [EMAIL PROTECTED]
> Ukpeagvik (Barrow, Alaska)

Art's explanation made a bunch of this more understandable, but everyone
seemed to miss the point that I was talking about a single-user
workstation setup, used primarily as a learning tool.  In any case, I
believe I now understand why fragmentation is not much of an issue with
regards to a server system.  Disk access patterns in a multi-user
environment essentially mimic the patterns of a single user system that
is fragmented--there is no point in beating a dead horse.  Thanks for
the time you spent with your response.
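
For the record, a sketch of the symlink scheme described above, assuming
/var is already its own partition and this is done from single-user mode:

mkdir -p /var/tmp && chmod 1777 /var/tmp  # world-writable, sticky bit
cp -a /tmp/. /var/tmp/                    # keep anything already there
rm -rf /tmp
ln -s /var/tmp /tmp                       # /tmp now lives on /var
ln -s /var/tmp /usr/tmp                   # same for /usr/tmp (move any
                                          # existing contents first)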

------------------------------

From: "Eddy" <[EMAIL PROTECTED]>
Subject: Apache SSI config question
Date: Fri, 2 Jun 2000 21:30:24 +0800

From the Apache website, it says that to set up Server Side Includes in
Apache, the following things are needed:

1) In the <Directory /> section, add "Options +Includes"
2) mod_include is needed
3) Add "AddHandler server-parsed .shtml"
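
Put together, I believe the httpd.conf pieces should look something like
this (the directory path is only an example):

<Directory "/home/httpd/html">
    Options +Includes
</Directory>
AddType text/html .shtml
AddHandler server-parsed .shtml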

But after I set this up, SSI still does not run.  Is there anything
wrong with my setup?
Thanks

Eddy



------------------------------

From: "Eddy" <[EMAIL PROTECTED]>
Subject: Apache SSI config question
Date: Fri, 2 Jun 2000 21:30:28 +0800

>From the Apache website, it says that to setup Server Side Include in
Apache, the following things is needed.

1) In the <Directory \>, add "Options +Include"
2) mod_include is needed
3) Add "AddHandler server-parsed .shtml "

But After I set it, it can't run SSI, is there any wrong with me ?
Thanks

Eddy



------------------------------

From: Leonard Evens <[EMAIL PROTECTED]>
Subject: Re: large disks - partitions problem with RH 6.1
Date: Tue, 06 Jun 2000 19:05:48 -0500

Christoph Kukulies wrote:
> 
> Christoph Kukulies <[EMAIL PROTECTED]> wrote:
> 
> : I have problems installing RH 6.1 on a 40 GB IDE disk.
> 
> : I installed Win98 first and, after running into the <1024-cylinder
> : problem (bootable partitions have to lie below that limit), I created
> : a 2 GB /, a 16 MB /boot, a 4 GB DOS and a 500 MB swap partition.
> : The remaining 30514 MB were made a /home partition.
> 
> : I did all that with the GNOME installer, booted via an NFS install
> : disk.  (The boot disk was from the Red Hat site and already contained
> : an upgrade which, as it seems, doesn't fix the problem.)
> 
> Sorry, somehow a portion of my post got lost.
> 
> The problem is:
> 
> LILO got overwritten when I installed something like OSBS208BETA,
> a multi-boot loader.
> 
> Now I'm looking for a way to establish LILO just on the partition
> rather than in the MBR.
> 
> Is that possible?
> 
> Or is the way always LILO->lilo.conf->multi-OS-boot?
> 
> I don't want to use loadlin and I also want to
> avoid booting one of the OSs to reach the other.
> 
> The other problem, which resurfaced after I did some minor
> tweaking to the partition table, is that
> the 6.1 installer upgrade still cannot cope with a 2 GB DOS partition
> (type 6), a 4 GB extended DOS partition and the rest empty on a 39 GB
> IDE disk.
> 
> : --
> : Chris Christoph P. U. Kukulies [EMAIL PROTECTED]
> 
> --
> Chris Christoph P. U. Kukulies [EMAIL PROTECTED]

I don't have experience with your particular situation, and I'm
not sure just what you have been doing, but let me answer one
question.   You can put the lilo boot loader in either a
primary Linux partition or an extended partition.   If you are
using another boot loader first, you can in some circumstances
put lilo in a logical partition, but if I understand correctly
you are not doing that.  You do that with the
boot=
statement in /etc/lilo.conf.  But IN ADDITION you have to mark
that as the active partition on the disk.
(But this doesn't apply to a second disk.  Some boot loader
has to be on the first disk.)

And don't forget to run /sbin/lilo after changing /etc/lilo.conf.
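
A sketch of the steps - /dev/hda2 is only an example here; use the
partition that actually holds your /boot:

# 1. in /etc/lilo.conf, point boot= at the partition, not the disk:
#       boot=/dev/hda2
# 2. reinstall the boot sector and mark the partition active:
/sbin/lilo
fdisk /dev/hda    # use the 'a' command to toggle the bootable flag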

Finally, if you get the latest version of lilo, you can forget
about the 1024 cylinder limit if your BIOS can handle extended
calls.  (It would be very surprising in a relatively new machine
if it couldn't.)
-- 

Leonard Evens      [EMAIL PROTECTED]      847-491-5537
Dept. of Mathematics, Northwestern Univ., Evanston, IL 60208

------------------------------

From: Leonard Evens <[EMAIL PROTECTED]>
Subject: Re: Opening files
Date: Tue, 06 Jun 2000 19:24:41 -0500

[EMAIL PROTECTED] wrote:
> 
> I am new to Linux and I was trying to open a file I had on a CD. What
> command should I use to open the file?
> Also, when I open a new terminal window and type cd /mnt/cdrom and type
> the name of the file, I get bash: Permission denied. What should I do to
> open the file? Do I have to copy the files from my CD to my hard drive? If
> so, what command should I use?
> 
> In advance thank you. I greatly appreciate your time.
> 
> --
> Posted via CNET Help.com
> http://www.help.com/

You should get a book about basic Linux and learn a few things
first.  But let me try to get you started.  When you type the
name of the file, Linux assumes it is an executable program and
tries to execute it.  (The exact same thing would have happened in a
DOS window under Windows.)  If you don't have permission to execute
it or if it is not an executable file, you will get that response.
If the file is a text file, you should be able to see what is
in it with the command

more filename

This will display the file one screen at a time.  Pressing the
space bar will give you another screenful and q will quit.
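
And if you decide you do want a copy on your hard drive, the cp command
does that; for example (use the actual file name you saw on the CD):

cp /mnt/cdrom/filename ~/    # copy it into your home directory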

-- 

Leonard Evens      [EMAIL PROTECTED]      847-491-5537
Dept. of Mathematics, Northwestern Univ., Evanston, IL 60208

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.misc) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Misc Digest
******************************
