Re: [ql-users] new hard disk

2007-05-01 Thread Norman
Morning Ade,

[EMAIL PROTECTED] wrote:

 I must admit, I was assuming Sinclair had used 1024 byte blocks on his
 microdrives - I may need to be corrected on that.
I suspect that 1024 is correct, although the free space/total space numbers (on 
a DIR or STAT) were reported in sectors, with a sector being 512 bytes if I 
remember correctly. I think each file had a 64 byte overhead on the first 
sector for the file header, which was also the directory entry.
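As a rough illustration of that overhead (a sketch only - the 512 byte sectors 
and the 64 byte header are as I remember them, so treat the figures as 
assumptions):

# How many 512-byte sectors a file needs, assuming a 64 byte header
# is taken out of the first sector (example file size made up):
filesize=1000
total=$(( filesize + 64 ))            # data plus the per-file header
sectors=$(( (total + 511) / 512 ))    # round up to whole sectors
echo "$filesize data bytes -> $sectors sectors"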

However, having said that, I have never been able to read the 64 byte file 
header, or see it, directly from the file - even though I've seen it written 
down that it can be done. I'm pretty sure I even used a disc sector editor 
program to check it out - still no joy. I remain to be convinced of the actual 
existence of this phantom 64 byte header 'in' each file.


 Stephen Usher's description of the perils of formatting is amongst the best
 I've ever seen. It's true that you can lose staggering amounts of disk space
 to a bad file format... 
Yes. I remember the old days when, as you added a bigger disc to DOS/Windows, 
you didn't get as much extra space as you expected. The bigger the disc, the 
bigger the cluster size, so the bigger your small files actually were in 
reality. This was due to FAT16 (the forerunner of FAT32) only having 16 bit 
cluster numbers - so if you put too many MB on the new drive, it 'adjusted' 
the cluster size so that the whole disc (subject to some other limit) fitted 
into a 16 bit number. Very helpful indeed - not!
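To put some illustrative numbers on that (the 2 GB partition size and the 
usual FAT16 ceiling of about 65,525 clusters are assumptions for the sake of 
the example):

# Why FAT16 clusters grow with the disc: cluster numbers are 16 bit,
# so a partition gets at most ~65,525 clusters and the cluster size
# has to stretch to cover the whole disc.
partition=$(( 2 * 1024 * 1024 * 1024 ))   # a 2 GB partition (assumed)
max_clusters=65525                        # FAT16 limit on cluster count
echo "cluster: $(( partition / max_clusters )) bytes"   # ~32 KB
# ...so even a 1 byte file occupies a whole ~32 KB cluster.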


 However, I think anything in the 3 to 4 million microdrive equivalents will
 probably last most of us for a while yet (unlike a 400GB PC disk, which at
 current rates will be obsolete in 18 minutes and 23 seconds).
Hmmm. I'm just wondering how long it would take to feed the aforementioned 3.x 
million cartridges into Dilwyn's Super Disc Indexer/Labeller program to 
catalogue the contents of them all.

Let's see :

* assume 30 seconds per scan (that's optimistic!)
* assume 3.5 million cartridges.

So, that's 105 million seconds, not including run up/run down and swapping 
over time.

That works out at 1,750,000 minutes, or 29,166 hours and 40 minutes. That is 
1,215 days, 6 hours and 40 minutes - over three years of continual time.
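For anyone who wants to check the sums, here they are as a quick shell 
calculation:

# 3.5 million cartridges at 30 seconds each:
secs=$(( 3500000 * 30 ))              # 105,000,000 seconds
echo "$(( secs / 60 )) minutes"       # 1,750,000 minutes
echo "$(( secs / 3600 )) hours"       # 29,166 hours (and 40 minutes)
echo "$(( secs / 86400 )) days"       # 1,215 days (and a bit)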

That's a hell of a lot of cartridge labels too :o)


Cheers,
Norman.

___
QL-Users Mailing List
http://www.q-v-d.demon.co.uk/smsqe.htm


Re: [ql-users] new hard disk

2007-05-01 Thread Norman
[EMAIL PROTECTED] wrote:

 I must try and
 write a program to see just how many directories there are.

Assuming Linux (because you mentioned it), how about:

cd /
ls -Rl | grep ^d | wc -l


I got 2,889 on a test system I have here at work - and that's not from the root 
of the drive. 


Cheers,
Norman.

___
QL-Users Mailing List
http://www.q-v-d.demon.co.uk/smsqe.htm


Re: [ql-users] new hard disk

2007-05-01 Thread Phil Kett
[EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:

   
 I must try and
 write a program to see just how many directories there are.
 

 Assuming Linux (because you mentioned it), how about:

 cd /
 ls -Rl | grep ^d | wc -l


 I got 2,889 on a test system I have here at work - and that's not from the 
 root of the drive. 
   
You'd be better off with

cd /
find ./ -type d -print | wc -l

:-)


___
QL-Users Mailing List
http://www.q-v-d.demon.co.uk/smsqe.htm


Re: [ql-users] new hard disk

2007-05-01 Thread Tobias Fröschle
[EMAIL PROTECTED] wrote:

Norman,
investigation is simple:
 * ls -lR | grep ^d | wc -l

 11.735 seconds of real time to find 2,889 directories
   
This will put a listing of _all_ files on your hard disk into the pipe; 
grep will search through it and throw most of it away.
 * find ./ -type d -print | wc -l

 1.133 seconds to find 3,451 directories.
   
This will only put directory names into the pipe.

Command I produced a whole lot more data than needed to be handled 
(and most of it was thrown away afterwards), whereas command II only 
produced the data you wanted.
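
For anyone who wants to see the difference on their own machine, something 
like this should do (timings will obviously vary, and you may want a smaller 
tree than /):

time ls -lR / 2>/dev/null | grep '^d' | wc -l
time find / -type d 2>/dev/null | wc -l

As for find reporting more directories: a likely reason is that ls -R skips 
dot-directories unless given -a, and find also counts the starting directory 
itself.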
But all that is probably way off-topic.

Tobias
 So, not only is find much faster, it finds more directories too. Hmmm. 
 Further investigation required methinks.


 Cheers,
 Norman.

 ___
 QL-Users Mailing List
 http://www.q-v-d.demon.co.uk/smsqe.htm

   

___
QL-Users Mailing List
http://www.q-v-d.demon.co.uk/smsqe.htm


Re: [ql-users] new hard disk

2007-05-01 Thread Robert Newson
David Tubbs wrote:

 At 15:10 30/04/2007, you wrote:
 
 
I must admit, I was assuming Sinclair had used 1024 byte blocks on his
microdrives - I may need to be corrected on that.

 
 512 byte sectors.
 One map sector, one byte for each potential sector; I had a few mdvs 
 of 250 sectors.

Didn't it have 1 word, 2 bytes, per sector: the file number in one byte ($f8 
= sector map, $fd = free, $ff = dead) plus the block number within the file 
in the other byte?
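
If the map really is laid out like that, decoding an entry would look 
something like this (a sketch only - the sample entry value and which byte 
holds the file number are assumptions):

# Decode one 2-byte map entry: file number in one byte (assumed here
# to be the high byte), block-within-file in the other.
entry=0xfd00                          # made-up entry for illustration
file=$(( (entry >> 8) & 0xff ))
block=$(( entry & 0xff ))
case $file in
  248) echo "sector map (\$f8)" ;;
  253) echo "free sector (\$fd)" ;;
  255) echo "dead sector (\$ff)" ;;
  *)   echo "file $file, block $block" ;;
esac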


 
 Note too, every file is fragmented of necessity by the interleave factor, 
 allowing the QL to digest the data from one sector before reading the 
 next, some 11 or 13 further on. If the file were contiguous it would 
 require a full revolution between sector reads.

That raises the question of what one means by 'fragmented'.

If the sector allocation is such that some sectors are deliberately skipped 
(interleave), then surely a fragmented file would be one that doesn't use the 
preferred sector(s) - which for DOS users (with hard disks) would apparently 
be the next contiguous sector.
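
Just to make the interleave concrete, here is the physical order it implies 
(the interleave of 13 and the 250 sector cartridge are taken from the figures 
mentioned above - treat both as assumptions):

# Physical sector visited for each logical sector, with an interleave
# of 13 on a 250-sector cartridge:
sectors=250; step=13; pos=0
for logical in 0 1 2 3 4 5; do
  echo "logical $logical -> physical $pos"
  pos=$(( (pos + step) % sectors ))
done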


 I had some of the Psion packages which were Turbo Load, laid out for 
 optimum pickup speed; they had to be copied by a special procedure 
 equivalent to a DOS DISKCOPY.

I never did that, but I had heard of it being done - the file laid out so 
that when the QL had digested the current sector, the next required one 
would be passing the read head...did it take into account scatter[1] loading?

[1] As each sector has a file number and a block number, its position in the 
file is instantly recognised when read, and if a later block happens to pass 
the read head before an earlier one, it is loaded first, into the correct 
place in memory.
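
A sketch of the idea (block size and arrival order made up): because every 
sector carries its block number, each block's destination in memory is known 
the moment it is read, whatever order the blocks happen past the head.

blocksize=512
for block in 3 0 2 1; do              # the order the blocks pass the head
  offset=$(( block * blocksize ))
  echo "block $block -> memory offset $offset"
done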


___
QL-Users Mailing List
http://www.q-v-d.demon.co.uk/smsqe.htm