Note that the 1702 changes were effectively reversed by 1704.
The 1705 changes are unrelated.
Dear PerditionC,
UBYTE DiskTransferBuffer[MAX_SEC_SIZE];
wastes 3.5 KB of low memory for *everybody*, not only when it's needed.
regarding how much time we have spent until we had 64 bytes free
I think this is a bad idea
please see my previous messages, I know exactly how much space
I also think that these experiments should NOT be in the stable
On Tue, Feb 7, 2012 at 9:51 AM, Tom Ehlert t...@drivesnapshot.de wrote:
Hi Jeremy, Chris, others,
I am aware they are different...
That is risky - even for a demonstration implementation,
consistency seems important to keep code maintainable...
dynamically adjusted, the same as MS-DOS does. That is currently not done.
that has low priority imho, as drives with large
Hello devs, especially Jeremy,
I reviewed your recent commit 1702 to build 2041 regarding sector sizes
larger than 512. Here are my thoughts and suggestions.
- First, dsk.h defines the maximum sector size (MAX_SEC_SIZE) as 2048 (as
the commit comment says) while the relevant LoL entry (maxsecsize) in
kernel.asm is initialised to 4096.
Yes - this is simply for testing purposes; I plan to default both to
512. I would prefer both set from the