Re: [ql-users] Source Code

2002-05-22 Thread ZN

OK, this has now officially gone well into the ridiculous (I'm using a fair
bit of energy to stop myself from using a much stronger word).

The fact of the matter is, this argument over the licence is pointless: no one can, nor should, win it, because it is the wrong argument.

There is NOTHING in the licence to stop anyone from contributing extensions
to the OS specifically to ENABLE using free or commercial stuff as add-ons
to it. As long as that extension does not 'close' a part of the OS, and
presents an added value to the core (and here is where the Registrar has
the last word, which may well be the most difficult part of that job!), it
will be included.

This is equally true for support of specific platforms. The support for a
platform per se is not, nor should it ever be, part of an OS core - the
ability to add this support externally SHOULD be.
Arguments that a platform cannot be supported because the programmers
writing the support will not want to contribute it to SMSQ under the
current licence are invalid, because they should not contribute that
support in the first place. The part they should contribute is the set of
changes necessary to have this support as an external module, AND THAT'S IT.
All that has to be done is to show the registrar that this contribution is
added value to the core in general. There is, however, nothing to prevent
anyone from contributing the source to a specific add-on (for instance, a
driver) to be distributed alongside the official distribution (i.e. sharing
the same media) - but that does not have to fall under this licence!
Arguments that basically 'appropriate' the OS, under the excuse of added
support for a platform, in order to leverage a specific model of code
distribution, are flawed because they are based on the notion that platform
support can only be - and therefore must be - an integral part of the OS,
and should be distributed as one lump binary (or source).
* Problem: special platforms like emulators may have parts of the OS
rewritten as native code. It would be in everyone's best interest to devise
a standard way of doing this, not just for a specific case but as a
general resource (yes, I am aware this is not easy!).

If a contribution becomes a part of the official distribution, under the
current licence the contribution has to be free. May I remind everyone that
by gaining access to the source, you will essentially be able to use code
that someone, somewhere, has paid to have written - essentially for free.
It logically follows that any contribution added to the official
distribution must also be free. Also, if we are talking about the resultant
official distribution still being SMSQ: since SMSQ is (c) TT, so is every
subsequent version. Anyone can write a functionally equivalent system, not
call it SMSQ, and have it be (c) whoever. It's been done with Minerva and
there were no problems there. Cases where you want to retain (c) should be
handled by only submitting the absolutely necessary part as an extension to
the OS core (as in the case of platform support, see above).

The registrar should not guarantee inclusion of anything, nor its
persistence in the core, for a very good reason: no one's contribution is
'the last word' in programming, never to be improved or expanded on - or
even completely replaced. There has to be a means to do 'garbage
collecting'. The best way to ensure that a contributed extension stays
there is to design it well, in the best interest of everyone and not just a
specific group. If someone wants to engage in conspiracy theories, they may
find more fertile ground for them by submitting them as scripts for 'The X
Files' (even though the series has ended) rather than clogging up this
list.
* Problem: there has to be a means to decide which direction of development
is preferred, and someone to decide it - this is where the registrar's
criteria for inclusion/exclusion come from. This does not mean that the
rules should be included in the licence; instead, there MUST at least be a
reference in the licence to some document containing the rules. That should
be the basis of any guarantee of fairness when a contribution is considered
for inclusion into the core.

Availability of the binaries in any circumstance cannot be guaranteed, and
it is absurd even to ask for this. A meteor could hit the exact spot where
they were kept and they would be lost (if you get my hint). Much of the
argument on this is again based on the notion of binaries for a specific
platform. The core should be general; the platform-specific add-ons may
(and probably will) be associated with the 'manufacturer' of the platform
and are the responsibility of said 'manufacturer'. By having a general core
which 'everyone' needs, you guarantee it is available from multiple
distributors, so the chances of it getting lost are reduced.
* Problem: a LOT of work needs to be done to SMSQ before it reaches that
stage. This work is ABSOLUTELY NECESSARY in the long run.

Under the 

Re: [ql-users] Source Code

2002-05-22 Thread ZN

On 22/05/02 at 14:28 Dave wrote:

 The part that they should contribute are the changes necessary to have
 this support as an external module, AND THAT'S IT.

So who develops the kernel?

That is a good question.

It is really a cooperative effort, and the key to keeping it that way is
finding a balance between the authority of the registrar and the
contributors. The registrar has the final word on what goes in and what
stays out, but this is balanced by the fact that he can only add what he is
given in the form of contributions. In addition, it is reasonable to expect
that the registrar will get feedback from people who get the new official
core releases, and may consult others about his decisions, so that's
another way his decisions can be influenced.

This is why I mentioned that some reference to a set of guidelines will
have to appear in the licence; the above needs to be formalised.

It seems to me that one major concern is how reasonable the registrar will
be. The fact of the matter is, no regulations can guarantee a reasonable
registrar - you can only implement a 'security measure' in the licence.
One way you can do this is implicitly: if the registrar is unreasonable,
the probability rises that someone will sufficiently modify, or completely
rewrite, the OS using the source as a reference, to 'free' it from the
constraints of the licence and do whatever they want with it. This
possibility may not be such a bad thing (and it is extremely difficult to
do anything against it anyway, with or without available source -
availability of the source just makes it easier).
Another way is to do it explicitly: for instance, having someone/some body
that can veto the registrar's decision. If you want to expand that concept
further, you can appoint a board of 'consultants', which then begs the
question of defining under which circumstances one can become a member, or
stop being a member, etc. (after all, you have to guarantee that the board
is reasonable too) - and you are well into red tape already.

The reality of the matter is that the registrar is going to consult other
people, and is more likely to consult some people than others. For one, the
author of a contribution will be consulted if the contribution is unclear
in some of its elements. Then, people like TT, Jochen Merz, Marcel Kilgus,
Joachim Van Der Auwera, to name a few, are likely to have stronger voices
than others. It would be very difficult to formalise a board of consultants
right now, but effectively some people are just that - people who are/were
'closer' to TT than others. The best you could do is to 'invite' a starter
set of people and have that starter board vote in other members - and you
would then have to include a possibility for a member to resign or be voted
out. After that you get into conflict-of-interest issues with people who
are both developers and distributors, defining whether it really is a
conflict of interest or not, etc.

The other way is sort of retroactive - through peer review, i.e. feedback.
Anyone who gets the source can review all the inclusions and provide
feedback about them. It would even be possible to include, with the
author's permission, contributions that are still being decided about - or
even rejected ones - in a distribution, or separately. This is really
implied as, again, no part of the OS is 'the last word', and that includes
contributions. As they say, there is always one last bug somewhere.

This has repercussions for the notion of support as well. The registrar has
to keep track of who contributed what. A contribution where no support
(guarantees of absolute functionality, usage in life support systems, etc.)
is intended or implied is entirely possible. It would be up to the
registrar to decide about this. Whether there is a board of consultants
that gets to see this and can influence the registrar before the fact of
inclusion, or it is negative feedback that gets it excluded (assuming it
influences the registrar), is something to put in the guideline document
mentioned above.

 Under the licence, nothing prevents anyone from rewriting the whole thing
 based on the source, and then doing anything you please with it. As long
 as you don't submit it to the registrar and it's not added to the official
 release, it is not covered by the licence.

Aye. And if I send 100 Euros to TT, I can get SMSQ, mod it any way I see
fit, and sell those new versions under the first sale doctrine, outside of
the license, as they're licensed copies. Can of worms. :/

This is something that is ONLY up to TT. Whether he has surrendered the
rights to further licence SMSQ or not is something that has not been
mentioned so far. This certainly needs qualification in the licence. My
guess at this point would be that TT himself would have to work under this
licence as well. It may give him the right to licence the current SMSQ
'snapshot' elsewhere, but it should not give him the right to suddenly
proclaim something else the 

Re: [ql-users] gold card and qubide

2002-05-19 Thread ZN

On 19/05/02 at 19:21 Dexter wrote:
On 19 /05/02 [EMAIL PROTECTED] wrote:

 I have a QPower regulator or two which were developed for the QL - £5
 each including postage if anyone is interested..

An alternative approach, when QPower upgrades are no longer available, is
to replace the 1A 7805 with a 1.5A 7805 voltage regulator. The 1A version
when worked hard tends to be bumping up against its thermal protection and
shutting down. The 1.5A version works better under heavier loads.

Neither of which has any relevance, since neither the Qubide nor the GC,
applied to a regular QL, draws power from the QL's 5V supply - the
regulator in the QL only powers the QL. The Qubide hardly uses any current
at all anyway.
You may have a problem if it's an older GC, in which case it does not
supply two address lines (A18 and A19), which need to be pulled low using
resistors. This can be done by inserting 1k or so resistors into the
relevant 68008 socket pins (there are two ground pins, which helps). The
Qubide should be set to the address of the ROM slot, $0C000. If the GC does
not start, you may indeed have a power supply problem, but if so it is to
be found in the power supply 'brick'. Also, it is possible that a line
going from the QL expansion port to the Qubide through-port is missing or
has somehow failed. The missing line is a candidate in case you are using
one of the pre-production Qubides (there seem to be one or two floating
around!); these look slightly different from the production version - they
have a grey PCB and are slightly bigger.

Nasta




RE: [ql-users] gold card and qubide

2002-05-19 Thread ZN

On 19/05/02 at 20:53 henk verbeek wrote:

Thanks for all your advice, but I finally got it to work. I put the Qubide
and the GoldCard together and got my multimeter. I measured whether all the
signals from the QL expansion connector were put through the Qubide to the
gold card. I discovered this way that there was no through connection for
SP0, SP1, SP2, SP3 from the Qubide to the gold card. After this I got my
QL Advanced User Guide and saw these were the peripheral select lines, so
I just soldered 4 wires between the Qubide and the gold card to make the
connection, and voilà, it worked right away.
I am happy everything is working now, but the way in which I succeeded is a
bit strange - I cannot believe it has to be done this way, but I made the
gamble and it works. Is it normal that the Qubide doesn't put these signals
through??

Henk Verbeek.

Yes - it is not necessary for these lines to be passed, because neither the
Qubide nor the GC (strictly speaking) uses them as they were intended (the
Qubide does not use them at all).
These lines are grounded on the QL motherboard (you can check that with a
meter!). The GC/SGC use them as extra ground lines, figuring they will be
plugged into a regular QL. Apparently, this may be an issue with your GC.
AFAIK there are 3 versions of the GC PCB, but up to now I was not aware
this should make any difference on any of them - maybe there are more
versions that I don't know of?

These lines are not used as intended on any current peripheral and in the
future will be tied to ground on anything I'll make.

Nasta




Re: [ql-users] Source Code

2002-05-19 Thread ZN

OK, I've been reading the licence discussion for quite a while and I find
it does make sense for a world where the following are clearly defined:

1) A generic SMSQ core, common to ALL platforms (*).
2) SMSQ extensions - or more precisely, additions or changes to the core -
which start as submissions to the registrar and, if accepted, become part
of the next official core issue.
3) Add-ons, i.e. everything that goes 'on top' of SMSQ but is not part of
the core, and is probably specific to a particular platform.

[Digression: (*) this 'common to all platforms' is a bit of an idealistic
view - a discussion for some other time.]

This may seem like an odd argument, but it is paramount for the issues of
development, support and distribution - not to mention that a clear
definition of the above three is (or should be!) one of the main criteria
used by the registrar to decide what becomes a part of SMSQ and what does
not.

If the above were true, whoever wants SMSQ on a different platform would
not strictly need to distribute the binary, but could instead point the
users to one of the distributors, and offer the necessary add-ons to the
core, plus a way to link everything to make it work on that platform -
removing the platform-specific parts from under the coverage of the
licence, and regulating the distribution and support for them as they see
fit.
In case a developer wants to do something with the core to enable new
functionality, they would be able to get the official source under the
conditions stipulated in the licence, see what would need changes and how,
implement betas and have them distributed under the conditions of the
licence (which I do find somewhat restrictive, but not impossibly so), and
eventually propose their inclusion into the official generic core.
Provided the registrar was convinced the proposal was or could be
beneficial to everyone (**), it would then be included into the next
official core, at which point the developer could use that to implement
specific add-ons of their own, again handled outside of the core licence.

[Digression: (**) a mechanism should be in place for the registrar to
distribute certain beta versions, simply because he would be in the best
position to know the key developers for specific cases where a proposed
change may have wide impact.]

The problem is, this is not the actual situation. Instead, we have an SMSQ
with relatively monolithic parts, some of which may be essential for one
group and at the same time of no interest whatsoever to another. Because of
the absurd idea that every platform, or flavour thereof, should have its
own SMSQ version, it is simply IMPOSSIBLE to write a licence agreement
which would satisfy everyone to an acceptable degree. Arguments like 'I
paid for a feature, so why should I submit it and have anyone but me
benefit from it (financially)' are forever going to be opposed by 'I don't
want to pay for anything because I only do things for free', and that's
only the benign tip of the iceberg. We could collectively come up with a
myriad of scenarios in which any given wording of the licence would not
work. I sincerely hope that not even an attempt will be made to cater for
everything!!!

What I would do to break this deadlock is the following: get the current
official source under the current licence. Have a good long look at it and
figure out how to make a generic core from it. Then propose THAT to the
registrar. Sounds like a lot of work for little gain? The way I see it,
this may indeed be true in the short run. But if it's not ultimately done,
we'll soon all be throwing in the towel, because without this - and a clear
division of what falls under the licence and what does not, i.e. a clear
picture of what SMSQ is and what it can grow into - the best we can hope
for is a situation where 'read TT's code' is replaced by 'read ?'s code'
when it gets into the official release.

Nasta




RE: [ql-users] Haven't seen the sun in a while...!

2002-05-10 Thread ZN

On 10/05/02 at 13:02 Phoebus Dokos wrote:

Actually, when the GF is hopefully available, its onboard DSP will be
able to deal with tasks like that, IIRC the specs Nasta released.

Eh, information is only as good as it is accurate. The DSP is not on the GF
(for it to fit, the printed circuit board would have to be from a 4 - or
more - dimensional space :-) ), but on the Aurora II, which will be the
next project after the GF (and that one will be much simpler and quicker to
do). And I have not released the specs for Aurora II :-)

Phoebus, before you leave for Greece, do you by any chance have the LRESPR
version of the QUBIDE driver? I need something to test the Super/EtherIDE
with :-)

Nasta




Re: [ql-users] QIMI Mouse

2002-05-06 Thread ZN

On 06/05/02 at 15:05 [EMAIL PROTECTED] wrote:

Can someone please confirm what mice will work with this interface??
Will any 3 button PC serial mouse work?? Or do we need an Atari mouse??

Serial mice of any kind will NOT work. You need an Atari, Amiga (needs two
lines swapped, otherwise the cursor goes the wrong way) or Microsoft Bus
Mouse (the connector must be changed from DIN to D9, see the Aurora manual
- Aurora implements a QIMI-compatible mouse interface). The MS Bus Mouse
may make the cursor react very quickly (it generates more movement pulses
per length travelled than the other two) but this is adjustable via the
Qpac/Qram settings window.

Someone must know - the manual does not say!

The manual does not say because the mouse came with the interface, and it
was assumed you a) had one and b) it was the right one :-)

Nasta




RE: [ql-users] ROM versions

2002-05-03 Thread ZN

On 03/05/02 at 09:24 Norman Dunbar wrote:

The circular wheel thingy was a Russion Nokolia something or other, back
about 1850 ish (+- 30 years).

Someone is doing the 'black stuff'

Russian (though I am not sure he was) Nipkovliev. Anyone passing by Munich
can do a quick detour to the Deutsches Museum and see the machine for
themselves. In fact, anyone technically inclined should make it a point to
visit the DM...

Nasta




Re: [ql-users] ROM versions

2002-05-01 Thread ZN

On 01/05/02 at 16:24 Dave wrote:

 Essentially, when you swap QL ROMs, the video output for TV mode
 changes from NTSC to PAL and vice-versa for JSU & JS, respectively.

 Errm... don't the ROMs contain the CPU instructions... the video output is
 surely the job of other board chips (ZX8301, etc).
 However, (these are speculation/vague memories [of rumours?]) the screen
 shape is different(?) and so the drawing routines would be modified (so a
 CIRCLE is still a circle). Also, would the interrupt routine be running
 at 60Hz as opposed to 50Hz, and so some timing routines would have been
 rewritten?

Timing I think is the same, but aspect ratio is corrected.

 I suspect you're both right and both wrong. A US QL with US timings would
 have problems drawing a UK-resolution screen in the reduced time - I
 imagine the machine would slow down greatly. Similarly, I suspect a UK QL
 with US ROMs would speed up slightly. (Slower scan rate AND less to scan)

No, there would be no discernible difference, as the video hardware slows
down the CPU regardless (I think it even does so when it's drawing the
'blank' parts of the screen that are not part of the actual usable screen
area).
The EU and US specs are actually remarkably similar, and only really differ
in the number of lines drawn on the screen: 312 EU, 262 US. However, both
are drawn on a screen that has a 4:3 aspect ratio, so EU resolution
displayed on a US TV would be 'squashed' vertically. IIRC the relevant
routines (CIRCLE, ELLIPSE) are adjusted in the US versions.
Also, EU TV mode occupies 480 x 240 pixels; US should be 480 x 200 or
thereabouts (otherwise a bit at the top and bottom would be 'off screen').
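The 312 vs. 262 line figures can be sanity-checked with a little arithmetic: both modes keep the horizontal line rate near the ~15.6-15.7 kHz a TV expects, which is why only the line count changes between them. A minimal sketch, assuming the nominal 50/60 Hz field rates:

```python
# Sanity check on the line counts above: lines-per-frame times frame rate
# gives the horizontal scan rate. The 50 Hz and 60 Hz figures are the
# nominal EU/US field rates (an assumption of this sketch).

def line_rate_hz(lines_per_frame, frames_per_sec):
    """Horizontal scan rate implied by a line count and a frame rate."""
    return lines_per_frame * frames_per_sec

eu = line_rate_hz(312, 50)   # EU: 15600 Hz
us = line_rate_hz(262, 60)   # US: 15720 Hz
# The two differ by well under 1%, so the TV's line circuitry is happy
# either way - only the number of visible lines per frame changes.
```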

An EU ROM in a US QL might result in the picture becoming unstable on TVs
and composite monitors when monitor mode is selected, because pixels would
appear in parts of the screen that should be blank, and sync signals would
be generated at that point.

To summarise: nothing too spectacular would happen if the ROMs were
exchanged.

 Nasta may want to jump in, but I am also wondering what kind of headroom
 the original QL mobos have for 'overclocking', in the sense of maybe
 adding a faster clocked 68008FN at 10-12MHz. I think the on-board RAM
 would definitely fail at those speeds and may have to be removed/neutered ;)

Actually, it's the 8301 that limits things. The system will stop working
above about 9MHz IIRC, due to read data from the on-board RAM not appearing
at the correct time. I used to run my QL faster using a separate oscillator
for the 68008. I had a socketed crystal, but my choice of crystals was
quite limited at the time, so I know that it still worked at PAL * 2,
8.866MHz. I don't remember if the microdrives still worked, because I had
removed them by that time already. I do know the net did not work, because
parts of its timing are software generated.

It may be possible to use a bit of external logic to slow the 68008 down on
certain cycles and make it compatible again. I know Stuart H had to solve
that for the GC.
In fact, running the 68008 at double rate would probably make the logic
simpler; I suspect the 8301 generates its cycles synchronously to the CPU
clock, based on the way the 68008 generates its signals. In retrospect, it
is a wonder that an asynchronous clock for the 68008 worked at all...

That being said: a board with a 68008 and some trivial logic to get the
correct addresses onto an Aurora would quite efficiently work as a QL with
256k of RAM - pretty much for any speed of 68008. A 68008FN and extra RAM,
plus adjustment of the apparent extended-res screen address, would also
give the user all of the Aurora extended graphics capability.

I have also constructed (ah, the good old days!) a 68020 add-on board. The
68020 is quite nice because it has an 8-bit bus mode. However, the timing
is different from the 68008 (faster, even at the same clock). I did a lot
of guesswork when doing that circuit, and more logic would have been needed
to get it to work right. I wish I had had the ability to do GALs at the
time; it would have made it all much simpler - as it was, I used a few
74xxx chips, but that wasn't enough, so I could not get it to work right.
The OS would come up (at that point I had my own RAM expansion which
shadowed the internal 128k to make things faster, so it did not rely on
data returned from the on-board RAM by the 8301), but the picture was
garbled and communication with the 8302 didn't work (no wonder, since it's
decoded by the 8301), so there was no reaction to F1/F2. I whipped this up
on a small piece of protoboard during the weekend, so it was no great loss
when I scrapped it. The whole thing was inspired by a piggyback board that
was used in the Thor 20 and 21 to add a 68020 to a QL motherboard (the
Thor 21 had a 68881 FPU added too). I would probably make short work of it
today, since now I have a logic analyzer and it would be very easy to see
what the 8301 actually does - if I had a standard QL handy. One interesting
thing which I found out was that a 

Re: [ql-users] ROM versions

2002-05-01 Thread ZN

On 01/05/02 at 19:40 Dave wrote:

However, the English PAL system is BETTER! ;P

Oh yes. Especially on a 100Hz TV :-) but a decent capture card and large
monitor will do fine.

 To summarise: nothing to spectacular would happen if the ROMs were
 exchanged.

Ow... I was hoping for a rift in the space-time continuum.

There may well be one by the time I'm done with the GF. That thing has so
many components in such a small space it may well collapse into itself
under its own gravity :-)

 Actually, it's the 8301 that limits things. The system will stop working
 due to read data from the on-board RAM not appearing at the correct time
 over about 9MHz IIRC - I used to run my QL faster using a separate

 What if the onboard RAM has been disarmed, and there is faster memory in
 its place? And there was a 5lb heatsink on the 8301 ;)

Surprisingly, the 8301 does not need a heatsink! It's eminently
overclockable. I don't remember what speed Branko used it at, but a 24MHz
crystal instead of the 15MHz one rings a bell. It's not the on-board RAM
that's the real problem - it's the 8301 being incapable of asynchronous
work with the 68008.
There are two ways to solve this:

a) Use an Aurora. It will work fine even if the bus runs at 16MHz instead
of the normal 7.5. (Tony F: if you are reading, any progress with Ron?)

b) Use external logic to synch a faster 68008 to the 8301. It's been done
before - Thor 20/21, GC, SGC. It helps if shadowing is used on the built-in
128k RAM. Using a 128k SRAM in addition to the 512k that I described in the
2-chip RAM expansion, plus some logic (a GAL would do great here), would
solve that.

Ah, so many interesting projects, so little time...

Nasta




Re: [ql-users] ROM versions

2002-05-01 Thread ZN

On 01/05/02 at 15:02 Phoebus Dokos wrote:

I have also constructed (ah, the good old days!) a 68020 add-on board...
I did a lot of guesswork when doing that circuit and more logic would
have been needed to get it to work right... I would probably make short
work of it today since now I have a logic analyzer and it would be very
easy to see what the 8301 actually does - if I had a standard QL handy.

I volunteer mine :-) You interested? (I will be gone in two weeks and I
have three QLs in Greece, so I won't miss it.)

Well, you are welcome to send it over. No need to have a US one, either.
I'll just hook it up to the video in on the PC, and that's multi-standard.
I'll have a QL in a window :-) Muahahaha (Mad scientist crazy laugh)

I also know that Branko B. had overclocked the whole thing... Worked just
fine, too.

That would be interesting to see...

Worked just fine, and appeared to be very stable. Nice picture on monitor
at 70Hz refresh :-) Not much heat either, since he only used one bank of
1Mbit RAM (today that would be two chips at most, then it was 8).

I've used a 68008FN to expand a QL myself. It was a board that took up to
3MB of RAM on 30-pin SIP modules. Hm, that reminds me - I may even have a
schematic diagram for this somewhere. It was a one-off I did for a friend
using perfboard + transformer-wire technology :-)

Nasta




Re: [ql-users] memory expansion standard ql

2002-04-30 Thread ZN

On 30/04/02 at 23:01 henk verbeek wrote:

 Does anyone know how to upgrade the memory of a standard (classic) ql ? I
 have searched the net but i didn't find a circuit or anything. Are there
 no do it yourself circuits available ?

Done it many times in many ways. By far the simplest is to find a 628512
(aka 614000) static RAM chip; it usually comes in a 32-pin surface-mount
package, but the regular DIP 32 is also available. This is 0.5MB in a
single very low power chip. In addition, you need two 100nF capacitors, one
74HC153 logic chip and one 1N4148 diode.

Here is the connection (use fixed width font)

             74HC153
           +--\__/--+
 GND ------|1     16|== +5V
 A19* -----|2     15|--- GND
 +5V ------|3     14|--- A18*
 DSL* --o--|4     13|--- GND
        +--|5     12|--o-- +5V
 +5V ------|6     11|--+
 RAM(A) ---|7     10|--- GND
 GND ======|8      9|--||-- DSMCL*
           +--------+  A  K

        +--\__/--+
 A18* --|1     32|== Vcc
 A16* --|2     31|-- A15*
 A14* --|3     30|-- A17*
 A12* --|4     29|-- RDWL*
  A7* --|5     28|-- A13*
  A6* --|6     27|-- A8*
  A5* --|7     26|-- A9*
  A4* --|8     25|-- A11*
  A3* --|9     24|-- RAM(A)
  A2* --|10    23|-- A10*
  A1* --|11    22|-- RAM(A)
  A0* --|12    21|-- D7*
  D0* --|13    20|-- D6*
  D1* --|14    19|-- D5*
  D2* --|15    18|-- D4*
 GND ===|16    17|-- D3*
        +--------+

* - signals from the expansion port. All but DSMCL can be found on the
68008 CPU; all but DSMCL, DSL, RDWL, A19, A18, A17, A16 can be found on the
ROM slot or either ROM socket. The ROM sockets have the advantage of having
the correct pinout if the bottom of the chip is aligned with the bottom of
the socket (socket pin 14 goes to chip pin 16), for all pins except RAM
pins 3, 22, 24, 29 and 30.

Here is the ROM chip socket for comparison (both sockets are wired the
same):

        +--\__/--+
 +5V ---|1     28|== +5V
 A12* --|2     27|-- A14*
  A7* --|3     26|-- A13*
  A6* --|4     25|-- A8*
  A5* --|5     24|-- A9*
  A4* --|6     23|-- A11*
  A3* --|7     22|-- ROMOEH*
  A2* --|8     21|-- A10*
  A1* --|9     20|-- A15*
  A0* --|10    19|-- D7*
  D0* --|11    18|-- D6*
  D1* --|12    17|-- D5*
  D2* --|13    16|-- D4*
 GND ===|14    15|-- D3*
        +--------+

If you want to put this RAM expansion into one of the ROM chip sockets
(plus the few extra wires you need), you can take both ROMs and put them
'piggyback', one onto the other, carefully solder the pins together (do
this with the pins facing up, so solder does not get onto the pins of the
chip that will plug into the socket), and plug the combination into either
of the two ROM sockets.

100nF capacitors go between the +5V and GND pins on the 74HC153 (pins 8 and
16) and on the RAM chip (pins 16 and 32). The RAM(A) points are all tied
together, i.e. pin 7 of the 74HC153 goes to pins 24 and 22 of the RAM chip.
The whole thing uses a negligible amount of current, and you can 'steal'
the +5V off pin 16 of the 74LS257 multiplexers or pin 20 of the 74LS245
buffer chip. The RAM is zero-wait and will go as fast as the CPU can go -
about twice as fast as the built-in 128k.
To make things easier to connect, you can connect the address lines (A??)
and data lines (D??) to the chip in any order, as long as you never mix
address and data.
The 628512 can sometimes be found on older hard drives from WD and Quantum,
820M or 1-2G capacity, normally near the IDE connector. These will be
surface mount as a rule. With these, one has to be careful because they
come either in the normal case (pinout as above) or the 'reverse bent' case
with different markings (the left and right sides of the pinout are swapped
around).

A variation of this circuit is possible where it is built on a small piece
of printed circuit board that plugs into the CPU socket; the CPU itself
then plugs onto that.

With some more logic, more RAM chips and a 68008FN CPU (in a PLCC 52 case
instead of the standard DIP 48), it's possible to add almost 4M of RAM, but
it will only work with some JS (and JS-based) ROMs and Minerva.

Nasta




Re: [ql-users] memory expansion standard ql

2002-04-30 Thread ZN

Can't believe I made an error in something that simple... but I did.
Correction:

It needs one more 1N4148 diode: RAM(A) --K||A-- DTACKL*

There :-)

Nasta





Re: [ql-users] USA 2002 QL Show, 1st June 2002

2002-04-28 Thread ZN

On 28/04/02 at 19:08 Marcel Kilgus wrote:

Urs König wrote: 
 - What other Europeans are considering attending the US QL show?

I'm guessing that includes me too :-)
I'll be there!

Nasta




Re: [ql-users] QeyMail question...

2002-04-23 Thread ZN

 By having a plugin filter rule system, we can also make emails that
 contain this is not spam, !! and (insert offensive words here) be
 directed straight to hell...

A few things not often found, but very useful to have in order to block
spam:
1) Delete messages without plain text (only HTML attachment)
2) Delete messages with over N recipients or forwarded to over N
recipients
3) Delete messages with 'undisclosed recipients'
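These three rules are simple enough to sketch; here's an illustrative Python version (the message dict layout and the threshold N are hypothetical stand-ins, not QeyMail's actual structures):

```python
# Hedged sketch of the three header-based spam rules listed above.
# The 'msg' dict structure is an invented stand-in, not QeyMail's API.

def is_spam(msg):
    """Return True if a message trips any of the three header rules."""
    # 1) No plain text part, only HTML
    if "text/plain" not in msg.get("content_types", []):
        return True
    # 2) Too many recipients (N is an arbitrary threshold here)
    N = 20
    if len(msg.get("recipients", [])) > N:
        return True
    # 3) 'undisclosed recipients' in the To: header
    if "undisclosed" in msg.get("to", "").lower():
        return True
    return False

msg = {"content_types": ["text/html"], "recipients": ["a@b"], "to": "a@b"}
print(is_spam(msg))  # True - an HTML-only message is flagged
```

The nice property of all three rules is that they only need headers, so they can run in the online mode before any message body is downloaded.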

I think a good solution here is to have two modes of operation: online and
offline - online, it will simply get sender/size and if you want to view
the message it will download the body. Offline mode, it will connect and
fetch all messages and delete them from the server. You can then read
them offline.

How about auto-online from offline:
Load headers, apply header based filters, then download what's left?

Nasta




Re: [ql-users] QeyMail question...

2002-04-23 Thread ZN

On 22/04/02 at 19:13 Dave wrote:

I'd give an example of the sort of SBASIC syntax I'd like, but someone
would post how you already do it ;)

ip$ = RESOLVE(site$)

Well, I wish S*basic had long integers (would be a bit interesting to find
a character to append to a name to signify long, though...), and maybe a
pointer operator ( of course :-) )...

But back to the topic at hand:

ip$ = RESOLVE(site$,[DNS ip$]) :-) just for the occasion when you want to
do a DNS search outside, or with different order than the DNS list
maintained by the system (this implies a DNS Thing...)
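Just to illustrate what a RESOLVE keyword would return, here is the same idea in Python's standard library (a hedged sketch only; the SBASIC syntax above is the actual proposal, and the optional DNS parameter is not modelled here):

```python
# Illustration only: what an SBASIC RESOLVE(site$) would return,
# using Python's stdlib resolver instead of a DNS Thing.
import socket

def resolve(site):
    """Return the IPv4 address string for a hostname."""
    return socket.gethostbyname(site)

print(resolve("localhost"))  # e.g. 127.0.0.1
```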

Nasta




Re: [ql-users] QeyMail question...

2002-04-23 Thread ZN

 For example, I've recently learned of the difficulty of moving files
 between machines. Headers are a wonderful thing.

There could always be a QL specific type defined... basically a regular
file with the header included, distinguished from the ones without the
header included by some suitable extension etc.
Kind of like our own MIME within a MIME... as long as sender and recipient
do the proper encode/decode, it would not be overly encumbering...
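A rough sketch of the encode/decode idea (Python; the 64-byte size matches a QDOS file header, but the 'QLHD' marker and the exact layout are invented here purely for illustration):

```python
# Sketch of a 'QL file with header included' wrap/unwrap.
# QDOS file headers are 64 bytes; the b'QLHD' marker is invented.
HEADER_LEN = 64
MARKER = b"QLHD"

def wrap(header, body):
    """Prepend the QDOS header to the file body, with a marker."""
    assert len(header) == HEADER_LEN
    return MARKER + header + body

def unwrap(blob):
    """Split a wrapped file back into (header, body)."""
    if not blob.startswith(MARKER):
        raise ValueError("not a header-included QL file")
    blob = blob[len(MARKER):]
    return blob[:HEADER_LEN], blob[HEADER_LEN:]

hdr = bytes(64)                   # dummy 64-byte header
h, b = unwrap(wrap(hdr, b"DATA"))
print(h == hdr and b == b"DATA")  # True
```

As long as both ends agree on the marker and extension, a file without the marker simply passes through untouched.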

Nasta




Re: [ql-users] 68040 to 68060 upgrade

2002-04-21 Thread ZN

On 21/04/02 at 12:53 Phoebus Dokos wrote:

I found a little adaptor that enables the upgrade from a 68040 to an
68060...

Which reminds me: Stuart Honeyball (= Mr. Miracle himself) planned to do
such an adaptor for the QXL but ultimately decided against it because he
could not find a source of 68060's at a reasonable price at the time. The
working name for it was the O-Daptor. I used to tell him if it was an
O'Daptor, he could say it was Irish :-)

Nasta




Re: [ql-users] Wee_Haa... Got a QXL!

2002-04-20 Thread ZN

Phoebus Dokos wrote: 

 but I was asking if you can assign a .win file (any QXL.WIN file,
 regardless of filename or disk - a la QPC :-).

The only way I know that works with QXL, assuming you are using DOS is to
use SUBST. Essentially, create a directory to emulate a drive, then do
SUBST X: directory (or was it directory X: ...not sure, it was a while ago)
where X: is an unused drive letter  K. Keep in mind that the letter to win
mapping seems to be hard-coded. So, D will be win2_ etc. If you already
have a D and can't put a QXL.win file on it (for instance if it's a CD
ROM), tough luck :-(

Nasta




Re: [ql-users] gccDhrystone 2.1

2002-04-20 Thread ZN

On 19/04/02 at 20:41 Phoebus Dokos wrote:

I am going to try and find a full blown 040 and also to push it up a tad
to see what gives :-)) (Yeah I know! Speed freak!).

I think I have one somewhere. I know I have a full 40MHz.
The key to QXL speed-up is a decent cooler on the GLUE chip. The 68040
never seems to be a problem, Motorola chips are eminently overclockable. Of
course, it too benefits from a cooler - nothing like the requirements of
current PC CPUs though!

Nasta




Re: [ql-users] gccDhrystone 2.1

2002-04-20 Thread ZN

On 20/04/02 at 02:12 Dexter wrote:

 QPC of course beats the QXL (any QXL unless a PCI version materialises, 
 -Knowing Dave that's not entirely impossible!-) by leaps and bounds in
 disk access and graphics.

Hehe, no thanks - that's one project I *don't* want! Anything involving 
PCI is a nightmare. So yes, it is entirely impossible. Maybe Natsa may
want
to explain just how much harder it is to initialise and run a PCI card...

Assuming you meant 'Nasta', well, it's a nightmare (nocturnal female
horse?) even describing everything that's involved, AND it would be a post
too long even for me. That should be enough for anyone :-)

Nasta




Re: [ql-users] gccDhrystone 2.1

2002-04-20 Thread ZN

On 21/04/02 at 02:05 Dave wrote:

 Natsa the Dr Evil version of Nasta

Nasat is the Dr. Evil version of Nasta, not Natsa.

Nasat

Mhuuuahahahaha :-)




Re: [ql-users] More on QL hardware?

2002-04-18 Thread ZN

On 17/04/02 at 20:24 Phoebus Dokos wrote:

Hello again,
here's stupid question no. 2.

No, actually good question!

Since I don't remember its specs but I do remember that most multisync 
monitors cannot go low enough for the QL to output on them...

Most, but not all.
QL uses 15625Hz Hsynch, 50Hz Vsynch (probably same Hsynch, 60Hz Vsynch for
the US version).
There are older multisynch monitors that will work:
NEC Multisynch up to model 3D (regular, +, 2, 2D, 3D). 3D will not be able
to stretch the screen all the way to the edge.
Sony Multiscan CPD1301, 1302, 1401, 1402, 1425. Used a 1402 for ages
(indestructible, one of my friends now uses it in a small video studio he
has), and I still have a 1425 with a blown high voltage transistor here,
that I have to fix. I keep it just in case I need to look at something
based on PAL or NTSC scan rates.
Sony also produces several RGB monitors intended for broadcast, which can
be found on Ebay on occasion, in screen sizes from 5 to 22 inches. And they
DO make ONE multisynch (something 2000, not following their usual naming) that
will do anything from 15625Hz to 80kHz horizontal - and it's black, too!
Several other manufacturers used to make monitors that could work with the
QL; usually these were among the first 15-inch monitors around, all at
'premier' prices. Translation: these days you find them second-hand for $30,
but they were built to last forever and manufacturers bent over backwards to
provide more functions than the next one in those days.

Is the QL compatible with CGA monitors (or was that only TTL and not
Analog?)

Yes, but VSYNCH needs to be inverted - CGA expects it negative, and it is
positive on the QL. This can frequently be done in the monitor.

If it is I just found a nice adapter from CGA to VGA that could take care 
of the rest (and for under 5 dollars too :-)

This is VERY interesting. Potentially could make QL (with cable adaptor)
work on VGA. For $5 you can try it and tell us :-)

Nasta




Re:[ql-users] More on QL hardware?

2002-04-18 Thread ZN

On 18/04/02 at 11:46 Phoebus Dokos wrote:

Where can you get the CGA-VGA adapter?

Here
http://www.pc-extras.com/prods/adcgav.html

Ah, don't bother. It only adapts signal levels - will not work unless the
multisynch monitor supports low synch frequency...

Nasta




Re: [ql-users] Another stupid question.... re Floppies

2002-04-17 Thread ZN

On 18/04/02 at 00:04 Dexter wrote:

 It is all explained in Keith Mitchell's hardware docs. Every other line 
 in the ribbon is an earth. By twisting lines 10 through 13 you are 
 swapping the drive 0 with the drive one signal with the earth line as a 
 pivot. Quite what the PC one does is not clear.

As I understand it, there are two drive select lines, allowing selection 
of one from 4 drives, so on a PC cable they flip those, and their grounds.

Actually, originally the Shugart standard (inherited from 8-inch drives) had 4
drive selects and one motor on signal. When drives still had jumpers, you
could choose which one of the 4 selects to use, and whether drive select
also implies motor on, motor on is only allowed on a selected drive, or the
drive will turn on the motor when the motor on signal is active, regardless
of select signals.

On the PC the drive was jumpered to use select 1. There are two motor on
and two drive select signals, but they are set to the actual 4 signal pins
of the standard, and actual selection has to be handled by twisting the
cable because that way the correct select and motor on appear where the
drive expects them. 

These days drives do not have jumpers any more. There are two selects and
two motor on's, the correct combination is achieved using a twist in the
wire (the unused select and motor on end up on unused pins on the drive,
that once used to be select 2, 3, 4).
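The twist can be modelled as a simple pin permutation (a Python sketch; that the twist reverses conductors 10 through 16, with the usual PC pin assignments for select and motor, is an assumption about the standard PC cable, not something stated above):

```python
# Model of the PC floppy cable twist: conductors 10-16 are reversed,
# so select/motor signals land on different pins for the end-of-cable drive.
def twisted(pin):
    """Map a controller-side conductor number to the drive-side pin after the twist."""
    if 10 <= pin <= 16:
        return 26 - pin   # 10<->16, 11<->15, 12<->14, 13 stays put
    return pin            # all other conductors run straight through

# On the usual PC assignment, the two selects (12 and 14) swap places,
# which is why both drives can be jumpered identically.
print(twisted(12), twisted(14), twisted(13))  # 14 12 13
```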

The twist for Miracle drives may actually be for SGC, so you can use the
right angle connector for drive 1 and 2 instead of 3 and 4.

Nasta




Re: [ql-users] Another stupid question.... re Floppies

2002-04-17 Thread ZN

On 17/04/02 at 23:50 Phoebus Dokos wrote:

[Floppy connectors]

 Hmm so two drives can work simultaneously (theoretically?)

Surprisingly, yes! In fact, the NEC 765 controller (the same core is used
on many PC based controller chips, including ones used on GC/SGC) can do
parallel head positioning by using multiple selection (for instance, if
drive 1 has to move 40 tracks and drive 2 60 tracks, both drives are
selected at the same time for 40 tracks and both move 40 tracks, then drive
2 is selected for the remaining 20) but AFAIK this was never used and does
not appear in later incarnations of the same FDC core, especially within
highly integrated IO chips.

 Wouldn't that affect the data transmission?... IIRC there's no way to
 distinguish where the data is coming from on the Shugart I/F

Correct. Data would be logically OR-ed (looking at the signals one would
expect AND-ed because they are active low, and if any data out is low, then
the controller sees low since all data outs on all drives are in parallel
and open collector), but the data is actually inverted. For all intents and
purposes, data would be corrupt. However, you CAN write data in parallel.
In theory, you could do the write portion of format for both (or more)
drives at the same time.
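The parallel positioning trick reduces to simple arithmetic; a sketch (Python, using the 40/60 track example from the text):

```python
# Sketch of the NEC 765 parallel-seek trick described above:
# step both drives together for the common distance, then finish the rest.
def parallel_seek_plan(steps_a, steps_b):
    """Return (steps with both drives selected, extra steps, drive finishing alone)."""
    common = min(steps_a, steps_b)
    extra = abs(steps_a - steps_b)
    alone = "A" if steps_a > steps_b else "B" if steps_b > steps_a else None
    return common, extra, alone

print(parallel_seek_plan(40, 60))  # (40, 20, 'B') - matches the example above
```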

IIRC newer versions of the 765 controller have encoded select bits in the
registers (two, to decode to 4 drives), but the chip itself most certainly
has 4 separate outputs. Now, depending on the chip itself, these can be
used in different ways (some can even have the usage selected based on
control registers in the chip). On the chip used on GC/SGC, they can be
set independently so any combination is possible in theory. On a more
recent IO chip with many functions built-in, due to the limitation on
number of pins, usually PC specific DS1, DS2 and MOTOR1 MOTOR2 are
present, and they can only be set in specific patterns.

 So theoretically with the right twist you can access the drives even if 
 they are prejumpered for DS1 right? (which brings me back to these IBM ED

 drives we talked about a while ago).

Correct. But it's a bit more complex than that. Some drives detect density
and signal to the controller and their own internal circuits, some expect
the controller to select density for them (using mode pins) and (maybe)
pass the density encoding to the controller. Also, there is the disk change
signal, which again may or may not be generated by the drive or expected by
the controller. This all used to be set using jumpers. Of course, where
there were jumpers, you had the problem of figuring out which ones do what,
so a lot of trial and error may have been needed to make it work. Now that
there are none, the problem is worse because you have no idea what it's set
to, and no way to change it on the drive, you can only manipulate signals
on the cable.

BTW density and/or mode pins (don't remember off the top of my head) are
quite non-standardly set to odd pin numbers, which are, by the standard, all
ground, so using a drive that expects control on them on the QL will always
result in whatever density is set by those pins being tied to ground,
unless appropriate lines on the cable are cut. AFAIK QL controllers do not
have density pins (they expect the drive to take care of itself) and
density detect is by trial and error (ditto maximum head positioning speed,
hence things like FLP_JIGGLE).

Also, some combinations of density as generated by the controller and as
expected by the drive based on what it finds from the holes on the disk,
will not work. In general HD drives often, and ED drives always require the
detect holes on the disk to match the density the controller is trying to
use. This is important when formatting disks to a lower density or in a
higher density drive. Normally, formatting SD in a DD drive will work just
fine because it's just the encoding of data that changes - the data rate
does not. Formatting DD in a HD drive without plugging the HD detect hole
on the disk may be just luck - for HD the data rate and some other
parameters are different. In this case the 'DD' disk (HD formatted to DD)
can be read in a HD drive, but is unreliable or completely unreadable in a
real DD drive. HD disks formatted to DD need to have the HD detect hole on
the disk plugged up to work right. Formatting ED drives as anything but ED
without appropriately manipulating the holes on the disk will make them
unreadable for sure on non-ED drives, and unreliable on ED drives. This is
because ED uses a mechanically different method of recording, so the
magnetic patterns on the disk end up being different, although the
electrical encoding may be as intended for a lower density.

Aside: Phoebus, I would sure like to get the ED drives working because
they were made while Christ was still walking this earth so the quality is
still there, unlike today's drives made out of recycled Chinese metal
children's toys...

BTW: Nasta, as it is getting a little late and I am kinda lazy 

Re: [ql-users] CF Hot Removable adapters

2002-04-15 Thread ZN

On 15/04/02 at 05:40 Dexter wrote:

 Unless you two know something I don't (like how to locally change the
 Planck constant, speed of light, gravitational constant...) that's as
 fast as anything will currently go on native QL hardware, give or take
 a few 10s of K/s...

What?!?!?
You don't know how to change the speed of light?
I'm disappointed in you, Nasta!

Sadly, I'm not Q. If I were, I'd wave-my-hand-ed myself to greener pastures
already.
(and let's not even start about what I'd do to Micro$oft)

Seriously though, my point was that it isn't necessary to hook up a CF 
card to an IDE interface to use it - there are ways to hook it up directly

to the buss.

Through the ROM slot it would not be that different than it already is.

 That way, you can do direct reads and writes without having to wait and 
 poll to check it's finished the previous operation. That would be
quicker.

OK, there is ambiguity here.

There is no reason why Qubide should necessarily use polling (in the sense
it uses interrupts), the reason it does what it does the way it does is
because that was the only driver available at the time. Now that SMSQ/E
source may become available, we might be able to convert the win driver
already done by TT for Miracle, and expanded for QXL, QPC, Q40/60 to work
on Qubide. Even the RomDisq driver could be converted to handle a CF
hanging off an IDE interface, but it would have to be VERY considerably
altered.

Polling as a concept is unavoidable for anything on the ROM slot, and there
is a reason why RomDisq is a ROM slot device, so that it can be easily
removed and plugged in elsewhere. Reason: no interrupt line on ROM slot.
This is also proof that CF on Qubide could work just as fast (strictly
speaking, slower due to larger overhead of a task file, but this would be
imperceptible). Setting up the task file and doing its little protocol is
only writing and reading a couple of bytes, nothing compared to
transferring 8k or so of data after that. Remember, next to no 'access
time' on CF media. No interrupt on Qubide, but no interrupt on RD either.
 
A wait on write is also unavoidable for any type of Flash media due to
characteristics of the media itself. Reading from the Flash takes
nanoseconds, writing may extend to the order of a few milliseconds.
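The wait-on-write can be sketched as a poll loop (Python; the Flash class below is a toy model with an arbitrary busy count, not a real driver):

```python
# Sketch of the unavoidable wait-on-write for Flash media: reads return
# immediately, writes must be polled until the device signals ready.
class Flash:
    """Toy model of a Flash device, not a real driver."""
    def __init__(self):
        self.busy_polls = 0
    def write(self, data):
        self.busy_polls = 3      # pretend the write needs a few poll cycles
    def ready(self):
        if self.busy_polls:
            self.busy_polls -= 1
            return False
        return True

def write_and_wait(dev, data):
    """Issue a write, then poll the status until the device is ready."""
    dev.write(data)
    polls = 0
    while not dev.ready():       # this loop is the 'wait on write'
        polls += 1
    return polls

print(write_and_wait(Flash(), b"block"))  # 3 polls before ready
```

Reads need no such loop, which is exactly why the write side dominates Flash driver design.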

Either way, the point that a version two of RomDisq could be based around 
CF instead of traditional flash, for price and size considerations, still 
stands ;)

True. If you read the specs though, you will notice that the different CF
modes really are not THAT different. Even in the 'memory' mode it's not
just 'flat' memory, but bank-switch, and the polling on write becomes more
explicit, similar to actual flash chips. The good thing about CF, even in
IDE mode, is that it MUST implement a byte-wide data bus (unlike hard drives,
which do not have to, and VERY rarely do), although it's natively a 16-bit
wide device. Interfacing it to the QL bus on its own would be nearly
trivial, as long as it's directly coupled to it (and that means NO cable to
the front of a box like in the case of Qubide!!!). A RomDisq II (maybe
'superRomDisq') would be very similar to what it is now, except without the
actual Flash chips of course, but with CF socket instead. Things would get
a bit (but only a bit) more interesting if the CF card was hot-swap, the
biggest challenge for that would actually be mechanical stability. Another
challenge would be having the data organization such that data could be
read on other CF capable systems!

Nasta




Re: [ql-users] CF Hot Removable adapters

2002-04-14 Thread ZN

On 15/04/02 at 03:44 Dexter wrote:

The readers are passive, and some CF media support ATA-66, and while these

CF readers aren't designed for ATA-66, a new version of RomDisq that uses 
CF media instead of traditional flash RAM could take advantage of the 
extra speed ;)

Once again, I point out that the bus speed limit is 1.8MB/sec which is
pretty much overwhelmed by simple PIO2... Unless you two know something I
don't (like how to locally change the Planck constant, speed of light,
gravitational constant...) that's as fast as anything will currently go on
native QL hardware, give or take a few 10s of K/s...

Nasta

sigh...




Re: [ql-users] QXL-Cards

2002-04-11 Thread ZN

On 4/11/02 at 1:22 PM Phoebus Dokos wrote:

 As for US QL-ers wanting to buy a QXL rather than get one free (?)
 I have a 8MB 33MHz one for sale. $100 buys it, and the money goes
 to my GF fund. Oh, and I just remembered: I have a 40MHz 68040 that
 would work on it too...

 Is that the same QXL (and 68040) that was featured in that odyssey of
 yours in IQLR :-)

Yes.

 IIRC it worked fine up to 35 MHz (or did you fix that?)

Oh, it runs fine at 40 as well (with the 40MHz CPU), when I was writing the
article, I only had oscillators going up to 70MHz (QXL uses a 'canned'
oscillator at twice the CPU clock). I had a 75MHz but that was ECL rather
than TTL and my makeshift ECL to TTL converter didn't really work very
well, which was the reason for problems above 35MHz.
The real stopper for the QXL is the Altera 1810 programmable logic chip. It
runs hot even doing nothing (hotter than CPU) and like all CMOS based
logic, the hotter it gets, the slower it can run. I have a nice little
cooler on it but I guess it would go higher with a different one (say one
that would extend over the 68040 as well and have a fan fitted). These days
getting coolers is really easy, a 'PC Chipset' cooler is just fine for the
CPU and overkill for the Altera chip.

Nasta




Re: [ql-users] Aurora problems

2002-04-10 Thread ZN

On 09/04/02 at 12:23 Phoebus Dokos wrote:

Hi all,
I am turning to the list for any ideas...
As you might have guessed at last I have a working (well sort of) Aurora 
setup... However I experience a couple of problems, mainly with
resolutions.
...
Specifically, I give DISP_SIZE 640, 480 but nothing happens (still
512x256).

OK, this has been solved.
The solution has two parts: the usual one, and the 'read the manual'
(oops, sorry, don't think it's in there!) part.

The usual part is taking out the 8302 ULA and cleaning its pins, which for
some reason have a tendency to oxidise. They do this even when left alone
in a drawer - I have three spares and they all had their pins turn black
while sitting peacefully in my QL spares box, in their antistatic rail.

The ...other part is quite simple in retrospect: SMSQ/E was not detecting
the Aurora, hence always thought it was working with regular QL compatible
hardware. How? Simple: the problem has nothing to do with Aurora or SMSQ/E
- rather with the address setting of the Qubide, also present in the
system.

IMPORTANT!!!
When an Aurora is used, only the following two base address settings are
legal:
0C000 (i.e. ROM slot)
FC000 (top of IO area)

Obviously, if you are using the Aurora with a GC for any reason, there is
no IO area so you only have the 0C000 address to use. If this is set, in
either case the ROM slot will be disabled, and anything plugged into the
slot will be 'invisible'. At present, there is no way around this problem
save for using a SGC so the address can be set to FC000.

So, why did SMSQ/E fail in changing resolutions? Simple: C0000 to FBFFF
(this appears as 4C0000 to 4FBFFF on SGC) are used for the Aurora screen
RAM. Anything else set to any part within the same address range (like a
Qubide, for instance) will result in Aurora screen RAM test fail during
initialization of the Aurora screen driver, and the system will revert to
emulation mode. ALL Aurora extended graphics support, including DISP_SIZE
will be switched off. There may be other consequences, such as superHermes
not being initialized correctly.

Users that start the system with a JM ROM have a unique problem here
because this ROM will not initialize any IO peripherals unless they are at
0C000 or C0000. AFAIK, the GC/SGC software corrects this, so there should
be no problem. SMSQ/E also corrects it. Minerva also looks for ROMs at two
alternative addresses, 10000 and 14000. If you are using GC/SGC and Qubide,
DO NOT attempt to use these addresses because GC/SGC do not support them
and Minerva will not find a Qubide at these addresses even if everything
'should' be just fine.

If you have a ROMdisq, and use SMSQ/E, you can set the Qubide to FC000,
load SMSQ/E from ROMdisq, at which point SMSQ/E will correctly initialize
Qubide. Alternatively, you can use the LRESPR-able version of Qubide or
figure out the Qubide init routine address from its ROM header and just
CALL it 'by hand'.

Nasta

PS can't believe I'm doing Qubide support after all these years!




Re: [ql-users] QL Forum

2002-04-10 Thread ZN

On 08/04/02 at 21:26 Dexter wrote:

 However a European PSU will work with a US QL if you use a step up 
 transformer...

I've seen transformer winding kits up to the job for $20 in a local parts 
outlet. Could be fun - been ages since I last wound a custom transformer 
:o)

Oh, for heaven's sake, why would you need a custom transformer? Step-up's
are available in every shape and size from many places. Also, with dozens
and dozens of 'old stuff' distributors in the US I'm sure you can find any
type of transformer you care to imagine (www.alltronics.com, www.meci.com,
www.jameco.com, etc, etc)

 P.S. Damn that 6 Gb drive takes forever to format with QubIDE ;-)

How long would the 80 gigger I just bought take then? Hopefully, it'll be 
quicker on the SuperIDE :o)

Fraid not, at least not for a while. Nothing to do with xxxIDE, but with
the speed of the QL's 8-bit bus. The peak bandwidth is only 1.8MB/sec and
even a bit less with GC/SGC. All of this really to cater for the 8301 video
ULA. Even with Aurora, which is capable of about 5.5MB/s things get stuck
at 1.8 because the GC/SGC deliberately slow down when communicating through
the QL bus, just in case there is a 8301 somewhere - even if it's in the
spare parts bin :-)
GF's IO protocol is software-customizable, and can push the 8-bit bus very
far (probably too far) - in theory, up to 12MB/sec but that's still only
ATAPI PIO3 or so, IF signal integrity on the bus remains sufficient.

Nasta




Re: [ql-users] OT: Clive Sinclair working for the French??

2002-04-09 Thread ZN

On 09/04/02 at 17:49 Tony Firshman wrote:

 Clive Sinclair working for the French??
 Just that I saw a TV advert last night for a Citreon C5

Citroen C5 has nothing to do with Sinclair, though I am sure their
marketing department must have been caught unaware by the history of that
'name' in the UK.
...you get to preach that there are better ways to drive around on 4
(and actually, even 3 - some will work without one rear) wheels!

Oddly enough on the way back from the recent Eindhoven QL show, a
Citroen in front of us had a flat tyre and diesel was pouring out of it.
I pumped it up for him - but I see he could have driven without it (8-)#

If it's one with a hydropneumatic suspension, yes, it can go without one of
the rear wheels, if the suspension is set to high (although this is really
only for emergency). The suspension is self-leveling and it will try to do
that even with no wheels - most of the weight being in the front helps with
the 3 wheel situation. I've done it with the one I used to have for a bet
(well, they wouldn't believe me until I showed them!). Unfortunately, this
ability can also be a detriment. There is hardly any difference in driving
with one of the rear tires completely flat, unless the car is substantially
loaded, and even then it's not any great problem. If your hearing isn't so
good, you may end up with a burning flat tire...

As for the state of the car you mention, this is not surprising. Most
Citroens, especially with hydraulic suspension get to be worn into the
ground rather than serviced. The service outlets are its biggest enemy and
they are expensive, so people just keep driving on... I've had my hands
(head, feet, body) in/under/over quite a few, the stories I could tell...
Contrary to popular opinion, they are really not that complicated, just
very different (sounds familiar? Almost on topic here :-) ).

Nasta




Re: [ql-users] QL Forum

2002-04-05 Thread ZN

*** REPLY SEPARATOR  ***

On 06/04/02 at 02:50 Dexter wrote:

[Developer forum]

 Anyway, it's already a failed idea, so I'll work on something else.

Don't pronounce it dead yet... there are a couple of new topics.

 some advertising material for the Qeyboard :o)

By all means. You've been very quiet lately (I know, pot, kettle, black,
etc...)

Nasta





Re: [ql-users] Moving memory blocks

2002-03-30 Thread ZN

On 28/03/02 at 19:28 Phoebus Dokos wrote:

 Marcel tells me the screen base can change esp. on QPC - I have no
 info on Qx0's and the Aurora is not the case here)

On the Aurora, once it's running in non-emulated mode (extended graphics
not QL emulation, that is), the base stays at the same place at all times,
for all modes. The line length (the offset to add to an address to get to
the pixel immediately below the currently addressed one) changes from 256
bytes for the 4 and 8 color modes to 512 bytes for the 16 and 256 color modes.
IIRC the first part is true for Qx0 as well, not sure what happens to line
length. 
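The fixed-base addressing can be sketched as follows (Python; only the 256/512-byte line lengths come from the text above, the base value used in the example is hypothetical):

```python
# Sketch of the fixed-base pixel addressing described above: the Aurora
# screen base never moves, only the line length changes with the mode.
LINE_LEN = {4: 256, 8: 256, 16: 512, 256: 512}   # colors -> bytes per line

def pixel_addr(base, colors, x_byte_offset, y):
    """Address of a pixel: base + y * line_length + byte offset within the line."""
    return base + y * LINE_LEN[colors] + x_byte_offset

# Moving one pixel down is just adding the line length (base is hypothetical):
a = pixel_addr(0x20000, 256, 10, 5)
print(pixel_addr(0x20000, 256, 10, 6) - a)  # 512
```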

As Joachim said, such a move operation should be done as an atomic
operation (iow.xtop), so that you only have to get the screen base
once at the beginning (which you have to do anyway). Outside of
iow.xtop it can move, yes.

Phoebus, you should look up 'atomic' as pertains to the QL :-)
The point is not to detect a screen address move, rather to prevent it
happening mid-way through an area move. To ensure this, the area move must
be made atomic, hence not interruptible by any job or task that would
result in a screen base move. Note that this does NOT necessarily mean
disabling the interrupts...

Why not just add an offset to the already calculated address?

True too... see, I didn't think about that, as moving irregular areas was
stuck in my mind for some reason... So yeah, both you and Nasta must be
right :-)

It's the same even for irregular areas, just the offset varies. Your lookup
table, or rather, list approach is valid for irregular areas but it
competes with a masking plane approach, where a one-bit deep bitmap is used
to provide a 'mask' - where a bit is 1, the pixel has to be changed, where
it's 0, it stays the same. The advantage of the latter is that any type of
mask shape uses exactly the same, and pre-determined, masking field.
However, when only a few bits need to change or only a few bits need to
remain unchanged, this concept wastes memory.
A list/table concept is better for 'sparse' areas which nevertheless have a
relatively simple shape, as the list size increases with shape complexity
rather than size. If the shape needs to change dynamically or even worse,
algorithmically, the size of the list/table can be unpredictable, and cause
all sorts of problems with memory management.
Surprisingly, there is a middle way, a variation to an image compression
algorithm - a recursive tile tree. What this does is divide the area into
tiles, then the tiles into smaller tiles, etc, until you get to pixels. For
each division stage there is a bitmap showing which tiles need to be
changed, and then for each tile, which subtiles of it need to be changed,
etc, down to the pixel level. While the size of the total tree cannot be
predicted for a dynamic system, the minimum and maximum size it can ever be
are easily predictable. Using precalculated values for tile offsets speeds
up the process considerably, but its recursive nature is not extremely well
suited for speed.
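A minimal sketch of such a tile tree (Python; storing the tree bottom-up as one dirty-tile set per level is one possible layout, not the only one):

```python
# Minimal sketch of the recursive tile tree: one bitmap per level saying
# which tiles contain changed pixels, built up from the pixel-level mask.
def build_tree(mask, size):
    """mask: set of (x, y) changed pixels on a size x size area (size a power of 2).
    Returns a list of per-level sets of dirty tile coordinates."""
    levels = [set(mask)]                 # level 0: the individual pixels
    tile = 2
    while tile <= size:
        # a tile is dirty if any of its 2x2 sub-tiles on the finer level is dirty
        levels.append({(x // 2, y // 2) for (x, y) in levels[-1]})
        tile *= 2
    return levels                        # last level: one tile covering everything

levels = build_tree({(0, 0), (7, 7)}, 8)
print([len(l) for l in levels])  # [2, 2, 2, 1] - dirty tiles per level
```

Walking the tree top-down skips whole clean tiles without ever touching their pixels, which is where the saving over a flat mask comes from.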

Wrong premise - wrong conclusion. You only see 2 multiplication in my
whole code above.

Exactly, though things like multiplication and division should really be
avoided if possible, unless they can be reduced to shifting. OTOH, on some
CPUs multiplication only takes a single clock so it may not really be a
problem.

Right but in any case, wouldn't a continuous access to a table in memory
be cached (as you say elsewhere?) thus increasing access significantly?

NO. And I fully mean the capital letters here :-)
A cache only has any use if data within a small 'distance' from a given
address is accessed repetitively, and this distance should also ideally be
smaller than the cache size. Perusing a list for read-only-once per
algorithm pass will just result in useful, frequently accessed stuff in the
cache being flushed for data that will only be needed once, and which is
larger than the cache size.
All this does depend on the cache policy used; however, for a stricter policy,
it would only mean that useful data already in the cache will not be replaced
with something that will be used only once, so the reading of the table
itself stays slow.

In any case this whole question was mainly to see which method would be 
more efficient speed wise.

There is very little doubt as to which one would be the fastest -
especially in assembler where the needed operations are trivial, and also
may implicitly happen/overlap with other operations.
Also, it pays to have the data aligned to long word or even four long word
boundary on CPUs with 32-bit bus. So, if you are transferring something to
an odd numbered X coordinate, it pays to do a word move for the first
pixel, then proceed with as many long moves, and possibly finish with
another word if the number of transferred pixels is even.
Although 68k CPUs do not make this a requirement, if you write a long word
to an even but not long-word-aligned address, the CPU will perform two word
writes, the first to
write the LSW in long 0, then second to write MSW in long 1 - because 

Re: [ql-users] Moving memory blocks

2002-03-29 Thread ZN

On 29/03/02 at 01:38 Marcel Kilgus wrote:

In SBasic everything is f...airly slow
 ^

Good save there :-)

Nasta




Re: [ql-users] Moving memory blocks

2002-03-28 Thread ZN

On 28/03/02 at 13:37 Phoebus Dokos wrote:

Hi all,
I was wondering if anyone has tried moving screen memory around using a 
lookup table instead of calculating the memory position of each pixel
every time...

The question is: what is so difficult about calculating a pixel position?
And, more importantly, where did you get the idea that rectangular
graphical objects are moved around (or indeed bitblt operations are
performed) pixel by pixel? If at all possible, you ALWAYS move the maximum
unit of data the CPU can move, which in this case is a long word, or even
several long words on systems capable of loop mode and burst access. In
the case of non-aligned pixels, the aligned and non-aligned parts are
handled separately.
Also, pixels to the right are a constant bit number apart, which means
calculating successive pixel addresses is trivial. Pixels down are also
easily addressed by adding a line length offset.
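The arithmetic the post calls trivial can be spelled out in a few lines of
C (the 512-pixel line width and 2 bytes per pixel below are illustrative
assumptions, not fixed QL values):

```c
#include <stdint.h>

/* A pixel's byte address is just base + y * line_bytes + x * bytes/pixel. */
enum { BYTES_PER_PIXEL = 2, LINE_BYTES = 512 * BYTES_PER_PIXEL };

static uintptr_t pixel_addr(uintptr_t base, unsigned x, unsigned y)
{
    return base + (uintptr_t)y * LINE_BYTES
                + (uintptr_t)x * BYTES_PER_PIXEL;
}
/* Note the constant strides: the next pixel to the right is always
 * BYTES_PER_PIXEL away, the pixel below always LINE_BYTES away, so a
 * loop over a rectangle only ever adds constants - no multiply per
 * pixel, and certainly no lookup table. */
```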

That table would be an X (the x coordinate) by Y (the y coordinate) and in
each x by y position, the screen memory address that this corresponds to
would be contained (and maybe the alternate memory location for
swapping)...

OK, for a 32-bit pixel address, and a max 1024x1024 screen a table takes
4MB. Two alternate addresses, 8MB. Assuming an address is an offset to a
fixed screen address, 20 bits are needed to describe it, which is not a
multiple of any convenient CPU data format (byte, word, long), so we are
now into calculating positions in a table and masking off excess bits -
sounds familiar? Just change positions in table with positions on screen.
The table is still 2.5MB. If we go the other way around, and map pixels of
the graphical object rather than the screen, each graphical object would
get 4 bytes of a table for each pixel - not counting the pixel itself - for
a 32-bit address, 3 bytes for a 24-bit address with problems on access, and
finally 2.5 bytes for a 20 bit address with even bigger problems on access.
Considering that the deepest pixel is currently 2 bytes, it's at the very
least a 125% inflation in size of any object.

All of these are completely trivial calculations. I am sure this concept
could therefore have been evaluated for practicality (and discarded!)
before proposing it to the general public.
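The memory-cost arithmetic above is easy to check (a sketch; the helper
name is mine):

```c
/* Bytes needed for a lookup table of `pixels` entries at `bits` bits
 * per entry, assuming perfect packing with no padding. */
static long table_bytes(long pixels, int bits)
{
    return pixels * bits / 8;
}
/* 1024x1024 pixels at a full 32-bit address each: 4 MB.
 * The same table packed down to 20-bit offsets:    2.5 MB - and the
 * 20-bit entries no longer fall on any convenient byte/word/long
 * boundary, so every access needs shifting and masking anyway. */
```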

Surely this wastes memory

This is about the same understatement as saying you are slightly hot while
standing in the middle of a bonfire.

 but I have the feeling that this could SUBSTANTIALLY accelerate bitblt

Not even the slightest - accessing anything outside the CPU is many times
slower than calculating with values in registers. This is why you keep your
accesses to memory as limited as possible, and when you make them, you move
as much data as possible in a single access.

Nasta




Re: [ql-users] Drawing in the save area (SMSQ improvement)

2002-03-24 Thread ZN

On 3/18/02 at 9:14 AM Tony Firshman wrote:

 a) 'Regular' memory is generally quite substantially faster
 than screen memory

 Wasn't it 30%?

I forgot to answer this one - actually, it's closer to 50% when compared to
true no-wait-state RAM on a standard QL.

Things quickly get worse once any sort of 'accelerator' is fitted. On the
GC and SGC, memory shadowing is used, so only writes are generated into
the screen RAM ($2-$2 if both screens are enabled). Since the CPU is so
much faster, the slow screen RAM appears that much slower - the efficiency
is 12.5% for regular RAM writes on the GC, and about 4% on the SGC - reads
proceed at the full 100% in both cases. Based on the logic definitions for
the GF, it is a bit more difficult to calculate because of extra tricks
used, but for successive writes the efficiency is 1% AT MOST, even running
the QL end of things faster than standard - less than 0.5% on average.

All of this is simultaneously better and worse for Aurora. Aurora video RAM
accesses are nearly zero wait, so there is no speed penalty compared to
_standard_QL_ - this means write efficiency for GC and SGC when accessing
SCR0 and SCR1 addresses, is doubled. In theory, the increase could be
fivefold if the GC/SGC were aware of there being an Aurora, and not
thinking they are still running connected to a QL motherboard with a ZX8301
ULA on it. And that's the problem. When the extended resolutions/modes are
used with the SGC, it is not aware that the Aurora uses its IO expansion
area as screen memory. Because of this, the SGC does not use any memory
shadowing for the extended screen memory, and it performs reads from it as
well as writes. As a consequence, operations with extended resolutions on
Aurora are actually slower when a lot of reading is required from the
screen memory, such as scroll/pan. While the write efficiency is increased
from 4% to 8%, read efficiency is reduced from 100% for the shadowed SCR0
and SCR1 addresses to 8%. This penalty would disappear if the drivers
performed all their operations in the save areas. The screen sweep task is
write-only to the screen, so there would be no more read penalty (the
write penalty unfortunately must remain).

Low efficiency both for read and write would be completely unacceptable
with GF, so it implements a few extra tricks, and a flexible shadowing
mechanism. It also uses the full Aurora bandwidth (about 2.5x compared to
an Aurora/SGC combo). Nevertheless, write efficiency still remains at 5% at
most - the 68060 is just so fast!

 b) The address of the screen memory can really be anywhere,
 only the 'sweeper' task needs to know where.

How does the second screen concept fit into all this?

Not directly, but it could. It would enable updates to be performed into
the screen that is currently not visible; once done, the screens would be
'flipped' - this would prevent flicker or frame shear, but with a task
devoted to screen updates, there are other mechanisms that can be used for
the same effect.
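The two-screen 'flip' idea can be sketched in a few lines of C (all names
here are hypothetical; the call that tells the hardware which buffer to
scan out is left as a comment because it is entirely platform-specific):

```c
#include <stdint.h>

/* Double buffering: draw into the buffer that is not visible, then swap. */
typedef struct {
    uint8_t *buf[2];   /* two full screen buffers            */
    int      visible;  /* index of the buffer being scanned  */
} flipper_t;

/* Drawing always targets the hidden buffer... */
static uint8_t *draw_target(flipper_t *f) { return f->buf[f->visible ^ 1]; }

/* ...and once a frame is complete, the roles swap atomically. */
static void flip(flipper_t *f)
{
    f->visible ^= 1;
    /* tell_hardware(f->buf[f->visible]);  -- platform-specific hook */
}
```

Because the swap is a single index change, the viewer never sees a
half-drawn frame - which is exactly the flicker/shear prevention described
above.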

The independence of the actual screen address from the drivers is a great
plus for any expansion of hardware or even adding extra features to
emulators. The reason is simple - the original two screens are already at
the bottom of usable memory and cannot expand without moving the system
variables up, which has already been proven to be a problem. Because of
this, any new hardware has to be located in some kind of IO area above the
highest usable RAM addresses. As RAM increases, these tend to move. For
instance, on a regular QL, that's the top 256k out of the 1M address space
($C and up). On the SGC, there is 4M of RAM, which means what used to be
the IO area is now RAM, and the IO area is now the top 256k of the 5th
megabyte, $4C to $4F - significant because with an Aurora, this is used as
the screen area. The same thing occurs on the Q40/60, but this time the
screen has a dedicated area at very high addresses. On the GF even the
original SCR0/1 and the ROM slot are moved up to very high addresses,
which however depend on what shadowing and caching option is chosen for
the screen RAM. The move significantly simplifies logic and allows
selectable caching, shadowing and serialization (nothing to do with serial
ports!) options in the first place.

With the new concept of screen drivers, the screen address changes in ONE
place only, as do all the other parameters that would otherwise have to be
changed in the drivers, and sometimes even in the applications.

Nasta




Re: [ql-users] Q40 to ql transfer

2002-03-21 Thread ZN

 I mentioned that I did not know whether or not the Q40 had the NET
 ports. THAT is why I suggested that a connection between the Q40's
 serial port and the QL's network port might work ...

 Al

It doesn't really matter why it was suggested, because it wouldn't work
(not without serious extra stuff) - and furthermore, it would be rather
odd attempting it with two serial ports only a few cm apart. Definitely
the most logical way to connect is between two ports of the same kind!

I understand the point was probably (and I'm guessing because that
certainly wasn't what was written) the avoidance of the 'odd' QL serial
connectors. I always need to remind myself about this because I changed
mine for DB9 within a week of buying my (first) QL anyway :-)

The net port and the serial port on the QL have no internal connections.
However, IIRC the internal circuits of the 8302 ULA which handles both
serial transmit and net receive/transmit only use one serial output,
internally multiplexed, to provide serial output on the serial ports (both)
and on the net port, so only one can transmit at a time (I 'divined' this a
long time ago from a definition file of the 8302 registers that came with
the GST assembler... don't ask me for details, those neurons have been
reused).
Observing the ports in question (with not entirely adequate equipment!)
may have resulted in the impression that the signals are the same - they
are not. Also, the net port only has two pins, really - one is ground and
the other is a bidirectional input/output (it is however externally
combined out of separate in and out pins on the 8302 ULA). The only reason
why two connectors are provided is to avoid the need for a T splice and
terminator, which would otherwise be needed depending on the cabling (it's
coax ethernet in this respect). The idea is sound, but the execution, due
to substandard quality connectors, is not :-( I'm sure Tony F. can
elaborate!

Nasta




Re: [ql-users] Drawing in the save area (SMSQ improvement)

2002-03-17 Thread ZN

Joachim wrote:

 As I already mentioned, I am willing to volunteer for the following
 changes in SMSQ.

- have all windows draw their contents in the save area
- background update of windows (same mechanism)

Yes! :-)

In general I propose the following changes.
- Applications should not draw onscreen, but in the save area. When they
properly use the iow.xtop call as exists at the moment, nothing needs to
be done, except that the use of iow.xtop will no longer be necessary (the
call now gives the address with the screen base, so that makes things
easy). In practice the applications which work in extended resolutions
will not need any changes.

Exactly.
Also, emulation for applications that need to use the original screen
areas would actually be easier to implement. They would effectively get
one save area that just happened to be at $2.
Further important side effects:
a) 'Regular' memory is generally quite substantially faster than screen
memory
b) The address of the screen memory can really be anywhere, only the
'sweeper' task needs to know where.

- When switching jobs, the picture on screen does not have to be copied to
the save area.

But it has to be copied from the save area to the screen. This does make
things slower but also FAR more flexible. The most important side effect is
that the actual hardware organization of the screen doesn't need to have
anything in common with the format of the save area - although of course,
if it does, it can make things a bit faster.
The great advantage is that:
c) Applications can be written to be compatible across platforms
regardless of the actual hardware - even using modes that do not yet exist
in hardware
d) Extra operations can be performed when a save area is copied to the
actual screen, such as palette operations to convert shallower color depth
save areas to a deeper screen, or dithering to convert a deeper save area
to a shallower screen

 - A scheduler task needs to be written. This should run through the
 window pile and for all windows check whether the contents have changed,
 and update the lines where the changes took place. In principle it should
 be straightforward to write the routines which do this under the
 assumption that everything needs to be updated. The drawing routines
 should be adjusted to set a bit to mark the line in the save area which
 has changed. (Optionally, it might also be good to mark the save area as
 changed.) If these bits are stored in a separate vector, they could be
 tested using 32 rows at once and consume little extra memory.

Even for huge windows there would hardly be a problem with the memory
requirement as only one bit is needed for every row. In fact, the concept
could be extended to 'tiles' instead of rows. Dividing the save area into
fixed size tiles which can actually be quite large, can already
substantially speed up things at next to no cost in additional data
structures that keep track of what has changed. However, one has to keep in
mind that the maximum number of pixels that ever needs to be updated by the
'sweeper' is the number of the pixels on the screen.
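The per-row 'changed' bits suggested above fit naturally into a vector of
32-bit words, so a whole group of 32 rows can be tested with one compare
(a sketch; the names and the 1024-row limit are illustrative):

```c
#include <stdint.h>

/* One dirty bit per save-area row, packed 32 rows to a word. */
#define MAX_ROWS 1024
static uint32_t dirty[MAX_ROWS / 32];   /* 128 bytes for 1024 rows */

/* Drawing routines call this for every row they touch. */
static void mark_row(int row)  { dirty[row >> 5] |= 1u << (row & 31); }

/* The sweeper tests 32 rows at once with a single non-zero check... */
static int any_dirty(int word) { return dirty[word] != 0; }

/* ...and only drills down to individual rows inside a dirty word. */
static int row_dirty(int row)  { return (dirty[row >> 5] >> (row & 31)) & 1; }
```

At one bit per row the bookkeeping cost is negligible, exactly as the post
argues, and the same scheme extends directly to tiles instead of rows.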

 At a later stage, this approach could be extended to allow save areas to
 have a shallower bit depth than the screen, thus saving even more memory.

I think this is quite important even at an earlier stage, simply because
variable depth save areas mean a variable save area 'mode', which in turn
provides rather desirable 'device independence'. To make programs run on
other platforms, of course the required mode drivers have to be present -
but this is easy if they already exist on other platforms! - and the real
difference is only really in the 'sweeper' task, which has to convert the
mode of the save area into the actual screen mode.

I should point out that these changes will eventually have to be reflected
in the UI (PE). For instance, there is no reason why a program should not
use a save area larger than the screen, or one that is partially
'off-screen'. Managing windows with these and many other properties
(palette, color conversion...) is not present in the UI yet - in fact, it
could be left entirely to the UI and not be something for the application
to manage.

Nasta




Re: [ql-users] The future of SMSQ/E

2002-03-13 Thread ZN

On 3/13/02 at 10:06 PM Marcel Kilgus wrote:

 Now that you mention stippling, it is my opinion that when having
 16 bit colours there is no need for stipples anymore (I mention this
 because I've heard that Tony invested quite a lot of thinking about
 how to get stipples in). What is the general opinion about that?

My opinion would be that it would be nice to have a thing that could
calculate a 16BPP (or 15BPP) stipple of sorts out of a 24BPP definition so
applications can use this but I don't think it really needs to be
implemented as part of the PE (not to mention that it seems to be very
difficult). I can hardly see a need for a user interface to use more than
32/64k colors.
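The kind of helper discussed - deriving a 15BPP value from a 24BPP
definition - is, at its simplest, just a truncation of each component to
its top 5 bits. A sketch (the %0rrrrrgggggbbbbb packing below is an
assumption for illustration, not the documented SMSQ/E layout):

```c
#include <stdint.h>

/* Reduce an 8-bit-per-component colour to a packed 15-bit RGB value by
 * keeping the 5 most significant bits of each component. */
static uint16_t rgb24_to_15(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
}
```

A real stipple generator would go further and dither between the two
nearest 15-bit values, but plain truncation already shows why the
conversion itself is cheap.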

 Hmm, on the one hand there's already the normal palette mode which is
 well defined and I think it's unlikely that the user changes it.

 Which one would that be? I hope we are not talking about the 256 color
 system palette imposed by the PC? Or am I missing something?

No, I mean the palette SMSQ uses on palette calls. It's predefined
with certain colours (list is at the end of GD2 docs) and I think that
most people will leave it this way.

Ah... I see.

 IMHO this might prove to be a serious problem in the future given
 the philosophy of WMAN - simply because the save areas increase
 8-fold, the consequence of which is the speed of their manipulation
 decreases by the same factor.

 I know...
 Most manipulations aren't really much slower than their 2bit
 equivalent, the calculations are a lot simpler. The problem comes when
 big chunks of memory need to be altered

That's precisely what I meant. Scrolling must be the biggest issue, but in
an environment with background update, this is an issue one cannot really
solve (just thinking in advance here).

 Finally in the end there should be a standard graphics library thing
 which can be consulted for all things involving graphics manipulation,
 like colour conversions so that not every single application has to
 implement that stuff etc. And those routines could be accelerated,
 too.

All I can say is: AMEN!

 It really undermines one of its best qualities: speed/simplicity.
 Just don't get me wrong, I don't think this is something that needs
 immediate attention...

 Well, I really hope that people make their applications more
 colourful. I do encourage to brush up older applications

Agreed - the question being, HOW colorful do they need to be? Certainly not
4 colors. 16 would already be more like it, 256 is already pushing it a
bit. Mind you, the idea is to have whatever number of colors chosen from a
15BPP palette, so essentially, they get more colorful even if they only
use 4 colors.

 e.g. Jochen is currently trying to get the QPAC2 source code so that
 we/I can extend it. This is very important in my eyes).

Agreed.

 But I need to know how much save area to allocate beforehand. What
 happens if an application is considered 2bit only and all of a sudden
 it issues an iow.papt (true colour paper trap) call with some exotic
 colour?

The result would be the use of the aforementioned thing that does color
conversion, and the result of that would be a 2-bit stipple definition. See
below.

 What I meant is that hardware capable of doing 4 colors can still show
 the 4 colors and given proper screen drivers, 'lesser mode' applications
 will still work without modification, just like they do now with the new
 drivers.

 I'm not completely sure whether I get this right, you're proposing
 some sort of the old MODE4-MODE8 switch? Well, for one I don't like
 that and hardware like the Qx0 do make a problem there because the 4
 colour mode is restricted to the poor 512x256 resolution.

No, the 'mode switch' would really only establish what kind of save area
the job needs, and change the way a color definition as given is
interpreted by the driver - it would be a 'property' linked with the main
application window. There would also be a 'system variable' where the
default for this would be set if the job does not provide a value.
The idea is precisely not to have the jobs know they are running in a
particular mode unless they ask for it specifically. This is a big plus
because even with your extensions to the PE, you will hardly tend to use a
large number of colors. If you define the system default to be 16 colors,
and use a real screen at 16BPP, programs that don't specify a mode will
use 16 colors - if so desired, out of a palette of whatever number of
colors the actual hardware provides. The color conversion routines take
care that when a job specifies a color, you get something close to it on
the screen. If your actual screen is not set for many colors or it simply
can't do it, you get a stipple. The idea is to remove the need for
programs to cater for multiple screen modes, which certainly does not help
things right now. The system palette is a huge help with this.
Now, don't get me wrong, I fully understand that this is a VERY tall order.
And it's not something to do _right_now_ but 

Re: [ql-users] The future of SMSQ/E

2002-03-13 Thread ZN

On 3/14/02 at 3:29 AM Marcel Kilgus wrote:

ZN wrote: 

 manipulations aren't really much slower than their 2bit equivalent
 ... The problem comes when big chunks of memory need to be altered

 That's precisely what I meant. Scrolling must be the biggest issue,

 On QPC this is actually faster than (or at least as fast as) the 2bit
 routines due to the acceleration.

Certainly, alas with other hardware there is no such luxury, but this is
not a huge problem.

 but in an environment with background update, this is an issue one
 cannot really solve (just thinking in advance here).

 I don't completely get that comment.

If programs always write into their save areas and onto the screen only
when they are on top of the job stack, the amount of manipulation
increases dramatically. An approach where save areas are kept at optimum
BPP depth would help this - but the next step would be to provide an
update of the partially buried windows, and this is where we run into
problems again.

A little explanation (I'm sure nothing new for Marcel!) here:

In systems where the job itself is responsible for updating a window, it
knows when something has really been changed in it (and if it's clever
enough, where) and updates the visible part. Alas, WMAN is not such a
system.
In a system like WMAN where a bitmap (essentially = save area) is used per
job, there has to be a job or a task (I would favour the latter and I'll
explain why below) that periodically 'combs' through all the save areas of
all partially visible windows (the top job window refresh is, or rather,
may be, treated differently) and copies them onto the actual screen. If
save areas are allowed to have a format different from the actual screen
layout on given hardware, this refresh job also has to do color conversion
from save area format to actual screen format 'on the fly', and it gets to
know
how by reading the relevant structures in window/job definition blocks
(this here is a prime candidate for acceleration of some sort). It would be
possible (but not strictly necessary, it could mean a LOT of added work) to
include a mechanism where this 'refresh' task can read a 'changed' status
of some sort for a save area or window within it, so it skips parts that
have not changed. This is where we get into quite a lot of manipulations
again, but fortunately, the total maximum number of pixels manipulated is
equal to the screen resolution, and all of this does not have to occur
'real time' so the actual load to the system is limited.
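The 'comb through the pile and copy' task described above can be sketched
in C (every structure and field name here is a hypothetical stand-in, not
an actual WMAN definition; the per-window convert hook models the
per-format colour conversion the post mentions):

```c
#include <stddef.h>

/* One entry per window in the pile, linked in stacking order. */
typedef struct window {
    struct window *next;
    int   visible;       /* any part of it on the real screen?        */
    int   format;        /* pixel format of this window's save area   */
    void (*convert)(const struct window *, void *screen); /* per-format */
    void *save_area;
} window_t;

/* The periodic sweeper: walk the pile, and for every (partially)
 * visible window let its format-specific routine copy/convert the save
 * area onto the real screen.  Returns how many windows were refreshed. */
static int sweep(window_t *pile, void *screen)
{
    int refreshed = 0;
    for (window_t *w = pile; w; w = w->next)
        if (w->visible) {
            if (w->convert)
                w->convert(w, screen);
            refreshed++;
        }
    return refreshed;
}
```

A 'changed' flag per window (or the row/tile bits discussed earlier) would
slot in as an extra test before calling convert, which is exactly the
skip-unchanged-parts optimisation the post weighs up.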

Now, although my point is made, I'd like to explain why I keep bringing up
variable format save areas, and more specifically drivers that operate on
save areas rather than / in addition to the actual screen.
Well, WMAN with its save area concept is one step away from a full
'virtual screen' setup. When drivers write into save areas, the actual
screen resolution, organization and size become less relevant:
1) There is no reason for a save area to have a size limited to the screen
size - manipulating it would require changes to the GUI, not to the
application - but of course, the application can be made aware of this
possibility and allowed to handle it.
2) There is no reason for a save area to have the same pixel organization
as the screen - this immensely helps with compatibility across platforms,
and has another interesting consequence: a save area can have an
organization that no available screen hardware implements (yet)
All of this is subject to:
a) available memory (here is where optimized save areas can help a lot)
b) availability of a driver for a given save area format (we already have
4, 8, 32, 33) - applications can be allowed to handle some or all
operations where necessary (for instance, where an application wants to
internally run in 24BPP)
c) ability of the system to translate from save area format to actual
screen format with usable ('real time') responsiveness - again, save area
formats can help here, and again, there may be a mechanism for
applications to provide their own 'translators'.

In theory, NO program should need to write to the actual screen RAM at all
- as long as there is a documented data structure that keeps track of
where the 'virtual screen' (read fancy save area) is and what format it
has (presumably one that the screen refresh task understands). It would be
the
screen refresh task's... well, task, to bring the contents of the 'virtual
screens' to the real one so they can be viewed, with whatever rules one
cares to impose based on the visibility of a job's window (for instance,
the top job gets most attention, partially visible jobs are refreshed less
often). This also has some interesting repercussions on a GUI overlay, if
one is used.

The validity of this concept has already been demonstrated, on the QXL
(sadly it's slow due to hardware limitations) and on QPC - albeit on one
'virtual screen' - namely the emulation of the real one. From the
standpoint of the host machine, the QL screen RAM

Re: [ql-users] The future of SMSQ/E

2002-03-12 Thread ZN

On 3/12/02 at 3:44 AM Marcel Kilgus wrote:

 After the Hove show some of us went to a pub and discussed a bit about
 the future of the QL. On the drive home I talked some more with Jochen
 and in the end we decided to take the development of SMSQ/E into our
 (well, probably mainly my) hands...

[Ideas]

 WMAN still can't use the extended colours. Fortunately all WMAN routines
 and data blocks related to colour use 16 bit wide values, the upper 8
 bits are just never used. Therefore I defined a new colour format:

%00000000 cccccccc  handle colour exactly like before
%00000001 iiiiiiii  use lower byte as palette index
%00000010 iiiiiiii  use lower byte as system palette index
%00000011 gggggggg  use lower byte as gray scale value
%1rrrrrgggggbbbbb   15 bit RGB value
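Decoding such a 16-bit colour word is a one-liner per case. The sketch
below assumes the upper byte selects the interpretation and the top bit
flags a 15-bit RGB value, per the table above; enum and function names are
mine:

```c
#include <stdint.h>

enum colour_kind { CK_OLD, CK_PALETTE, CK_SYSPAL, CK_GREY, CK_RGB15 };

/* Classify a 16-bit WMAN-style colour word. */
static enum colour_kind colour_kind(uint16_t c)
{
    if (c & 0x8000)          /* %1rrrrrgggggbbbbb: direct 15-bit RGB */
        return CK_RGB15;
    switch (c >> 8) {        /* otherwise the upper byte is a selector */
    case 1:  return CK_PALETTE;  /* lower byte = palette index         */
    case 2:  return CK_SYSPAL;   /* lower byte = system palette index  */
    case 3:  return CK_GREY;     /* lower byte = gray scale value      */
    default: return CK_OLD;      /* handle colour exactly like before  */
    }
}
```

Because the old colour values all had a zero upper byte, existing 8-bit
colour words fall through to the "like before" case unchanged, which is
what makes the extension backward compatible.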

 The system palette is an idea found on other operating systems to give
 applications a common look.

The system palette is an excellent idea, the minor question is how to
define it initially (default values). I have my doubts about the gray
scale value palette. Perhaps a fixed color definition could be a good idea
instead, and if so, use something like the Aurora color set. It's simple
and all systems capable of 256 colors can reproduce it. That way programs
can have a unified 'shorthand' color table when needed.

Further ideas:

I have no problem with the GUI because it does not necessarily have
anything to do with WMAN. I do agree that the look is way behind the
times, and the biggest problem here is again documentation (as in lack of)
that would point to a way to change it. OTOH, a relatively limited number
of concurrent colors used in the GUI has its advantages. I can think of
MANY
things that would be on my 'most wanted list' way before a nice multicolour
3D GUI. 'Under' the GUI there has to be solid substance.

The matter of window saves using the deepest color definition regardless
of how many colors are actually used is one of the more serious, and
probably very complex, issues. IMHO the MODE command still needs to have
an effect in high color screen mode. Actually, ideally, the 'screen' mode
and the 'application' mode should be separate, especially now that there
is hardware with screen modes capable of concurrently showing all lesser
modes. As discussed in a previous mail, a program using 4 colors (even if
they are from a palette of 65536) using 16BPP save areas is quite absurd,
especially as the VAST majority of programs really don't need more than a
couple of simultaneous colors. I believe that the solution to this problem
also includes the solution to programs continuing to produce screen output
even when buried.

Slave blocks are a big problem. As far as I can understand, they really
need to stay in some form - with a limited number, as the linear table
search is what really slows things down so immensely. Also, the real
problem in limiting the slave blocks is that the table start and length
are effectively 'hardcoded', because they are derived from pointers to
other structures. I wonder if an alternative way of limiting the number of
slave blocks would be to attack the basic area movement rules, so that
BASIC could be moved along with the top of the common heap and not
necessarily only when it changes size. Although, I have a feeling that
this would expose more 'hardwired' pointers :-(

Nasta




Re: [ql-users] The future of SMSQ/E

2002-03-12 Thread ZN

 I have my doubts about the gray scale value palette.

 Yes, it's a bit superfluous but as it's next to no work for me to
 implement I just thought go for it

Greyscale is actually useful. There are many cases where someone may be 
using a mono LCD panel that supports 256 grey levels.

Not really a good argument, I'm afraid. You find me a monochrome LCD
either:
a) supporting gray scale natively (not just where it says so in the
datasheet abstract!) or
b) being able to accurately show more than 16 levels of gray
and I'll concede otherwise. Just remember that once upon a time I made an
LCD controller for the QL.
Monochrome display panels are natively just that - monochrome. Gray scale
has to be emulated using FRM (Frame Rate Modulation) and/or dither. The
first works OK, but only for a very low number of grays; the controller
design was mode 4 only and that was still OK. 8 would already be pushing
it.

Also, greyscale can't be beaten if you're doing mono document editing.

True. OTOH we don't have hardware capable of producing 256 levels of gray
currently (unless someone adds a new mode that would work only for QXL and
QPC) since it really implies 24 bit graphics. Now if we take into account
Marcel's good point that the ability to specify 15 bit color already
eliminates another 8-bit fixed palette, more or less the same is valid with
monochrome, as it is also a kind of fixed palette.
Unlike the system palette which is an index into a system table or the
'standard' color which also specifies stippling, more or less everything
else is covered by the 15 bit specification. Granted, the Q40/60 would
gain another bit of resolution with a monochrome palette (which would
unfortunately still leave two bits unimplemented), the question being how
often monochrome is actually being used, or I should say, will be used.
However, since Marcel says it's easy to implement... I suppose one never
knows.

 The system palette is an excellent idea, the minor question is how to
 define it initially (default values).

 I'm planning to add them as a standard configuration block.

Well, you certainly have my blessings :-)
Hopefully there will be a nice utility some day to set them up while the
system is running...

 Perhaps a fixed color definition could be a good idea instead, and
 if so, use something like the Aurora color set. It's simple and all
 systems capable of 256 colors can reproduce it. That way programs
 can have a unified 'shorthand' color table when needed.

 Hmm, on the one hand there's already the normal palette mode which is
 well defined and I think it's unlikely that the user changes it.

Which one would that be? I hope we are not talking about the 256 color
system palette imposed by the PC? Or am I missing something?

 You think another 8bit fixed format would be of any good?

Actually, you are right, but I was thinking on my feet really. The proper
way to address this would be to encourage certain combinations of 15BPP
values and discourage others, as defaults. Obviously colors commonly found
in modes 4 and 8 are the ones to use, plus (although not supported
currently) the 16 and 256 color definitions from Aurora - and finally
15BPP.

 The matter of window saves using the deepest color definition regardless
 of how many colors are actually used is one of the more serious, and
 probably very complex issues.

 Yes, I thought a lot about it and it is very complex. My conclusion is
 that the goal is not to have applications that only use 4 colours so
 that this isn't a disadvantage anymore. This is neither nice nor elegant
 but pragmatical.

If I understand it correctly, that is the same pragmatic decision used for
the high color drivers as they are. IMHO this might prove to be a serious
problem in the future given the philosophy of WMAN - simply because the
save areas increase 8-fold, the consequence of which is that the speed of
their manipulation decreases by the same factor. It really undermines one
of its best qualities: speed/simplicity. Just don't get me wrong, I don't
think this is something that needs immediate attention - I'd surely put
slave blocks at the top of that list - but let us not create another
version of the 36 character name+path limit.

 IMHO the MODE command still needs to have effect in high color screen
 mode. Actually, ideally, the 'screen' mode and the 'application' mode
 should be separate, especially now that there is hardware with screen
 modes capable of concurrently showing all lesser modes.

 I don't quite understand, which hardware can do this?

What I meant is that hardware capable of doing 4 colors can still show the
4 colors and given proper screen drivers, 'lesser mode' applications will
still work without modification, just like they do now with the new
drivers. Apps written for mode 4 work in 32/33 simply because the drivers
translate for them. The only thing that does not work is where there is
direct screen access, for obvious reasons, but this is really the
exception
that cannot completely be catered for 

Re: [ql-users] Hove Workshop

2002-03-05 Thread ZN

 OK, guess you can take the love affair with black
 QL things a bit far perhaps ;-))

LG do a black CD ROM Drive you know.

And Phoebus can get you one for about $50 plus shipping, I'm sure he knows
that I know where he would be getting them too :-) I actually have one in
my PC. 8/4/25 rewriter - lots of bang for the $

Nasta




Re: [ql-users] Sir Clive was really right :-)

2002-03-01 Thread ZN

On 3/1/02 at 2:52 PM Dave wrote:

[QL membranes]

No, this has to be a broken one, because I need to destroy it to measure
it. The internal measurements are vital.

Why? A contact resistance of anything up to 200-300 Ohm will still work OK
(but if the keyboard is on long wires it will be unreliable due to
transmission line effects). The membrane is transparent and the connections
are very easy to trace. In fact, I think you can deduce them from the
'KEYROW' function description in the original QL manual; it's basically an
8x8 matrix. The extra lines are for the shift, control and alt keys, which
only have an extra diode to connect to the IPC lines.
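To illustrate the matrix idea, here is a minimal sketch in Python (not actual QL code - the row/column assignments are placeholders, not the real QL layout): KEYROW returns one byte per row, and each set bit marks a pressed key in that row.

```python
# Hypothetical model of reading an 8x8 key matrix like the QL's.
# KEYROW(r) in SuperBASIC returns one byte for row r; each of its 8 bits
# corresponds to one key in that row (bit set = key pressed).

def key_pressed(keyrow_value, column):
    """True if the key in bit position `column` (0-7) is down."""
    return bool(keyrow_value & (1 << column))

# Example: a row byte with only bit 2 set means exactly one key is down.
row_byte = 0b00000100
print([c for c in range(8) if key_pressed(row_byte, c)])  # [2]
```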
If you are looking for the spec on spacing and membrane bubble size, it's
not too relevant. Normally, for membranes it's an interplay between spacing
and the rubber 'springs' under the keys. In the case of the QL even huge
spacings will work because the rubber is so hard that you have to really
push on the key, so once it 'gives' and gets in contact with the membrane,
the force is sufficient to make contact on any membrane I have ever seen
(and even on some TACT switches!).
The bubbles are about 13mm in diameter and the spacing is, IIRC, 0.27mm -
those neurons are about to be reused so don't hold me accountable if I'm
wrong. Several of the bubbles in a single row are connected together by
virtue of narrow cutouts on the spacer - 5 or 6, usually. This reduces
pressure buildup in the bubble when it's pressed in.
Samsung membranes were separable, had no connections between the 'bubbles',
were made of clear material, and had thick silver lines you could even
solder to with some attention, and the contact points had small graphite
circles printed on them.
Very early Sinclair membranes were very similar to the Samsung ones, just
not separable and without the graphite pads.
The large majority of Sinclair membranes were glued together (with a glue
that chemically interacted with the signal lines), were made of milky
white material that resembles drafting transparency, had very thin
silver-on-thallium plating that would easily corrode (soldering is
completely out of the question here), and would eventually go brittle with
heat and crack. Essentially the same as with the Spectrum.
From a technical standpoint, Spectrum+ membranes are the most interesting -
these are two sandwiched membranes, the top one would simulate the required
'shift' key being pressed for the extra keys on the Spectrum+ keyboard :-)

Nasta




[ql-users] Re: OT Confirm files received

2002-03-01 Thread ZN

On 3/1/02 at 4:36 PM Alexander Keith Mitchell wrote:

 Just a note to Nasta to confirm that I have sent the files he wanted

Confirmed, thanks!

Sorry to send it this way, but there have been some problems with email
between Keith and me, so this is the only method that we know works for
sure.

Nasta




Re: [ql-users] SuperQ Board

2002-02-27 Thread ZN

On 2/27/02 at 7:50 PM Dave wrote:

There were two types of mouse interface...

One was a 9 pin serial mouse, and the other was a DIN socket that also
used a serial mouse, but I don't have the pinout. I recall most were
the 9 pin connectors.

Are you sure? What was used to receive the serial protocol?
I'm asking because I have seen the SQB schematic (long time ago) but don't
remember anything like this. Are you sure they were not of the 'bus mouse'
type, like used on QIMI?

Nasta




Re: [ql-users] OT: Re: PIC/SCR Compression

2002-02-21 Thread ZN

On 2/21/02 at 10:38 PM P Witte wrote:

 The PE used to handle programs in mode 8 and 4 so that all windows
 of the same mode would be displayed together.
 I wonder if this concept is still used, or indeed, if it could be
expanded.
 In particular, give programs the ability to declare a 'user' mode.

 Why not just use iow.xtop? It simply works as a referee. You can still
 write to the screen any way you please.

AFAIK you cannot poke control registers of the hardware to change into
modes that the OS does not support yet.

Nasta




Re: PIC/SCR Compression (Was:Re: [ql-users] DISP_COLOUR)

2002-02-20 Thread ZN

On 2/19/02 at 10:29 PM Phoebus R. Dokos wrote:

 Two screen areas to do double buffering (SCR0 and SCR1) and Auroras
 have 3

Actually, they don't. They only have the same two...

My bad... however that's what happens when you DON'T HAVE THE
MANUAL (hehehe, get the hint ;-)))

Well, you have the manual where it says it does not have 3 :-)))

 (You hear that Nasta? Put it in the pipeline for Aurora II).

 It's already there. Along with on-fly 24 to 16 bit color conversion, and
 maybe some other goodies.

With masking out the excessive bits I presume???

Yes. Maybe even more. It's all just a collection of notes right now, but
I'm thinking about a couple of things. Hardware conversion from 1, 2, 3, 4,
8, 16 and 24 bit formats into the actual 16-bit format would be provided,
and perhaps some masking. It wouldn't be a true blitter, as a full
complement of logic operations, and especially a barrel shifter, would just
be too much to fit into a cheap enough programmable logic chip. I'm
thinking about some tricks using tables in extra RAM, though.

BTW: Which bit organization will you maintain? QPC or Qx0 style (or PC 
style?) (Or something else?)

OK, the idea is to have the frame buffer internally always in 16-bit,
probably Q40/60 format. The frame buffer is 4M. However, it is accessed
through several memory addresses and, depending on which is used, looks
different, but also provides automatic conversion. One address area
converts 2-bit QL mode 4 into native 16-bit, the next Aurora 4-bit, then
Aurora 8-bit, then there is one where the actual native 16-bit per pixel
memory is accessed directly, and finally, there is an area for 24-bit.
Using the GF memory shadow option (and sacrificing 16M of RAM out of 128M)
it would even be possible to have ALL modes supported at once, though it
would require some added tricks (refresh of the screen) when the mode is
changed. As for which kind of native 16-bit is used - there will be a bit
in a control register to select whether it's Q40/60 or QPC/QXL; this is
quite easy to do.
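As a software analogue of what such a conversion window would do on the fly, here is a minimal sketch. The MSB-first pixel packing and the palette values are assumptions for illustration, not the actual QL mode 4 byte layout or the GF's hardware behaviour.

```python
# Hypothetical sketch: expanding packed 2-bits-per-pixel data into 16-bit
# pixels through a 4-entry palette, analogous to the proposed
# address-window conversion from QL mode 4 into the native 16-bit format.

def expand_2bpp_to_16bpp(packed, palette):
    """Return a list of 16-bit pixel values, 4 pixels per input byte."""
    out = []
    for byte in packed:
        for shift in (6, 4, 2, 0):          # MSB-first pixel order (assumed)
            out.append(palette[(byte >> shift) & 0b11])
    return out

palette = [0x0000, 0x07E0, 0xF800, 0xFFFF]  # e.g. black, green, red, white
pixels = expand_2bpp_to_16bpp(bytes([0b00011011]), palette)
# pixels == [0x0000, 0x07E0, 0xF800, 0xFFFF]
```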

Well how about some extra logic (kind of a graphics co-pro) to allow for 
3D and other goodies (Hey, it's not in production yet and I am allowed to 
dream, am I not? :-)))

That would truly be a dream, as it would require me to grow two more heads,
4 more arms and come up with several more hours each day, not to mention a
trust fund to live off while I do all the work with the extra heads and
arms during the extra hours :-))
However... there will be a place for an Analog Devices DSP on board. It
will connect to the digital audio interface header on the GF so its serial
ports can use the audio chip as an AD and DA converter (for MP3 and such
:-) ), but since its host port will also be available as a peripheral,
given some clever programming, it could certainly do a lot of graphics
related stuff (DCT/IDCT for JPEG, vector and matrix operations for 3D...).
Again, putting the chip on there is easy; the programming is the real
stumbling block.

Nasta




Re: [ql-users] OT: Re: PIC/SCR Compression

2002-02-20 Thread ZN

On 2/20/02 at 4:27 AM Marcel Kilgus wrote:

 I can't really answer this without revealing what I'm working on ;-),
 but anyway there is a general problem: I haven't heard anything from
 Tony for a long time now, it almost seems that SMSQ/E for QPC is
 currently the only platform that's still supported. E.g. although my
 new PE 16bit window shadow code is quite easy to adapt to the Q40,
 without Tony it will never get integrated there.

Which again makes me wonder, what kind of an offer does one have to come up
with for Tony to make SMSQ/E open source? Or at least maintainable by
someone else...

Anyway, having digested what you said about GD2, the conversion of
everything to 16 bit is a serious problem (I could use a stronger term).
Even applications that use 16-bit color may have large parts of the user
interface (menus, borders...) that truly do not need 16 bits per pixel, and
many applications do not need it at all. Handling color in a 'largest
common denominator' way certainly results in memory waste. IMHO the proper
way to do this would be for programs in lower modes to still have save
areas in that mode.
Let me give an example: a program essentially uses mode 4, but now under
GD2 the colors can be chosen from a palette of 65536. All the program's
graphic structures are really still in mode 4.
When the program's window is on top of the job stack, the driver can write
directly to the screen, in which case it converts 'on the fly' from 2 BPP
graphical objects, using the defined palette, to 16 BPP.
At this point two approaches are possible:

a) When the window gets buried, a 'reverse palette' conversion into a mode
4 2 BPP save area is performed (this can actually be quite slow). When it
gets picked, a 'palette' conversion from the mode 4 save area is performed
into the actual screen mode.

b) When the driver writes to the screen in the mode the screen uses (16
BPP) using on-the-fly conversion, it also writes to the save area in the
mode the program uses (in this case mode 4). AFAIK the routines to handle
all hardware supported modes are already there, so finding the code that
does this should not be a problem; the problem is making it execute both
versions for one call. The difference is that a save area is automatically
required when the program is running, regardless of whether its window is
buried or not. The side effect is that no save area update is necessary
once the program window is buried (in fact, the drivers could happily keep
writing into the save area using just mode 4). When the program gets picked
again, the save area is palette converted to 16 BPP.

Approach b) requires one more save area at all times, but guarantees that
programs internally using lesser modes only use as much memory for the save
areas as they really need. Considering that's an 8-fold saving in most
cases, it must be worth thinking about?
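A rough sketch of the 'reverse palette' step from approach a) might look like the following. It assumes every screen pixel matches a palette entry exactly; a real driver might need nearest-colour matching, which is part of why this step can be slow.

```python
# Illustrative-only sketch of the 'reverse palette' conversion from
# approach a): mapping 16-bit screen pixels back into packed 2-bit
# mode 4 indices for the save area. MSB-first packing is assumed.

def reverse_palette(pixels16, palette):
    index_of = {colour: i for i, colour in enumerate(palette)}
    packed = bytearray()
    for i in range(0, len(pixels16), 4):     # 4 pixels per byte
        byte = 0
        for p in pixels16[i:i + 4]:
            byte = (byte << 2) | index_of[p]
        packed.append(byte)
    return bytes(packed)

pal = [0x0000, 0x07E0, 0xF800, 0xFFFF]
saved = reverse_palette([0x0000, 0x07E0, 0xF800, 0xFFFF], pal)
# saved == bytes([0b00011011]) - four pixels packed into one byte
```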
 
Another thing occurs to me, given the recent discussions on game toolkits.
The PE used to handle programs in mode 8 and 4 so that all windows of the
same mode would be displayed together. When the cursor got over an area
used by a buried window in a mode different from the current mode of the
screen, the cursor would show 4 or 8. If that window was picked, the mode
was changed and all the windows using a different mode were removed.
I wonder if this concept is still used, or indeed, if it could be expanded.
In particular, give programs the ability to declare a 'user' mode. The
point would be to use this for programs that need control of the screen
hardware but do not necessarily use the drivers. The standard mode change,
screen clear and restore behaviour would be used to prevent such a
'hardware direct' program from corrupting the display by writing out of
turn, and other programs from getting to run with the wrong mode selected.
A program that declared user mode would in fact also declare which hardware
mode it is using, but the 'user' flag would ensure that it gets exclusive
use of the display when picked (as if it implied a guardian window the size
of the whole screen), no matter whether other programs may be using a
compatible hardware mode and could in theory be displayed as buried
windows. Along with handling games, this would enable programs to support
hardware modes that are not yet supported in the screen driver.

Nasta




Re: [ql-users] Re: sprites

2002-02-19 Thread ZN

On 2/19/02 at 8:12 PM Marcel Kilgus wrote:

[no way to prevent flicker]

Joachim Van der Auwera wrote: 
 Oh yes there is... You have to be more intelligent and shift data
 in your buffer, restoring what is uncovered and saving what is newly
 covered. More difficult to write, but always gives good results,
 independent of execution speed.

Helps, but doesn't solve the problem (I'm talking theoretically here,
possibly with enough effort the effects can be neglected but they are
there!).

Precisely, Marcel is absolutely right.
One should also make a difference between flicker and what is called 'frame
shear'.
Here is an example:
a) 16 sprites to draw, by restoring the background and then redrawing all
16 sprites in a new place. Effectively, for a short interval of time, there
are no sprites on the screen.
Suppose that this is what happens:
As the screen is traced from top to bottom, the background is first
redrawn. If this is fast enough, at some point in the entire screen scan,
say right in the middle, the entire background will be restored, and the
top half of the screen will show exactly that - no sprites. Next, the
sprites are redrawn; however, the scanning of the screen continues
asynchronously, and the new sprites will not be seen in the top half until
that part is rescanned. You get sprites shown only in the lower half! Now,
depending on how fast the drawing process is in comparison with the
scanning, you can get all sorts of effects, from flicker, 'banding'
(horizontal or even slanted bands on the screen where only the sprites or
the background is visible), to areas of the screen where the sprites remain
invisible - and everything in between. It would also change dynamically
depending on what else is going on in the system.
b) 16 sprites to draw, by precalculating new positions from the background
in memory, and updating only the parts of the screen that need to be
updated. In this case, as the scan goes from top to bottom, there will be a
point up to which the screen will show the 'old' position, and from there
on, the new position. If the movement of the sprites is very fast, several
updates will occur as the scan of the screen progresses. This is called
frame shear, where the displayed frame frequency and the update frequency
are not the same. The effect is quite common and is actually preferable to
version a), although you can get 'streaking' effects if a sprite moves down
exactly as fast as the scan progresses. Given current hardware (except
perhaps the Q40/Q60) this is unlikely.

The only way to completely avoid flicker is a screen update that is
synchronous with the screen scanning rate, which is usually done using an
interrupt that occurs on every vertical retrace (when the screen scan
finishes scanning the last - bottom right - pixel). Only original QL
hardware can do something like that, because the poll interrupt is
equivalent to the vertical sync signal, which is generated to tell the
monitor to restart tracing of the screen from the top left. As far as I
know, no other QL hardware does this. It may be possible (but is probably
unduly complex) for QPC to implement triple buffering, in which case there
would be no frame shear but rather frame drop or frame jitter effects,
which are sometimes preferable, but the implementation comes at a very
high price.
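The retrace-synchronized scheme above can be sketched as follows. This is a minimal model under assumed names (`DoubleBuffer`, `on_vsync`), not any actual driver code: all drawing goes into an off-screen buffer, and the finished buffer is only flipped in when the vertical-retrace event fires, so the scan never observes a half-updated frame.

```python
import threading

# Illustrative model of vsync-synchronized double buffering: the scan
# only ever reads 'front', the program only ever draws into 'back', and
# the swap happens exclusively at the (simulated) vertical retrace.

class DoubleBuffer:
    def __init__(self, size):
        self.front = bytearray(size)   # what the screen scan displays
        self.back = bytearray(size)    # what the program draws into
        self.pending = threading.Event()

    def draw(self, offset, data):
        self.back[offset:offset + len(data)] = data

    def commit(self):
        self.pending.set()             # frame finished; flip on next vsync

    def on_vsync(self):                # called from the retrace interrupt
        if self.pending.is_set():
            self.front, self.back = self.back, self.front
            self.pending.clear()
```

Until `commit()` is called, any number of retraces can pass and the displayed frame stays intact; the flip itself is just a pointer swap, so it always completes before the next scan line.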

The SSG works as a task linked into the poll interrupt list, controlled
through structures in memory that are set up by the BASIC keywords or calls
to the relevant routines in the extension code. If the interrupt driven
code is executed fast enough, it will update the screen quickly enough that
the scan will not overtake the update. The scan will therefore always show
the current situation, with no frame shear, flicker or any other artifact.
IIRC (it was a long time ago...) the SSG also handles what needs to be
updated, transparency, and collision detection using sprite masks. These
are one-bit deep structures that define collision boundaries and which
sprite pixels should be visible. Also, IIRC, the SSG has several modes of
operation, and one uses a background reference 'snapshot' to calculate
which parts of the screen need to be modified. The modification is
performed by pre-calculation, only then writing to the actual screen RAM.
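A mask-based collision test of the kind described could be sketched like this. This is illustrative only, not the SSG's actual routine; bit 0 of each row word is assumed to be the leftmost pixel.

```python
# Hypothetical sketch of collision detection with one-bit sprite masks:
# two sprites collide when any set bits of their masks overlap after
# aligning the masks at their respective screen positions.

def masks_collide(mask_a, pos_a, mask_b, pos_b):
    """Each mask is a list of ints, one per row; bit i = pixel i from left."""
    ax, ay = pos_a
    bx, by = pos_b
    shift = bx - ax                        # horizontal offset of B from A
    for ra, row_a in enumerate(mask_a):
        rb = ay + ra - by                  # row of B overlapping this row of A
        if 0 <= rb < len(mask_b):
            row_b = mask_b[rb]
            aligned = row_b << shift if shift >= 0 else row_b >> -shift
            if row_a & aligned:
                return True
    return False

# Two 1x1 sprites at the same position collide; one pixel apart, they don't.
print(masks_collide([0b1], (0, 0), [0b1], (0, 0)))  # True
print(masks_collide([0b1], (0, 0), [0b1], (1, 0)))  # False
```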

Something like this would indeed be a nice addition to the screen driver,
but also a sizeable complication once the PE is taken into account. OTOH
the PE window save already provides the background reference.

While on the subject of the PE - I was always under the impression that:
1) the PE saves the image of the application window, not the background
(i.e. what is under the window), except where saving what is under the menu
panes;
2) there was a mechanism that allowed jobs to provide for their own window
maintenance regardless of the background save mechanism;
3) a way of signalling the job to refresh its window in RAM was absent.

With clever use of the first two, it can be assured 

Re: [ql-users] IDE switch

2002-02-19 Thread ZN

On 2/19/02 at 10:28 PM P Witte wrote:

However, all I want is an extra connector on the cable and a switch to
 toggle power to either the one device or the other, i.e. nothing worth
 $39 + pp and a big box.

You may not want it, but it's the only correct way to do it. Switching the
power supply will eventually do damage to either of the devices and/or the
Qubide (probably the latter), if it works at all. IDE devices are not
hot-swap; in other words, input signals should not be present when the
device is not powered up. Before you point it out: yes, there were a couple
of products that actually did this (usually 5.25 removable drive bays).
Ask yourself why you can't buy them any more...

It is possible to implement a simple select switch, but not just by adding
a connector to the cable - two pins need to be switched and pulled up to 5V
when out of circuit, for both devices that are switch selectable. Even so,
it would not work unless the system is powered down (merely resetting may
not be enough) and the software detected new devices.

Nasta




Re: [ql-users] OT: Re: PIC/SCR Compression

2002-02-19 Thread ZN

On 2/20/02 at 3:48 AM Marcel Kilgus wrote:

Phoebus R. Dokos wrote: 
 I see you are still UP!

Usually I don't go to bed before 3am... at least not if I can avoid
it.

Aha, a kindred spirit, although I've been limiting myself to 2AM lately :-)

 Presumably working on QPC v. 5 with Sound, Sprites, Nice CLI and a
 decent filesystem?

As a matter of fact this is not such a bad guess. :-) You'll see...

Hopefully it will filter down to other QL variants...

Nasta






Re: PIC/SCR Compression (Was:Re: [ql-users] DISP_COLOUR)

2002-02-19 Thread ZN

 Well, there is just no way to avoid flickering. On fast machines the
 possibility that flicker appears gets smaller but it's never zero.

 I slightly disagree. Not much, just a little bit.
 simply make the program aware of that interrupt...

No interrupt. Well, actually, yes - only on original QL. And two screen
areas to do double buffering.

(SCR0 and SCR1) and Auroras have 3

Actually, they don't. They only have the same two. The third area is really
a collective area which holds SCR0, SCR1 and the rest of the bitmap, up to
1024x960 in mode 4. In it, SCR0 and SCR1 are actually side by side at the
top! I originally thought of using the screen flip bit to implement
swapping of the two screen halves (top and bottom) in the Aurora high-res
mode. That way, any MODE 4 or 8 resolution below 480 vertical could become
two screens, but I disabled that option in the production version, because
it does not have much use unless it does something for 256 and 16 colors,
and it doesn't - in those two modes the screen would be limited to 240
vertically, and there is no such mode on the QL. In theory, this could
actually be extended to 256 and the flip function could be reintroduced,
but it would hardly be useful without a retrace interrupt.

(You hear that Nasta? Put it in the pipeline for Aurora II).

It's already there. Along with on-fly 24 to 16 bit color conversion, and
maybe some other goodies. 

 I don't know how easy it is to move the start of screen
 memory on the QL and derivatives (on the vanilla QL it's prolly
 impossible) but hey, no problem - we'll find a way ;)

Actually, it is ONLY possible on the original and Aurora (and possibly
Q40/60) when emulating original QL mode, but you only have a choice between
$20000 and $28000.
By the looks of it, Aurora II will get 4M of video RAM, so a movable screen
start address for multiple buffers could be a benefit. Still, do not expect
it to be moved completely freely, but we'll see.

Nasta




Re: [ql-users] Sprite movement and relative info.

2002-02-15 Thread ZN

On 2/15/02 at 8:00 PM Bill Waugh wrote:

 there is a slight possibility that it started with  SUPER
 Super Sprite Generator rings a bell ( with a picture of ET
 on bike ?)

Yes, that was it, Super Sprite Generator. Too bad it doesn't work on
anything but the original screen layout and addresses...

Nasta 





Re: [ql-users] CF Cards and Hot-swapable readers.

2002-02-10 Thread ZN

On 2/10/02 at 8:40 AM Robert Newson wrote:

 Instead, the writes are 'staggered' - a sector that is overwritten is
 actually declared empty and another is written instead; then that one
 is 'soft-remapped' to be at the same place as its previous 'version'.

 Stupid question: how is the soft-remapping stored?
 I only ask as it seems obvious that this has to be constantly re-written
 each time a sector's physical location changes - or am I thinking too
 much in terms of disk allocation maps?

Not a stupid question at all; in fact that's the whole technology of making
Flash chips into CF cards.
The chips are not random write. Bits get written from 1 to 0, but you can
only erase (return to 1) a whole 'sector', which is usually 64k. Because of
this, logical sectors (512 bytes) actually use more than that size on the
chip - just like on hard drives, there are extra bits for error detection
and correction, but in addition, there are linked list structures for
logical sector remapping. These are usually encoded so that changes are
always written only by resetting bits. This way, when the chip sector is
erased (reset to 1s), the links are also erased and reset with it, avoiding
unnecessary erasing. However, since the CF has no way of knowing what will
get erased when, especially when it becomes nearly full, there is usually
also a mechanism for relocating whole chip sectors (using the same
principle). The CF is in 'trouble' when it is almost completely full and
less than one whole chip sector needs to be erased. This is where spare
sectors come in - and yes, it does copy the sectors that need to be 'saved'
into the spare sector, erases the original sector, and then that one
becomes spare - and it also rotates spares so that one single sector never
gets used as a spare all the time.
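The staggered-write remapping described above can be modelled in a few lines. This is an illustrative toy only (real CF controllers keep the map in link structures on the flash itself, and erasure is per 64k chip sector, not per logical sector): a logical sector is never rewritten in place; a fresh physical sector is used and the logical-to-physical map is updated.

```python
# Toy model of 'staggered write' logical->physical sector remapping.

class FlashRemap:
    def __init__(self, n_physical):
        self.free = list(range(n_physical))  # erased physical sectors
        self.map = {}                        # logical -> physical
        self.store = {}                      # physical -> data

    def write(self, logical, data):
        old = self.map.get(logical)
        phys = self.free.pop(0)              # always take a fresh sector
        self.store[phys] = data
        self.map[logical] = phys
        if old is not None:                  # old copy becomes reclaimable
            del self.store[old]
            self.free.append(old)            # 'erased', reused later

    def read(self, logical):
        return self.store[self.map[logical]]
```

Note how repeatedly writing the same logical sector cycles through every physical sector before any one of them is reused - the wear-spreading effect described above.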
As others suggested, yes, the data structures most probably get copied into
local RAM for easier look-up at power-up, but now we are getting into the
area of guesswork; one would have to look up a flash controller chip data
sheet to see how it's done, and even then it would not be conclusive, as
many CF card (or other flash based media) manufacturers use proprietary
chips.
Another 'guesswork' option is so-called 'Flash RAM'. There are two chip
makers that produce a static RAM which has a flash 'shadow'. This gets
written simultaneously, all bits at the same time, into the RAM on power
up, and written back into the Flash from the RAM on power down. The only
requirement is a power supply capable of persisting a few ms after power
down, very easily done. The downside is that these have a small (but
sufficient for this application) capacity, and are not cheap. In today's
ever diminishing price market, I doubt these would be very popular; it's
much more likely that a clever managing algorithm would be.

Nasta




Re: [ql-users] ms-users-and-smalltalk-list

2002-02-08 Thread ZN

May I point out that the number of 'is/isn't OT' and 'who reacted in what
way' posts is now approaching a good 30% share in the 80% of OT messages
and is just making things worse. In the past when the list would wander,
going back to QL topics without further bickering has been proven a great
way to put things back on track. I'm sure it will work this time too
(HINT!).

Nasta




RE: [ql-users] CF Cards and Hot-swapable readers.

2002-02-08 Thread ZN

On 2/8/02 at 2:45 PM Claude Mourier 00 wrote:

So a small (mobile ?) QL-compatible system with only solid state devices
would be feasible?

Yes, that's what I'm using now, although since it's a regular Aurora,
Qubide and SGC it's not exactly small... but one out of two :-)

Nasta




Re: [ql-users] CF Cards and Hot-swapable readers.

2002-02-08 Thread ZN

On 2/8/02 at 9:19 PM P Witte wrote:

Nasta writes:

 So a small (mobile ?) QL-compatible system with only solid state
devices
 would be feasible?

 Yes, that's what I'm using now, although since it's a regular Aurora,
 Qubide and SGC it's not exactly small... but one out of two :-)

I know you've sort of answered this before, but the one thing that gives me
cause for concern is the long-term reliability of CF media. And how are we
going to notice - apart from in the most unpleasant way - that its natural
life is reaching its close?

CF cards use flash memory chips which can take at least 1 million writes to
the same location - and the internal controller does not let the bytes be
written to the same place if it can avoid it. Instead, the writes are
'staggered' - a sector that is overwritten is actually declared empty and
another is written instead; then that one is 'soft-remapped' to be at the
same place as its previous 'version'. Previously written sectors are only
rewritten if it cannot be avoided due to running out of space. All of this
is actually quite easy to do on a block oriented device, unlike one with
purely random writes.
This in effect means that the CF will always last at least 1 million
writes, and probably far more. For instance, if you only kept rewriting one
single 512 byte sector on a 32M card, the staggered write principle would
extend the life of the card to 1M * size_of_card / size_of_one_sector, in
this example to about 64 billion writes!!! A hard drive would be HARD
pressed to get even close to that figure. Of course, reading a CF is
unlimited. This is quite different from hard disks, where both reading and
writing slightly reduce the remaining lifetime, primarily of the mechanical
parts.
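The endurance arithmetic above checks out; a quick calculation with the stated figures (1 million erase/write cycles per location, a 32M card, 512-byte sectors - all assumptions taken from the text, not from any particular card's data sheet):

```python
# Sanity check of the staggered-write endurance figure quoted above.
cycles_per_location = 1_000_000
card_size = 32 * 1024 * 1024           # 32M card
sector_size = 512
total_writes = cycles_per_location * card_size // sector_size
print(total_writes)                    # 65536000000 - roughly the
                                       # '64 billion' quoted above
```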
Finally, writing flash memory AUTOMATICALLY implies verification, which
does not happen on hard disks. On hard disks, corrupt data recovery is
attempted (and not always successful!) on read. On a CF card, write errors
are immediately diagnosed and the sector is remapped. There are a few spare
sectors on the card. Once these run out, the card will return errors on a
write attempt and the user will immediately know that something is wrong.
Assuming there is an alternative place to store data, it will not be lost.
I suppose it's obvious that a CF is much more sturdy than a hard drive, by
virtue of not having any moving parts, being very lightweight and
generating next to no heat :-)

Nasta




Re: [ql-users] Surplus QL ED drives

2002-02-04 Thread ZN

On 2/4/02 at 12:26 PM Phoebus R. Dokos wrote:

Hi all,
PC Surplus online has a HUGE stock of Mitsubishi ED drives for $4.00 (Yes 
4!) a piece Check it out here

http://www.pcsurplusonline.com/cat.cfm?catid=3

Well, yes, there's just one problem: they are from IBM and have NO jumpers.
It still may be possible to get them to work on a GC/SGC; I'm looking into
it.
My efforts have been thwarted by an unlikely problem: now that I am using a
CF card instead of a 'real' hard drive, the QL's power consumption is too
small - in particular, it uses next to no power from the 12V line, which is
confusing my power supply :-)

Nasta




Re: [ql-users] Surplus QL ED drives

2002-02-04 Thread ZN

On 2/4/02 at 12:37 PM Phoebus R. Dokos wrote:

 Hi all, PC Surplus online has a HUGE stock of Mitsubishi ED
 drives for $4.00 (Yes 4!)

 Well, yes, there's just one problem: they are from IBM

Did you call them? On the website it does say Mitsubishi (And
that's the one I used to have)

I did better than that, I have two of the drives right here. No ED disks
though, but I would love to get one. It's the simplest way to do 2.88M CD
boot images :-)

To answer a few questions posed on the list:
1) The drive is a Mitsubishi MF-356F-822MB.
2) It's also black and has the unusually long blue eject button (C) IBM.
3) It has a regular 34 pin IDC connector for the controller and a regular
4-pin power connector (although they only use +5V power).
4) It has no obvious jumpers, though there are a few surface mount pads
that are unpopulated but not obviously used as solder jumpers (phleeze,
don't you think I'd know about these :-))) ), even when you open the case.
There is a four pin block labelled 'TPA', which usually stands for 'Test
Point Array'; it has no jumpers and is not accessible without opening the
case.
5) The drive is VERY well made, not like today's 'recycled scrap metal'
ones, even though it's mad ein Thailand :-)

Nasta

PS
ED drives will work just fine with DD or HD media provided the hardware
drives the appropriate density select pins on the drive, or the drive can
be set to 'divine' what density it's supposed to use based on the signal
it's fed. Often one of these requirements is not satisfied, and as a result
the drive will either only read correctly from and write to what it
defaults to in hardware (jumpers), or it will only read correctly and write
to anything that was formatted in the same drive.





Re: [ql-users] Surplus QL ED drives

2002-02-04 Thread ZN

On 2/4/02 at 10:27 PM Tony Firshman wrote:

On  Mon, 4 Feb 2002 at 17:19:30,  ZN wrote:

5) The drive is VERY well made, not like today's 'recycled scrap metal'
ones, even though it's mad ein Thailand :-)
That is nice---^

One mad German speaking underpaid worker?

No, it's the arthritic fingers :-) but that's old talk again :-)
Just kidding - it's typical on sticky keyboards to get the rollover wrong
and you get transposed letters, as in oTny or Nasat :-)

Nasat.. soryr, Nasta :-




Re: [ql-users] Launch of new web site

2002-02-02 Thread ZN

On 2/2/02 at 5:14 PM Marcel Kilgus wrote:

 Yes - I have seen that reaction too.
 Mind you, you are getting older now (8-)#

Did you have to say that? :)

Of course he did, he's even older :-)
(and before anyone states the obvious, so am I - but I'm still younger than
Tony :- )

Nasta




Re: [ql-users] Launch of new web site

2002-02-02 Thread ZN

On 2/2/02 at 9:20 PM Tony Firshman wrote:

 Mind you, you are getting older now (8-)#

Did you have to say that? :)

Of course he did, he's even older :-)

By a factor of more than two I am sure (8-(#

(and before anyone states the obvious, so am I - but I'm
 still younger than Tony :- )

Give yourself a few more years and you will all catch up

Oh, no, I'll always be younger :-)

 I feel younger right now than I did when I was in my thirties (8-)#
 Age is an attitude of mind, as long as the hardware does not crash.

Amen to that!

Nasta




Re: [ql-users] CF adapters QubIDE, Lexar CF Cards and superHermes

2002-01-25 Thread ZN

On 1/25/02 at 10:16 PM Tony Firshman wrote:

 If this is found to be True then to the previous score add:
 Nasta 1, TF 1, Lau 1, ATA protocol 0 (Funny football game isn't
 it?)

... and Phil Borman 2.

Actually, it would be very helpful if we could somehow contact Phil and get
him to release Qubide 2.0 as open source, or at least those parts of the
source that need to be modified for cases such as this. Some startup
routines in the Qubide should be modified to conform better to the revised
ATA specs.

Nasta




Re: [ql-users] Another floating idea...

2002-01-16 Thread ZN

On 1/16/02 at 3:51 PM Dave wrote:

On Wed, 16 Jan 2002, ZN wrote:

 I KNOW! Unfortunately I also MUST do other things to a) eat b) not be
 tossed out of the country c) remain sane

Lots of people can help with a) and b), but it's too late for c) ;P

As for a) and b) after two years of added experience, once again I am
forced to conclude that the key to getting help is to help yourself. It
works for c) too, but it needs rephrasing:

c) slow down further deterioration of sanity (otherwise I'll end up being
the tri-state guy who can answer simple questions with yes, no, and ga-ga,
the latter accompanied by drooling)

Does that sound any better? :-)

Nasta





Re: [ql-users] Another floating idea...

2002-01-15 Thread ZN

On 1/16/02 at 12:29 AM Phoebus R. Dokos wrote:

Just one thing, the IDE (w/ CF on) is already designed and practically 
ready (as released sometime ago by Nasta) (called the QubIDE II) Maybe

you should work with him on the subject ;-)

Way ahead of you there :-)

How about a Sound card instead of the IDE (I like the EtherQL idea a
lot... go on with it)?

Very difficult on regular QL hardware due to data transfer requirements,
and most importantly, interrupts. This is why it appears on the GF instead.

Even better how about designing an Q40/60 ISA to GF bus adapter? So any 
potential cards for the GF could be used by the Q60 (maybe even the second

processor if that is possible or the graphics part of the card)

Unlikely. Both connect directly to the 68060 bus, NOT to the expansion bus
- and it MUST be a 68060, at full speed.

IIRC there was an QL bus to IDE adapter designed sometime (a long time)
ago for use with ST-506 MFM 8 bit ISA HDD controllers for the QL

I think you meant QL bus to ISA. It was not a general ISA adapter, only one
that would work with an 8-bit MFM or RLL controller.

Nasta




Re: [ql-users] Future of QL - Part: ERROR, arithmetic overflow !

2002-01-13 Thread ZN

On 1/13/02 at 2:26 PM Marcel Kilgus wrote:

Thierry Godefroy wrote: 
 Another way to use them wisely, is to limit the amount of total caching
 memory they can use (thus also limiting the amount of time needed to
 search among all existing slave blocks for your data...). IIRC Marcel
 uses this trick to speed things up under QPC.

I tried (by exploiting the Atari kernel whose fast memory support
already prohibits slave blocks in fast memory), but couldn't get it to
work. Not much joy in debugging there, so I trashed it.

Marcel,
if I recall correctly, the size of the slave block table and therefore the
number of slave blocks is established at SMSQDOS init time after the amount
of free RAM is determined. Unfortunately, the neurons that held
information on how the actual SBT search is performed (what establishes the
end of the table) have been reused :-), but wouldn't it be possible to
manipulate the maximum number of slave blocks by manipulating the apparent
max RAM found when the table is first created?
I know that with the way the table works, limiting its size also limits
the position of the block of RAM that can be used as slave blocks. IIRC,
they would be restricted to low addresses, which would then coincide with
the bottom of the common heap, so many allocations there would eventually
completely prevent slaving, and IIRC this is a very undesirable condition
because some drivers cannot cope.
But maybe the common heap start could be moved up. If nothing else by
making a false allocation and then linking over its entry so that it's not
seen any more and cannot be considered for de-allocating, but then changes
in the slave block table would declare this 'free' and available for
slaving. This could also be done at init time, making the trick invisible
to anything that happens after that.
The bottom of the common heap and the top of the chosen slave block area
size could also overlap somewhat, which in effect would mean that there
would be a guaranteed minimum of slave blocks available that would never be
turned into anything else by resource allocation.
With memory sizes what they are today, we could also just dedicate an
amount of memory for slaving only, no need for any overlaps. Overlaps would
only happen if the actual RAM was smaller or equal to the maximum slave
block limit chosen. This would make it much simpler to try out stuff in
environments with a lot of RAM (a lot in SMSQDOS terms). I guess that way
on QPC with enough RAM in the PC, we could finally see if programs can cope
with 512M :-) (which seems to be the maximum...)
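A back-of-the-envelope sketch of the limiting idea above (the 512-byte block size and the min() style of sizing are my assumptions for illustration, not taken from the SMSQ/E sources):

```python
BLOCK = 512  # assumed slave block size (sector-sized)

def max_slave_blocks(actual_ram, chosen_limit):
    # Sizing the table from min(actual RAM, chosen limit) at init time
    # caps the slave block count no matter how much RAM is present.
    return min(actual_ram, chosen_limit) // BLOCK

# 64 MB of RAM but a 16 MB slaving limit:
print(max_slave_blocks(64 << 20, 16 << 20))  # 32768 blocks
# 8 MB of RAM, same limit - here the real RAM is the constraint:
print(max_slave_blocks(8 << 20, 16 << 20))   # 16384 blocks
```

With plenty of RAM the limit dominates, which is exactly the "dedicate an amount of memory for slaving only" case; with little RAM the two areas would have to overlap as described.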

Nasta




Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-11 Thread ZN

On 1/11/02 at 3:08 PM Tony Firshman wrote:

On Fri, 11 Jan 2002, Tony Firshman wrote:

 Arvid Borretsen (spelling probably worng) even suggested
 that one might make pcbs on high quality gloss paper with
 an inkjet printer. He even gave me some metal impregnated
 ink, but I never got round to trying it.

Well, believe it or not, there is a PCB manufacturing process that
essentially does exactly that, called HASP. It is used for selective
plating, fancy non-restricted buried via boards, as well as for some
flexible laminates. Essentially, conductive ink is applied onto a substrate
by a plotter (pen or ink-jet), and the tracks are then 'beefed up' to
standard thickness using electroplating. Many layers are possible and there
are no restrictions on even or odd number. The cost, of course, is
astronomical.

Ahhh, memories.

Nostalgia isn't what it used to be.

That's only because the brain cells don't work as well as they used to :-)
(Translation: Yes, you ARE old! :-) ).

Nasta




RE: Re: [ql-users] Making PCBs (was Q40/Q60/??? Ultra IO card)

2002-01-11 Thread ZN

On 1/11/02 at 12:44 PM [EMAIL PROTECTED] wrote:

What equipment do you use to apply the etch-resist pattern to the 
boards, etch them, and then bond them together?
Is it all very expensive for a hobbyist to produce one-off boards for 
experiments  prototypes?

The real problem is the thru-hole plating, the chemistry is 'interesting'
not to mention expensive. Apart from that, single or two layer non-plated
can be made quite cheaply and VERY precisely.
I used to do that a lot - and would still be doing it if only I could find
a source of highly concentrated hydrogen peroxide and not immediately be
branded suspicious with terrorist overtones.
You can get blank boards that have the photo-resist pre-applied. Film can
be produced using drafting transparency and a laser printer. For very fine
geometry boards 600DPI is a must and you have to experiment because passing
the paper through the printer changes the dimensions. Fortunately, most
PCB design programs have the capability to enter correction factors. I have
done hundreds of boards this way, some with tracks and spacings down to
8/1000 - more than good enough for even the smallest SMD components.
Laminating could be a problem, but it is not insurmountable, especially for
small boards. It is DIRTY though :-(
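The correction factor mentioned above is just a proportion; a quick sketch (the test-bar numbers are made up for illustration):

```python
def correction_factor(nominal_mm, measured_mm):
    # Scale factor to enter into the PCB program so that artwork printed
    # on film that shrinks or stretches in the printer comes out true size.
    return nominal_mm / measured_mm

# e.g. a 200 mm calibration bar measures 199.6 mm after a pass through
# the printer, so the artwork must be printed slightly oversize:
factor = correction_factor(200.0, 199.6)
print(round(factor, 4))  # 1.002
```

Print a long bar in each axis, measure, and enter the two factors separately, since paper distortion is usually different along and across the feed direction.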

Nasta




Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-11 Thread ZN

On 1/11/02 at 12:23 PM Tony Firshman wrote:

Last time I said this to Nasta, he said he was working on an 1800 x
something screen - (8-)#

I've learned two things early on:

1) Almost any kind of CAD requires all the pixels you can possibly get.
2) One gets only one pair of eyes. Invest in thy Trinitron and thou shalt
not need glasses.

I'm actually using 2048x1536 on two 20" monitors (cheap - they both came
from auctions), when in PCB design mode, and 'only' 1600x1200 otherwise :-)
I know, I'm terrible.

I did my first 4 layer with Mplane, mainly for an inner 'ground plane'
and a power layer - ie only etch away around vias (layer connections)
and holes.  Often the ground plane is there simply to reduce noise, but
it makes it easy to provide the necessary higher current capacity and
avoid voltage drops etc.

Actually, with any even moderately high frequency design, a ground plane is
an absolute must. Or rather, ONE plane (either power or ground) is an
absolute must.

The reason is not obvious. At high frequencies, 'resistance' to DC current
is the least problem. Current always passes in some sort of a loop, and
it's the area described by this loop that is the problem. The inductance of
a loop grows with the area it encloses. An inductor acts as a
resistor with resistance proportional to inductance times frequency. When
you draw out where the signal actually goes, you may be surprised that the
size of the loop is so large that the inductance prevents any usable signal
passing through the lines at the frequency required - even if the DC
resistance is negligible - and that is really the least of the problems.
Overlapping loops act as transformer windings so signals in one loop induce
currents in others, and vice versa.
A ground plane uses the 'path of minimum resistance' rule. Since it is the
return path for all signals, and a 'plane', extending under all the lines,
the return signal will not just go 'anywhere' on the ground plane, but
rather exactly under the actual lines because this path forms a loop of
minimum area and therefore minimum inductance, and therefore the least
resistance at a high frequency.
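To put a number on that "least resistance at a high frequency" argument, the usual figure of merit is inductive reactance, X_L = 2*pi*f*L (the loop inductance value below is illustrative, not taken from any specific board):

```python
import math

def reactance_ohms(inductance_h, freq_hz):
    # Inductive reactance X_L = 2*pi*f*L: the opposition a current loop
    # presents at frequency f, however negligible its DC resistance.
    return 2 * math.pi * freq_hz * inductance_h

# a modest 100 nH return loop at 100 MHz:
print(round(reactance_ohms(100e-9, 100e6), 1))  # 62.8 ohms
# the same loop at 1 kHz is effectively a short circuit:
print(round(reactance_ohms(100e-9, 1e3), 6))    # 0.000628 ohms
```

Tens of ohms in series with a fast logic signal is more than enough to wreck its edges, which is why the minimum-area return path under the track matters so much.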

However I found I needed to use the +5V layer for some signal lines as
there was simply no room on outer layers.

Oh, with proper attention to the ground plane, you could use nearly all of
the power layer for signal tracks. It's perfectly legal - if you know what
you are doing :-)

Nasta




Re: [ql-users] Making PCBs (was Q40/Q60/??? Ultra IO card)

2002-01-11 Thread ZN

On 1/11/02 at 6:59 PM Dexter wrote:

 I know Stuart Honeyball produced some commercial boards but printing
 direct onto special film.
 It apparently worked very well, but there is loss of quality.  No chance
 of .5mm romdisq type pitches by this method.

The loss of quality is because you need to reverse the image in the X and
Y planes.

Actually, you need to do this even on large geometry PCBs because the etch
quality is better. Ultimately, there is diffraction along the edges in the
photo resist itself, which is the biggest limit to the track width.

However, there is a method that uses transfer paper and actually transfers
the toner directly onto the copper, usually it involves an iron. This would
be much better if one could have a hot roller, for instance out of an old
photocopier. Unfortunately, it is the uniformity of toner deposition,
entirely a matter of the printer used, that limits the quality achieved
with this method.

Nasta




Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-11 Thread ZN

On 1/11/02 at 6:23 PM Dexter wrote:

 Ahhh, memories.

 Nostalgia isn't what it used to be.

 That's only because the brain cells don't work as well as they used to
:-)
 (Translation: Yes, you ARE old! :-) ).

And you say this today, Jan 11 2002. My birthday. ;)
*mock scowl*

Happy birthday... and I never said I wasn't old (though not as old as some,
but older than others... stuff along the lines of: Confucius say: Flush
chain still swinging, toilet seat still warm :-)

Dave






Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-10 Thread ZN

On 1/10/02 at 9:49 PM Dexter wrote:

On Thu, 10 Jan 2002, Peter Graf wrote:

 PGA. QFP seems obsolete, so only PGA and BGA are interesting. BGA is
very
 small, but only useful for volume production.

I'm well aware of the thermal advantages of PGA over QFP. The pins move
heat away to the PCB much more efficiently. I don't like BGA for two
reasons: prototyping is a b*tch, and the actual cost to me was $85 per
mount for the small run I did. That's why I so much dislike the dropping
of the SA-1100 (QFP) in favor of the SA-1110 (mBGA).

It is possible, even in amateur situations, but you need to design the PCB
for it, and it is NOT the most efficient design, effectively, you have to
put a through hole under every ball, then heat from the back. This makes
the use of smaller pitched BGA very difficult.

If you have any easier projects I could cut my teeth on, I'd be happy. I'm
not anything like the level of Nasta and yourself. I'm just well equipped.
:o)

See the ideas about a Q40/60 specific IO card... :-)

Nasta





Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-10 Thread ZN

On 1/10/02 at 9:19 PM Peter Graf wrote:

Hi Nasta,

Two more plus a CF socket adds hot-swappable CF card capability...
...PC87C307, an AD1816, a 91C96 (or possibly 91C111?). That more or less
covers all the necessary ISA components.

Would it be worth considering to make a Q40/Q60 extension card with the IO
components you have in mind for XY-Fire (plus one or two goodies)?
The card wouldn't have much size constraints, and almost no high-frequency
challenges, so placement and routing shouldn't be too tricky. The card
could be a playground to get familiar with the chips and write the
software.

Certainly it is worth considering - if you look back at some older
messages, you will find I already suggested something like it.
It really depends on the user interest, but I think it would be a good
project that could ultimately benefit all platforms, though at first, of
course, the Q40 and GF (or should I now say XY-F :-) ).

I myself would prefer to be the consulting party on such a design, offering
the existing implementation on the GF and anything I already designed as a
start - and then hoping to share experiences as the project goes along,
while concentrating on GF.

As for the design and logistics...
If the parts were procured with both Q40/60 IO card and GF in mind, it
would make both cheaper as the quantities would be higher. While some
functions would not appear on the GF (such as IDE), they could be fully
implemented on the IO card.
Driver-wise, with some attention to detail, most of the hardware on this new
IO card that duplicates the hardware on a regular IO card would instantly
be usable by existing Q40/60 drivers. On the GF the addresses of some
registers would have to change, as well as the interrupt handling for some
devices.
The ethernet would most probably be usable 'almost instantly' under Q40/60
Linux, after register addresses are modified. Possibly even extended sound
could run 'almost instantly' under Linux if there are Linux drivers for an
AD1815/1816 chip, and it would not be a great problem to implement under
SMSQ/E for Q40/60, even with no changes to interrupt structure - and it
would be compatible to  because the audio chip can be configured to use
the required sample rate and word width.
Extras such as an integrated CF card slot would be VERY easy. Two IDE
channels also very easy (not so easy to find on ISA...). MIDI is already
supported by the sound chip (although on the GF external line
drivers/receivers are needed) and there are even extras such as I2S (should
anyone want SPDIF digital IO) and a DSP port. The PC87307 also has IRDA.
The card could be designed to use Q40/60 ISA environment (i.e. no DMA)
which is exactly the way GF uses the chips too.

All in all, there are 3 problems:
1) Someone to implement it (design is largely done already - just
copy/modify the GF circuits)
2) Cost justification - presumably from the extras on it
3) Where to put all the connectors for the multitude of features :-)

 Any chance to get USB (for digital still camera for instance, as even
 fast RS232 is very slow) ?

 Our QDOS/SMSQ software development shortage practically prohibits to
 work on USB host hardware. And even the hardware is a bit difficult
 because of the lack of suitable USB *host* chips.

Completely agreed there. Although, it would actually be easier making the
Q40/60 a USB _peripheral_ to something else, the question being if that's
of any use. Certainly it would be on a SBC, to upload/download programs.

Also, the phrase 'suitable USB host chips' makes things sound better than
they are. The only way to get real USB would be to introduce a PCI bridge,
then hang one of the rapidly becoming obsolete USB adapter chips on it.
That would make USB a (huge problem)^2 - requiring the solution of PCI
_and_ USB support. Very unlikely. 

Oh yes - MOT, in their infinite 'wisdom' have ONE ColdFire chip with
integrated USB host - and it's the least compatible (read: useless) kind
:-(
When I looked at TT's ATAB, if that ColdFire were 68k compatible, it would
have been the IDEAL chip for the job :-

 I already have a *fast* hardware solution for digital cameras and
 Q40/Q60 (Much faster than USB!) Which is a CompactFlash adaptor.

Agreed - every camera I have seen (and even some very high-end ones) use
1.2M USB and with CCD sizes going into the 4M territory, and CF card sizes
increasing with prices falling, it takes forever to transfer the pictures.
OTOH reading directly from a CF is (almost) as fast as an IDE hard drive.
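Rough arithmetic makes the gap obvious (the rates below are assumptions for illustration: full-speed USB at 12 Mbit/s, CF read as IDE PIO mode 0 at ~3.3 MB/s, both derated for protocol overhead):

```python
def transfer_seconds(size_mb, rate_mb_s, efficiency=0.7):
    # Wall-clock estimate, derated to 70% effective throughput
    # for protocol overhead (an assumed figure, not measured).
    return size_mb / (rate_mb_s * efficiency)

# Emptying a 64 MB CF card:
usb = transfer_seconds(64, 1.5)   # full-speed USB: 12 Mbit/s = 1.5 MB/s
ide = transfer_seconds(64, 3.3)   # CF read as IDE PIO mode 0: ~3.3 MB/s
print(round(usb), round(ide))     # roughly 61 vs 28 seconds
```

And PIO mode 0 is the worst case for the IDE side; faster PIO modes widen the gap further still.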

 Yes, except for the interrupt system which is very different. Depending
 on the required speed of the peripheral they are mapped to three
interrupt
 levels. The highest levels also have the shortest service routines, and
 service hardware that has FIFO buffering built-in.

 I also thought about it, but went back to the QL way of one level for
all,
 except sound. Mainly because I wasn't sure about the need for software
 changes.

Actually, the changes themselves are not 

Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-10 Thread ZN

On 1/11/02 at 1:28 AM Dexter wrote:

 If you have any easier projects I could cut my teeth on,
 I'd be happy. I'm not anything like the level of Nasta
 and yourself. I'm just well equipped.

 See the ideas about a Q40/60 speciffic IO card... :-)

I see them. :o)
Ok, if I can get something together and have tech/design support from the
Q60 and Goldfire teams, I can do this.
I could do a straight simple ISA implementation for the Q60, and a
schematically similar mezz card for the Goldfire which has maximum
shared components.

No need for the mezzanine card, the required chips are found on the GF, in
fact, even that part of the PCB has been designed! Normally I would not be
against such a board, but as you know, the GF is intended to be a
semi-SBC.

 I have Eagle 4 Professional, which is half the tools for the job.

Half? Actually, that would be all the tools :-) Oh... you mean an
auto-router? I don't really believe in them :-) Well... actually, I do, but
I simply can't afford one that could do a board better than I can - and
ultimately, in the same time! For instance: GF IO part of the PCB requires
6 layers to route (I won't even tell you the number of vias it generates
:-((( ), hand routing takes the same time, and needs 4 layers and about
0.1x the number of vias. It took the same only because I had to rip up most
of what the autorouter did in order to get 100% routed and redo it by hand
- the autorouter stuff was mostly in the way!

 Does anyone think it might be appropriate to have a separate list
 for ql-developers to separate the traffic from ql-users, or are
 you happy for these kind of posts filling your list? :o)

He he - and there is one. I thought I was on it too. Unfortunately, I lost
the subscription info... help, anyone? I could also cross-subscribe
QLhardware, so it automatically keeps a message archive.

Nasta




Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-10 Thread ZN

On 1/11/02 at 4:17 AM Dexter wrote:

On Thu, 10 Jan 2002, ZN wrote:

 No need for the mezzanine card, the required chips are found on the GF,
 in fact, even that part of the PCB has been designed! Normally I would
 not be againgst such a board, but as you know, the GF is intended to be
a
 semi-SBC.

So all that needs doing is to get that section as a schematic and do
the board. :o)

More or less... well, more. The GF does not have IDE, so a few additional
bits will be needed. Also, I think I am using the IDE decode of the chip
for something else, I haven't had a look at that part of the schematic in a
while :-) but I think that 'else' is the processor and interrupt controller
which will not be needed on the IO card version so there should not be any
problem.

The schematic WILL have to be modified, and several decisions will have to
be made:

1) Will this thing ever be plugged into the real ISA (IMHO, hopefully not!
The PC87307 is made for motherboard apps, although they do have an ISA card
implementation, useful to look into for this project). This question can
be reformulated as: do we, and to what extent, have the ability to change
the existing code to simplify the design or add new features? What needs to
be ABSOLUTELY compatible with the existing card? One problem: PC87307 is a
PnP device (fortunately, vastly simplified because it's intended for a
motherboard). I am not sure but it may be possible to circumvent outright
software PnP (although I have described the process in detail in the GF
docs) by using a serial EEPROM. I have not looked in depth into this
because the GF requires new initialisation code to be written anyway, so
PnP can be part of that (and this also saves the need for an EEPROM).

2) How far does the design move from the GF implementation to an ISA (even
if only Q40/60 ISA) implementation. This impacts interrupt line routing and
base address selection (which in turn impacts the PnP initialization
values), 8/16 bit bus width selection for ethernet (probably this will be
16 bit), local interrupt routing between UltraIO, Sound, Ethernet and
whatever else gets on the board. All of this also impacts Q40/60 linux
drivers if it deviates too much from regular ISA cards, but then, knowing
the Q40/60 implementation of ISA and designing speciffically for it (see 1
above) might make things easier both for Q40 and GF, even though some
existing Q40/60 Linux drivers may need slight modifications (It seems to me
the latter should not be a problem, but I'm not involved with any aspect of
Q40/60 Linux so I simply can't be sure).

3) What additional stuff we want on this board - IDE, drivers for MIDI, CF
card...

4) Existing design decisions and peculiarities of implementation.
Example: the sound chip wants to be Windows Sound System compatible, along
with a 'native' AD1816 mode. On GF the WSS is ignored because the native
mode is both simpler and more flexible, and unlike WSS has access to ALL
the chip's considerable features (since we have to pay for them once we buy
the chip, we might as well use them!).

5) Feedback to original GF design - decisions made in the design of the IO
board might override some of the ones made for the design of GF IO (a
'let's meet in the middle' effort) if it simplifies matters with making the
drivers more uniform across platforms. I don't have a lot of leeway, but I
can certainly try to do what I can!
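On point 1, the 'software PnP' of a motherboard-style Super I/O chip boils down to a short index/data register dance at init time. A simulated sketch follows; the register numbers use the generic ISA PnP Super I/O convention (0x07 device select, 0x30 activate, 0x60/0x61 base address) and would need checking against the PC87307 datasheet:

```python
LDN_SELECT = 0x07  # logical device number select (generic convention)
ACTIVATE   = 0x30  # device enable
BASE_HI    = 0x60  # I/O base address, high byte
BASE_LO    = 0x61  # I/O base address, low byte

class SuperIO:
    """Simulated config space; on real hardware each write() would be
    an outb to the index port followed by an outb to the data port."""
    def __init__(self):
        self.ldn = 0
        self.regs = {}           # (logical device, index) -> value

    def write(self, index, value):
        if index == LDN_SELECT:  # global register: switches device banks
            self.ldn = value
        else:
            self.regs[(self.ldn, index)] = value & 0xFF

def enable_device(sio, ldn, io_base):
    # Select the logical device, program its base address, switch it on.
    sio.write(LDN_SELECT, ldn)
    sio.write(BASE_HI, io_base >> 8)
    sio.write(BASE_LO, io_base & 0xFF)
    sio.write(ACTIVATE, 0x01)

sio = SuperIO()
enable_device(sio, ldn=6, io_base=0x3F8)  # e.g. a UART at 0x3F8
print(hex(sio.regs[(6, BASE_HI)]), hex(sio.regs[(6, BASE_LO)]))
```

A serial EEPROM would replay much this same sequence in hardware at reset; doing it in the init code instead, as suggested above, saves the part.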

 I have Eagle 4 Professional, which is half the tools for the job.
 ...It does pretty good with 4-layers. But then, it is a $1200 program.

Care to take up the challenge of making it a two-layer board? It would
certainly make it MUCH cheaper... However, 4 layers make it MUCH easier to
route, and may also make the whole thing smaller, which would certainly be
of interest to some people :-)

The GF board is certainly going to be the most peculiar one I have ever
designed. With the surface mount chips I am using, the largest portion of
their pins are either power, ground or not connected. I have calculated
that the regular setup where power and ground layers are in the middle
would therefore require more vias than the not so often seen version with
ground and power on the outside. The vias and other thru-holes take a huge
amount of space - one via is larger than a CPLD pin, for instance, even for
the smallest still financially viable via size. Also, with small pitch
surface mount chips, the spaces between pins are too small to pass a line
between them, so effectively the relatively large area taken by the chips
is 'in the way' of the tracks. This is why the overwhelming number of
tracks is routed in the middle two layers. Almost all surface mount
passives are on the bottom layer, which is also the power layer, pads for
the components are 'islands' in the power plane. The top layer is ground,
and it's also used as a cooling surface, since it's a large exposed copper
plane. Saves costs on power supply heatsinks too :-)

Could you see it in your heart to get that specific

Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-09 Thread ZN

On 1/9/02 at 11:22 PM Dexter wrote:

 If it had MultiIO/Ethernet as well, it seems to me that such a card
would
 be a very good preparation for your XY-Fire!

This has me thinking. In what ways does the Q60 ISA implementation differ
from the PC implementation? The CS8900 is a nice general purpose ethernet
(10baseT) that looks like minimal fuss and sits happily on an ISA buss.

I was thinking about a SMSC 91C96 - works with 8 and 16 bit busses, has
more buffer memory, and most important: it's 99.9% software compatible with
the 10/100 non-PCI 91C111 (the only non-PCI 'all-in-one' 10/100 I know of)
- considering how difficult it is to get the drivers, at least we can avoid
doing it more than once. Oh, yes, and there are Linux drivers for both.
The only down side of the 91C96 is that it isn't easy to get it in the tiny
100 TQFP package (you have to buy a lot of them), whereas the CS8900 comes
as a TQFP (yes, I considered it - and the RTL 8019 - which has the unique
distinction of having the most ridiculous minimum quantity requirement ever
- 5! - and can only use half of its 16K buffer when connected to an
8-bit bus).
Of course, on the Q40/60 the ISA is implemented as 16-bit so this appears
not to be a consideration. On the GoldFire it's an 8-bit implementation to
ease routing. The SMSC chips are nice about this because they can be made
to look exactly the same regardless of bus width. Not that I am doing a
plug here, but as I said, maybe we can avoid doing drivers more than
once...

 All drivers could be written on the Q40/Q60 and would immediately work
on
 XY-Fire, by just changing the register locations.

How do people actually use their machines these days? What's in demand?

From my experience, there are no two QLs (of any sort!) that are alike -
even in this respect!

Nasta





Re: [ql-users] Q40/Q60/??? Ultra IO card

2002-01-09 Thread ZN

On 1/10/02 at 4:53 AM Dexter wrote:

On Wed, 9 Jan 2002, ZN wrote:

 I was thinking about a SMSC 91C96 - works with 8 and 16 bit busses, has
 more buffer memory, and most important: it's 99.9% software compatible
 with the 10/100 non-PCI 91C111 (the only non-PCI 'all-in-one' 10/100 I
 know of)

I've used the SMC91C94, and I've checked out the spec sheets but can't
pinpoint any vital differences.

I can't really tell you off the top of my head, but I really did an
in-depth analysis. I was really looking at the 91C96 which is basically a
91C94 with 2K more buffer and full duplex capability. The larger buffer and
the way the 91C96 more or less manages it automatically is a great bonus. I
looked at something capable of holding two received and one ready to
transmit ethernet frame. A large buffer is quite handy for something like
SMSQ/E because it means fewer interrupts generated for a given data flow.
Although SMSQ/E is very good with interrupt overhead, with high data rates,
the overhead can happen on a lower granularity level - waiting for the slow
IO device to respond to a bus cycle, for instance.
I do have a question: where did you get 91C94 samples and at which price?
I'm not even going to write the quotes I got for a few samples...
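The buffer-versus-interrupt arithmetic above is easy to make concrete (wire speed and maximum Ethernet frame size only; real traffic is burstier, which favours the larger buffer even more):

```python
def interrupts_per_second(rate_mbit, frame_bytes=1514, frames_per_irq=1):
    # Worst-case receive interrupt rate at wire speed; a buffer that can
    # hold several frames lets the driver service a batch per interrupt.
    frames = (rate_mbit * 1_000_000 / 8) / frame_bytes
    return frames / frames_per_irq

print(round(interrupts_per_second(10)))                    # ~826/s at 10 Mbit/s
print(round(interrupts_per_second(10, frames_per_irq=2)))  # ~413/s, 2-frame batches
```

Small frames make it far worse - at 64 bytes the 10 Mbit rate climbs into the tens of thousands of interrupts per second - which is where the per-cycle IO wait overhead mentioned above starts to dominate.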

 If they're very similar, an FDC37C655 could provide parallel, serialx2,
 IDEx2, irda, with a couple of MAX232's, four 74HCT245's and a 74HCT245.
 But, that would cost way more than the multifunction cards they're
 already buying... ;)

Look at the 87C307 from national semi. Although it implements stuff that is
certainly not required on a Q40/60, I'm using that on the GF because:
1) It also has a RTC and CMOS RAM, and PS/2 keyboard and mouse
2) It is cheaper than competing products. I got samples from NS for $7.50 a
piece, and I've had it quoted for as low as $4 for 100.

Oh, yes - fruits of experience: DO NOT use HCT anything for IDE unless you
intend to:
a) Have extremely short cables to the drive
b) include termination resistors
The better alternative would be something like a 74LVHCR16245, or its 5V
equivalent. It's one chip, comes in SOP case (extremely small) and does all
16 bits, AND has series resistive termination built in!
Two more plus a CF socket adds hot-swappable CF card capability...

 On the GoldFire it's an 8-bit implementation to ease routing.

One thing I always ask, and ask too late... Please provide a compact
alternative to the ISA socket, because the length of the connector forces
a practical minimum size on any board designer, and size costs...

No ISA or anything like it on the GF, except locally. It includes a
PC87C307, an AD1816, a 91C96 (or possibly 91C111?). That more or less
covers all the necessary ISA components. It essentially uses the same lines
as the QL bus but has specially generated write and read signals to be ISA
compatible.
The reason for this is that the connectors take up the largest percentage
of board area! Along with a bunch of peripheral connectors, there is a DIN
64 like on the QL, and a micro-cinch 100 pin mezzanine for Aurora II. Most
passive components are on the solder side of the PCB and the CPLD is hiding
under the SODIMM - all on a 100x160mm PCB... need I say more :-)

Also, I noticed the 68060's are available in a variety of packages now.
What's the preference?

The one that costs the least - and have you looked at the prices? Just for
fun I had avnet quote me a 68LC060, when the person on the other end said
$355 in quantity, I laughed my head off, she must have thought it was a
prank call... At one time I was also looking at a 68040V, only to find it
was about $260 a piece...

Nasta




Re: [ql-users] Future of QL - Part 1E120 (I had to increment the it ! ;-)

2002-01-08 Thread ZN

On 1/8/02 at 2:11 PM Thierry Godefroy wrote:

 Actually, the graphic cards device drivers are probably the most
 complicated and sophisticated ones (especially when you take all
 ancillary extensions into account, such as the pointer environment)...
 I for one would hesitate to write such a driver from scratch (there
 the sources availibilty of existing drivers is more than a critical
 prerequisite) !  I think that the only one who is able to write such
 drivers is... TT himself (with the QXL or Q40 SMSQ/E sources, I guess
 this could be implemented in a few days...).

I can only agree on this, but then it should be obvious that the way the
drivers and the ancillary parts connect should be at least better
documented, or even partly rewritten for better 'separation' - in fact the
same should be true for the drivers themselves. It makes changes and
rewrites much easier. OTOH, radically new graphics hardware only comes
along relatively rarely, I think that GD2 probably achieves most of the
above - but things like Phoebus' mouse question show that the available
documentation is inadequate. 

BTW one of the reasons Aurora stopped at 256 colors (the biggest one being
the available screen memory size) was that at the time there was no way to
pass more than 8 bits for color information in any of the commonly used
routines, and investigations by Phill Borman had shown that the 16 bit
color specification code was not really there.

Mind you, I did download all the specs of the GF a while ago (you even
sent
me some) and I did read _some_ but, here again, time is the limiting
factor
;-(

There are very few people who have the knowledge and will to understand the
aspects of the hardware as required for programming low-level stuff. You
are one (out of three or so) exceptions :-)

I must confess, that I do like a lot the onboard Ethernet concept... :-)

Actually, this has proven to be a problem - not because of the design which
is very simple, but because of the rather unreasonable pricing for the
chip. OTOH there are alternatives, which are more reasonably priced, but
then you have to buy them in increments of 1000, which pretty much makes
the price difference irrelevant.
SMSC (the manufacturer of the chip I chose for the design) now has a 10/100
non-PCI chip, which is even simpler to use. The 'toast' board referenced on
this list has shown me a way to save some more (precious!) space on the
board by using RJ45 jacks with integrated magnetics - Ethernet requires
electrical isolation which is normally accomplished by a (relatively hard to
find!) small transformer (looks like a slightly larger chip) - one more
problem component, which is not that expensive, but difficult to find in
small quantity. Redesigning the 'tile' of the GF PCB containing the
ethernet to use the new 10/100 chip is not a problem at all, but one has to
wonder about using a 10/100 chip connected to an 8-bit data bus. The later
is a design decision to do with actually making the board routing possible.
GF does alow extremely fast cycles over the 8-bit bus, though, so it might
be worthwhile - if the new chip is priced reasonably.

Yes, I think I can remember a few emails we exchanged about the GF
design and interrupts management...  ;-)

Yes, and it was more or less the consensus that it should be implemented
the way I designed it, and have the software cope... which may be a
problem, as history proves.

 Well, there's the GF stuff - feel free to open fire.

 But as much as I can understand, the actual specs are quite a bit
 outdated (ColdFire based, while the processor of your choice is now a
 68EC060 IIRC), aren't they ?  ;-)

Actually, from the standpoint of the software designer, very little has
changed. Except for the differences between 68EC040/5102 and 68EC060 (cache
size, no MOVEP, etc) almost everything remains the same. Some interrupt
lines may get removed as it became obvious less interrupt routability is
needed, but the basic structure remains. The sound chip will be an AD1816
and there is no more provision for AD1815 - but both look exactly the same
to the software so no changes. Even if I decide to use the 10/100 chip for
ethernet, that is 99.9% register compatible with the previously used one
(of course, 100Mbit stuff was added). Some base addresses for the IO may
change but register assignments remain the same, etc.

Soon the motivation will result from the choice: PCI support or no more
cheap add-ons for the QL platforms...

I don't deal with the present situation (there are still a few ISA cards
available), but with the _future_ one: if there is no PCI-based successor
to the Q60, then what the hell will a future (say, in five years) Q60 buyer
be able to use with it ?  No ISA card = no floppy, no harddisk, no serial
port, no parallel port, no network, nothing but graphics, sound and
keyboard !

Actually, you won't be helped much by having PCI in either case as most of
these devices are integrated onto motherboards, and on the chip 

Re: [ql-users] Future of QL - The continuing Saga... Part 9.8E13 ;-)

2002-01-06 Thread ZN

On 1/6/02 at 7:32 AM Thierry Godefroy wrote:

[PCI on the QL]

 the problems are QDOS specific...
 ...Not many people around who have the thorough knowledge about the
 hardware and software to write more complicated drivers.

True, but knowledge can be acquired by anyone, the problem being the time
needed to do so... Note that most of the knowledge to acquire is NOT
QDOS/SMS specific: whatever driver you are going to implement on whatever
platform, you need to learn about protocols, standards, etc...
As an example, for the ATAPI thing and the CDROM device driver, most of
the development time was spent searching, printing, reading and
understanding the ATA, ATAPI and ISO specs (nothing less than 10Mb of PDF
files...). The actual implementation of ATA/ATAPI took about 40% of the
total time...

Thierry made several good points. Of course, many will not find them as
valid if they come from a standpoint of modifying an existing driver (and
recompiling it for a specific platform) rather than building one from
scratch. The way things are done under SMSQDOS (assuming you have divined
this 'from the source' :-) ) actually makes attempts at converting existing
drivers (unless they already come from SMSQDOS) quite unproductive.
Unfortunately, faced with extra time to either figure out an existing
driver (and possibly end up figuring the work was in vain) or reverse
engineer part of SMSQDOS code, the programmer may decide it's too much time
for too little effect.

 filesystem is hardwired into device driver.

This actually is not a great problem in itself - it's the lack of
standardized ways to interface with existing code (which may or may not be
a part of a driver) that's the real problem. As things now stand, a new
file device = yet another copy of the file system code. All the more ironic
considering that there are means to construct drivers differently, as
Thierry says.

What I really like about Thierry's approach here with his CD driver is
making good decisions and implementing them, rather than having the endless
_ or \ discussion influence it :-)

The problem is rather the lack of professional programmers that could
actually work full time on QDOS/SMS...  TT is not the only man that
can write device drivers or OS extensions for QDOS/SMS, there are still
talented programmers around, but none of them are able to spend
significant time (i.e. a few months - full time) on QDOS/SMS software
development...
Now, with some good motivation, most of these programmers will be ready
to sacrifice a significant part of their free time to do some software
development.

The equation is very different if a huge portion of that time, or even MORE
than that time, is needed to first reverse-engineer code. Doing that is far
more time consuming than actually writing working code.
In light of the comments regarding access to SMSQ/E source code, it begs
the question: what would it take to BUY the source code from Tony Tebby?
Or, some kind of licencing agreement that would effectively have the code
in the open but with someone as the 'code custodian' - i.e. it would not
be freely distributable, rather under something like a GPL.

 New hardware (INCLUDING UNSUPPORTED ONE under QDOS/SMS) is a very good
 motivation as far as I am concerned... IF a new 680x0 board with PCI
 bus is built, then I could well find enough motivation to write a bus
 initializer for it...

Which, of course would be the first 1% of the job since something has to be
plugged into it, and that's when all the fun begins :-)

The old debate should software be designed before hardware ? is a
nonsense to me: IF there is hardware I can test software on, THEN I
can write software for it.

Actually, it's a kind of chicken and egg argument. For some hardware to
actually start doing ANYTHING useful it has to be initialized. In many
cases all of this has to be done first, long before SMSQDOS loading even
enters the picture (speaking in terms of CPU time, of course), and
developing such software is VERY difficult because much of it has to be
done 'blindly' - you write the code, program it into an EPROM or some
equivalent, or download it to the hardware by another method, and hope it
works. If it does not, figuring out what is wrong can be a matter of using
some quite fancy and expensive tools and instruments. This seriously
impacts the job of a hardware designer - one would be foolish to implement
the hardware in a way that requires extensive mods to the OS and
add-on driver code, knowing that the OS and driver sources are not
available - even if such an implementation would ultimately be the better
one.
Also, assuming such initialization code was done and working, it has to
have a means to load the actual OS from wherever it's stored - frequently
from devices included in the initialization sequence. Unfortunately, the
way it's built now, this is a part of the OS, and this means there is no
easy way to modify it unless you have the source. There is a ton of 

Re: [ql-users] QL Schematic

2002-01-04 Thread ZN

On 1/4/02 at 4:22 PM Dexter wrote:

I'm still looking for a schematic of the QL PCB. Also, the 8049 details.

Actually, you do not need the 8049. All it does can be done using a
different device and handling the difference on the software side
(translation: modified OS code).

I'm thinking of prototyping a 680X0 board with the basic QL facilities,
minus the dodgy serial and net ports, plus IDE. I have a strongarm design
here which has USB and ethernet and I'm seeing if there's a way to take
advantage of any of that knowledge.

A quick question: what do you use for USB?

Also, the +5v is obvious, but if you ignore the net, serial and microdrive
interfaces, was the +/-12v used anywhere else?

Only as a bug fix for the 8302 ULA which incorporates the 8049 IPC
interface, microdrive logic, reset, half the serial ports and the network,
all of which you can forget - except for the RTC. The latter is a simple
32-bit seconds counter. If you don't use a 8302, then you don't need +-12V.
Serial port levels can easily be handled by something like a MAX232 or its
many successors and clones.
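
A 32-bit seconds counter like this is trivial to handle in software. As a
minimal sketch (illustrative only, not QL driver code - the 1 Jan 1961
epoch is the commonly cited QDOS convention and is treated as an assumption
here), converting a raw counter value to a calendar date looks like this:

```python
from datetime import datetime, timedelta

# Assumed epoch: QDOS is commonly described as counting seconds from
# 1 January 1961. Treat this as an assumption, not board documentation.
QDOS_EPOCH = datetime(1961, 1, 1)

def decode_rtc(raw: int) -> datetime:
    """Turn a raw 32-bit seconds counter value into a calendar date."""
    return QDOS_EPOCH + timedelta(seconds=raw & 0xFFFFFFFF)

print(decode_rtc(0))      # start of the assumed epoch
print(decode_rtc(86400))  # one day later
```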

Finally, on some QLs there was a variable capacitor (?) mid-board. What
did it do? How was it set? It disappeared on later revision boards.

It was used in the RTC oscillator (32768Hz crystal plus a few other
components). The purpose of the variable cap was to adjust the exact
frequency. However, the osc. circuit used in the ULA does not lend itself
to precision, and the select logic for the 8302 has a bug which more often
than not corrupts the RTC register when power is applied. With later board
revisions, any pretense at battery back-up for the RTC internal to the 8302
was abandoned, and with it the need for the RTC to be very precise - it was
enough that the drift was not too bad within a single session (between two
power-ups). Later, the GC and SGC boards introduced their own RTC which
operates by loading the contents of the local RTC into the 8302 RTC, where
it can then be compatibly read.

I am really very intrigued at what you propose to design - especially since
I am myself involved in an (ever ongoing!) process of designing a QL board
to replace the GC/SGC. Even so, it seems there is a real need for something
that would be a true replacement of a SGC, rather than its successor,
something like a SGC II. The demise of the SGC is connected to an Altera
EP1810 being used for all the glue logic, a chip which has long ago been
obsoleted. The FDC controller on the SGC is also not available any more,
but it has a fully compatible replacement (made by SMSC). Also, the current
state of affairs with memory would make a potential 68EC020 based board
for the QL very tiny - 16M of RAM could easily be squeezed into two SDRAM
chips - and that only because 16-bit wide ones are easier to get than the
32-bit wide ones. Something along these lines would make a great QL based
SBC. Not one to contend for the fastest QL out there, but it should be very
practical.

Nasta




Re: [ql-users] QL Schematic

2002-01-04 Thread ZN

On 1/4/02 at 9:55 PM Dexter wrote:

On Fri, 4 Jan 2002, ZN wrote:
 A quick question: what do you use for USB?

[Microchip]

...I thought so. I've looked at the USB97C100 (not directly in connection
with QL hardware, though) and like most implementations, getting one's
hands on a non-PC specific controller proved to be impossible once again.

 Altera EP1810... a chip which has long ago been obsoleted.

I'm assuming this is a ULA/FPGA. If so, is the logic it represented
available anywhere? The Q40/60 use FPGAs, so we definitely have the skills
to prepare a new part within the community...

Please make the appropriate distinction between a CPLD and a FPGA (although
either moniker hardly describes the devices commonly found in either
group... but that's a semantics discussion). The EP1810 is (by today's
notions) a relatively low density CPLD, however, one that has some rather
unusual characteristics. The logic in it is sort of available, if we
manage to ever decipher its original designer's files :-) but even so, it
would really need a redesign as most of today's CPLDs have a different
organization, for which using the logic hand-optimized for the EP1810 would
just be an impediment.

(Sidebar:- Why did you use four FPGAs on the Q40/60? Was it more cost
effective, and/or were the devices representing functional blocks that you
wanted to keep separate?)

I'm not Peter, but I think I can answer that: the best ratio between
function and price. I'm going to be using one large CPLD on the GoldFire
simply because of space constraints.

And to answer another of your questions: I have uploaded all the Aurora
files (schematics and sources to the three CPLDs) as well as a lot of other
stuff about Qubide, and the GoldFire, into the files area in the QLhardware
egroup. Some of the GoldFire stuff is obsolete - specs have changed - but
not from the standpoint of the functions available.
Anyone can read this, but only members can upload. I understand many people
have (justified) qualms about joining egroups. The best way I have found to
do this is to get one of the many free email accounts, and promptly forget
it. Use it only to sign onto egroups, then set your group to 'no email'.
You can check the messages on-line, it's not like the group produces many
messages, so no bandwidth problems there. The 'forgotten' email address
will serve as a spam catcher.
I could easily transfer these elsewhere, but at least egroups allow
relatively easy administration...


If I were to go out on a limb, I would specify:

Capacity:
68060 - at 50MHz or better
4MB flash ROM - for QDOS, SMSQ/E etc, plus room to grow.
32MB RAM - expandable by daughter card[1]

Interfaces:
Monitor - SVGA, XGA capable
IDE - ATA if possible
Ethernet - 10/100 if possible[2]
Serial - up to 115k
Parallel
Sound - line in, line out, mic
GPA - General Purpose Adaptor[3]

[1] Daughter cards, or using SIMMs/SDRAM and throwing half the capacity
away, may be cheaper...
[2] I think this is essential...
[3] This would be to this board what the expansion port was to QL classic.

OS:
This is much more vague, because it isn't my area, but I'd be looking
for modularisation of the OS.

Ok, not to demean any of your other comments, I'll get to them in a later
email (hopefully :-) ).
If you go for the above, you will be creating the third system along these
lines. The first one is obviously the Q60, the second - still largely
paperware except for the chips sitting in my drawer here - is the GoldFire.
The Q60 does not have integrated ethernet and IO, the Goldfire does not
have integrated graphics and IDE (and is intended to work as a 2 or 3 board
system using either old QL peripherals or new, yet to be designed
replacements). It also has only 10BaseT ethernet, but that could easily be
upgraded to 10/100 using the same SMSC chip as used on the 5307 Toast
board. BTW I wonder where they got the neat sockets with integrated
magnetics, I've been looking for those for ages!




Re: [ql-users] Interesting Site

2002-01-03 Thread ZN

On 1/3/02 at 6:25 PM Peter Graf wrote:

Hi Nasta,

 I'm sorry, but I must destroy any hopes that Coldfire CPU's
 (version  1) could execute our existing QL software.

I would very much appreciate if you could give us some more
details on this.

Two examples for CF V4e:

- 'mov.b xxx,-(sp)' adjusts the stack pointer by -1
- lsl and lsr set the overflow bit to the same as the carry bit

Neither can be forced to trap out. What a stupidity.
No way code on the CF core can solve this in general.
(Except emulating the whole 68k CPU, of course.)

What stupidity, indeed! I recall the first example being a bug in early
68000 silicon... looks like someone used the old source code! The second is
just utterly stupid; there is no reason to do anything with the overflow
bit during LOGICAL shifts (as opposed to arithmetic ones). This has to be a
bug. Now I'm glad I didn't go redesigning anything for the V4 ColdFire...

Even so, the fact remains that we have run out of 68k CPUs for the time
being. We can whine about not having 68070 etc, or we can change the
software that keeps being incompatible. It doesn't take much to figure out
which one is the more doable.

Nasta




Re: [ql-users] QDOS TCP/IP (was: Welcome to ql-users)

2002-01-03 Thread ZN

On 1/3/02 at 8:56 PM [EMAIL PROTECTED] wrote:

 actually it would be easy to get Ethernet and TCP/IP working in QDOS. 

Do you think of a port or writing a driver from scratch?

I think what is referred to is the fact that TCP/IP uses Ethernet framing
anyway, so getting ethernet hardware to work with it is relatively simple.
In fact, even using the QL net protocol over Ethernet would not be any big
problem. Of course, not as useful as TCP/IP but a start for sure.
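
To make the framing point concrete: carrying a QL-net-style payload in a
raw Ethernet frame is little more than prepending a 14-byte header. A
hedged sketch follows - the EtherType 0x88B5 is the IEEE value reserved for
local experimental use, chosen here purely as a placeholder; no such QL-net
encapsulation actually exists:

```python
import struct

# Placeholder EtherType for an imaginary QL-net-over-Ethernet scheme.
# 0x88B5 is reserved for local experimental use, so it cannot collide
# with a real protocol.
QLNET_ETHERTYPE = 0x88B5

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Pack an Ethernet II frame: 6-byte dst MAC, 6-byte src MAC,
    2-byte EtherType, then the payload padded to the 46-byte minimum."""
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    payload = payload.ljust(46, b"\x00")  # minimum Ethernet payload size
    return dst + src + struct.pack("!H", QLNET_ETHERTYPE) + payload

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello QL")
print(len(frame))  # 60: 14-byte header + 46-byte padded payload
```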

Nasta




Re: [ql-users] US QL differences...

2002-01-03 Thread ZN

*** REPLY SEPARATOR  ***

On 1/3/02 at 9:00 PM Dexter wrote:

Hi all,

I've been sat like a hawk on Ebay waiting for a QL to come up for auction.
I recall the US QLs had a different ROM version and PSU, but are there any
other differences?
Is there a PSU and/or power supply section schematic anywhere, so if I got
one shipped over, I could convert it for US 110v 60Hz operation? I'm
competent to do it, *with the right information*...

Dave

You need an 8V DC minimum at 1.8A (1A is more than enough if you don't use
any peripherals, but then you'll hardly be able to do anything much) and a
12-15V AC supply. The 8V DC goes into a 7805 regulator; the 15V AC is used
for a +-12V supply (there is a rectifier and a 78L12/79L12 regulator pair
in the QL). If you are willing to modify the QL a bit you can use a
+5/+-12V power supply; these are commonly available at various surplus
dealers.
Also, if you can get a RGB monitor capable of operating at the right sync
frequency, don't bother with a US QL; you will likely just give yourself
more headaches with incompatibilities. An old Sony 1301/2 or 1401/2, NEC
Multisync 3D and similar are perfect for the QL, and they also go up to
SVGA. Again, you should be able to get one for $50 or so at surplus
dealers. You might need a resistive divider on the RGB lines to get the
TTL output voltage down to the 1Vpp standard RGB analog input level.
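
The divider arithmetic is straightforward: the monitor's 75-ohm input
termination forms the bottom leg, so a single series resistor per line is
enough. A rough sketch - the ~4V TTL high level and the 0.7V peak video
level are assumptions, so check the actual signal levels before building:

```python
# Pick a series resistor so that the monitor's own 75-ohm termination
# divides a TTL-level RGB line down to analog video level.
def series_resistor(v_in: float, v_out: float, r_load: float = 75.0) -> float:
    """Return R such that v_out = v_in * r_load / (R + r_load)."""
    return r_load * (v_in - v_out) / v_out

r = series_resistor(4.0, 0.7)  # assumed 4V TTL high, 0.7V peak video
print(round(r))  # 354 ohms; nearest standard value would be 360R
```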

Nasta 




Re: [ql-users] pointer env ..?

2001-11-14 Thread ZN

On 11/14/01 at 9:37 PM Phoebus R. Dokos wrote:

 I did use a PS/2 mouse when I had my Aurora/sH :-) It's a standard
 serial port apart from the difference in the packaging (mini din instead
 of Sub-D9).

NO!!! This is NOT correct!!!
There are mice that will work as either PS/2 or serial using a mini-din
to D9 adapter, but this is NOT the general case. Some mice ONLY work as
PS/2 and some only as serial. Even universal mice (PS/2 and serial) usually
use different pins on the mini-din for serial connections.
PS/2 is NOT a serial port (and if you insist I will post the relevant
specs). It is a synchronous port that requires a clock and a bidirectional
data line, which are all 5V only. The signal spec is the same as the PC
keyboard's, NOT the serial mouse's. Connecting a PS/2-only mouse to a
serial port can damage the mouse.
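
The difference is easy to see at the frame level: a PS/2 device clocks out
an 11-bit frame (start bit 0, eight data bits LSB first, odd parity, stop
bit 1) on a separate clock line, unlike the self-timed RS-232 format. A
minimal decoding sketch, illustrative rather than driver code:

```python
# Decode the data-line samples of one PS/2 frame, taken on each falling
# clock edge: start(0), 8 data bits LSB first, odd parity, stop(1).
def decode_ps2_frame(bits: list) -> int:
    """Return the data byte encoded in the 11 sampled bits."""
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("bad framing")
    data = 0
    for i, b in enumerate(bits[1:9]):  # data bits, LSB first
        data |= b << i
    # Odd parity: the number of ones in data + parity bit must be odd.
    if (sum(bits[1:9]) + bits[9]) % 2 != 1:
        raise ValueError("parity error")
    return data

# Sample byte 0xF4 = 0b11110100 -> LSB first: 0,0,1,0,1,1,1,1
# (five ones in the data, so the odd-parity bit is 0)
print(hex(decode_ps2_frame([0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1])))  # 0xf4
```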

 I was thinking it wouldn't work at first but I was influenced by
 the PC, which uses the PS/2 port as a different kind of serial
 port with its own IRQ and only one baud setting.

No it's not. It's actually another keyboard port, and is handled by the
same microcontroller (8042), just using a different interrupt.

 However from an electrical standpoint its 100% the same :-)

Not even nearly.

Nasta




Re: [ql-users] IR communication (Was: Psion Chess)

2001-10-31 Thread ZN

On 10/31/01 at 11:14 PM Dilwyn Jones wrote:

 He he, next step is to get one of those remote controller watches to
 remote control the QL, or to get software on the QL to drive an
 infrared LED or something to work with such a learning or programmable
 remote control. Now if Simon Goodwin and Al Boehm can get a MIDI synth
 to work via a QL network port it shouldn't be too hard to interface an
 infrared LED to a network port either, so we can multitask a little
 button program on the QL to change channels, mute the volume when the
 phone rings, videoplus the TV and video, etc etc. Any offers anyone?

Ah, what an interesting idea! It should actually be quite trivial to add an
IR LED to the network port.
Here is another idea, and I'll just put it in the public domain here since
I don't have the time to develop it: With some rather simple mods to the
internal network circuitry in a QL it would be possible to make the network
completely IR based. I wonder why it wasn't originally done that way...
Basically, the 8302 ULA has two pins, net in and net out. They only get
combined into one single shared wire externally, using a couple of
resistors and a transistor (there is also an overvoltage protector on the
Aurora). The software driving the port is responsible for the input being
ignored while the output is transmitting. All it would take would be to
have a buffer driving several IR LEDs (pointing in different directions so
you don't need to aim the communicating QLs accurately) from the net out
pin, and an IR detector driving the net in pin. A bit more might be needed
if one would want to avoid upsetting any other IR devices around, but not
much... All of these components are easy to find and cheap.

Nasta 




Re: [ql-users] Clive Sinclair

2001-10-19 Thread ZN

 Even with current low cost silicon, there is still a high rejection 
 rate. That, as was said , cannot be afforded with a bigger chip.

Actually, bigger chips increase the number of rejects incredibly. The
reason is very simple: the basic idea behind chips in most cases relies on
all chip components being operational, i.e. there is no fault tolerance.
There are some exceptions, though, see below.

Take a silicon wafer. Now imagine there are specks of unusable silicon,
say a couple of um across, randomly distributed with a density of one in
every 2 square cm. There are two factors that govern the yield:
1) size of geometry - if this is larger than the size of the anomaly, there
is a good chance an anomaly will only produce a degraded component, not a
completely faulty one. However, today, almost all geometry used is far
smaller than the sizes of the anomalies, so we run into problem:
2) size of the chip. Obviously, if the chip size is 2cm square,
statistically ALL of them will have a fault, i.e. yield will be 0. As the
chip gets smaller, somewhere around half of the 2cm square area, the yield
suddenly goes up quickly. For very small chips, the number of failures
approaches area_of_anomaly/area_of_wafer * 100 in %, i.e. the yield becomes
virtually 100%. This is why small signal transistors, having a very small
die, cost pennies, but a CPU that has a die 100 times the size does not
cost 100 pennies - all sorts of additional processing are necessary to even
get a non-zero yield on these, and it has to be paid for.
In reality, anomalies on silicon are not the only problem; there is a vast
number of different pollutants that can affect the process of making a
chip, but the basic behaviour is the same. This is why chip prices are
extremely dependent on chip size, and why all the manufacturers try to
squeeze the size of the die down as much as possible.
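
The scaling argument above can be put into numbers with the standard
back-of-envelope Poisson yield model: with random defects at density D per
square cm, the probability that a die of area A escapes all defects is
exp(-D*A). The simple model is gentler than the "statistically ALL faulty"
worst case described above, but it shows the same trend; the numbers below
are illustrative only.

```python
import math

def yield_fraction(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson model: probability a die of the given area has no defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.5  # one defect per 2 square cm, as in the example above
for area in (0.01, 0.25, 1.0, 4.0):  # tiny transistor die ... 2cm x 2cm CPU
    print(f"{area:5.2f} cm^2 -> {yield_fraction(area, D):.1%} good dice")
```

Even at this modest defect density, the tiny die yields essentially 100%
while the 4 cm^2 die loses the vast majority of its dice, which is exactly
why die size dominates chip price.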

This problem first became evident with memory, as these were traditionally
the largest chips. For a long time it held back the jump from 64k bits to
256k bits for dynamic RAM. Finally, someone figured out that providing
extra programmable RAM 'rows' would include a dose of fault tolerance.
However, this came at a price - traditionally, with a shrinkage of
geometry comes a corresponding shrinkage of delays, i.e. speed increases.
But since the 256kb DRAM needed 'programmable' rather than fixed row
decoders, some of the speed benefit was lost, so the 256kb DRAM chips
were of the same speed grade as the 64kb chips available earlier, and one
speed grade jump was 'missed'. Today all memory produced has some fault
tolerance; it is tested and then appropriately programmed at the factory.
Furthermore, even CPUs have a dose of fault tolerance. It is mostly evident
with chips that come in different flavours, such as different speed grades,
or cache sizes (an aside: being memory, most caches in today's big CPUs
also have 'extra' memory cells to provide fault tolerance, or, in some
cases, error detection and correction schemes). This same problem reared
its head in the manufacture of active matrix LCDs, which are the most
extreme form of a chip - the size of the whole screen. Cost effective sizes
finally jumped from about 9" diagonal to more once they figured out how to
make the displays line by line - the lines are produced on a drum and
'stuck onto' the glass, one by one. They are tested as they get stuck, and
if found defective, the whole line is scrubbed off, then replaced by a new
one from the drum - instead of throwing the whole screen away. In parallel
with this technology, material technology advances also, so as time goes
by, and products mature, they actually move downwards in the technology
chain. For instance, to get the first 15" LCDs, the drum technology was
required. Advances in materials made it possible to produce 15" screens
using traditional technology today, but at the same time, combined with the
drum thing, now you can make 19" screens.

Nasta




Re: [ql-users] Clive Sinclair

2001-10-19 Thread ZN

On 10/19/01 at 8:39 PM Malcolm Cadman wrote:

The wafer technology was of course based on chip technology and design and
testing procedures of the time. Today it would actually be easier to
produce them...

Very interesting, Nasta.  As you say the technology is always moving on,
and what was 'difficult' to achieve 10 to 15 years ago, is now everyday
technology.
So, the patents for Sinclair's version of 'wafers' have long since gone
to other companies ?

Probably. I learned from Catt's page that they went back to the bank, who
no doubt sold them somewhere to make money off of them to pay the debts.
Speaking of various interesting technologies, I wonder what happened to
Sinclair's asynchronous microprocessor. That would have been a great
project too, especially since some people at the Fraunhofer Institute
(IIRC) developed it quite far, using self-handshake logic instead of
clocked logic.

Nasta




Re: [ql-users] Hyper Browse Development

2001-10-04 Thread ZN

On 10/3/01 at 4:53 PM Phoebus Dokos wrote:

As for Xitami, I have been using it for over a year now and although it is
simple and relatively powerful, it also has, at least on the PC, some odd
problems...

Actually it's not Xitami's fault. I suspect you are using the NT version 
which tends to have such behaviour under Win9x. There is a SPECIFIC
version for Win 9x...

As far as I can see, there is a W9x/NT and a Win 9x 'vanilla' version
(whatever that means). I'll try it when I next get an hour or two of
time...

Nasta




Re: [ql-users] Hyper Browse Development

2001-10-03 Thread ZN

On 10/3/01 at 12:36 PM Phoebus Dokos wrote:

 SB being a Server Side Script language is feasible now with UN*X
 systems running uQLx And a QL specific web server is relatively
 easy to implement once the TCP/IP stack is stable and complete.
 Xitami for example (www.imatix.com) is an ideal candidate for that
 reason...

There is absolutely no reason why SBasic couldn't be used as a SSSL. All it
would need is some sort of CGI (or whatever) interface implemented in the
web server.
As for Xitami, I have been using it for over a year now and although it is
simple and relatively powerful, it also has, at least on the PC, some odd
problems. For instance, sending the wrong files (it's supposed to give you
file xxx.gif, it sends yyy.gif - every time, consistently, but even the
log says it's sending xxx.gif), acting as a memory leak, suddenly crashing
when attempts to resume transfer are detected, having problems with ftp
transfers, etc. It would need a lot of work to get it right. As a matter
of fact, I would greatly appreciate it if someone could tell me about a
small, simple and _reliable_ web server to work under W98... (I know, I
know, off topic...)

Nasta




Re: [ql-users] Q60

2001-09-22 Thread ZN

On 9/22/01 at 2:44 PM Peter Graf wrote:

 You are right, I should have been more precise,
 the QXL WAS the closest hardware

 And NOW it is the Q40/Q60, isn't it? ;-)

Oh, yes, definitely. The Q40/60 is definitely the closest type of hardware,
for several reasons. For instance, the GF IO chips are all designed for PCs
originally, and the Q40/60 uses an ISA PC IO board - the similarity is
obvious.
In some ways, there still are similarities between the QXL and the GF, in
particular the way IO is relocated and the way the original QL screen is
emulated.

When it finally materialized (somewhere around the time the Aurora
became available) I already had plans to do a SGC successor because
it was clear Miracle was pulling out of the QL market.

You must have had a lot of insider knowledge about Miracle's policies.
After I announced the Q40, Miracle came up with a new competitive
announcement in QL Today. Back then, I took the announcement seriously,
but from what you say, Miracle had already pulled out.

Frankly, I don't think I knew much more than anyone else; getting
information about plans from Stuart is worse than pulling teeth :-) Even
so, Stuart is a great guy and I am really sorry he isn't involved with the
QL any more, or at least his involvement isn't public any more.
In any case, we talked at one of the meetings (I think in Italy) and Stuart
sent me the 5102 User's Manual. We later had a series of phone calls and
faxes exchanging ideas on what could be done with it. The idea of a QXL II
on the PCI bus was also mentioned, but it soon became obvious that a PCI
bridge chip would be more expensive than all the rest of the board.
You have to remember that I haven't been to a lot of meetings so I don't
know what was being said or discussed on them, and what other proposals
were mentioned. In fact, I honestly don't remember Miracle's
counter-proposal to the Q40.

 The 68040 doesn't just compete, it clearly outperforms the 5102.
Yes, though the difference would not be that spectacular.
Agreed, except for FPU stuff like Povray or other C programs.

Of course, since the 5102 doesn't have a FPU. But for QL programs it would
have acceptable performance.

 Using a 68EC060, as I said in the original mail, presents a few
 challenges.

Which is, in other words, a new concept and new design work.

Well, yes and no - if you only knew how many iterations I went through with
the GF... the history from the last mail was VERY abbreviated. It is not
a new concept in the sense that I have considered it before, quite a long
time ago. It is a new concept because what was a consideration before now
needs to be completely developed, so unlike before, where I only saw
problems with that idea, now I have to develop solutions to them :-)

Are you sure that the users who waited for the announced GoldFire wouldn't
prefer a *finished* Coldfire 5102 design to the new challenges of a
68EC060 design?
(After all your price for the Coldfire 5102 chip was still cheap.)

At this state of completeness, there is absolutely no difference at all.

[details about DRAM interface and multiprocessing snipped]

If I was in your shoes, I would think twice before I spend my time with
multiprocessing and the best DRAM interface for it. If the design and the
operating software development is finished someday, there will be other
and faster semiconductors anyway.

Ordinarily, I would agree, but as I said, since I cannot at the moment
start actually implementing the hardware, such an approach is beside the
point. I still have the option to think, though, so I do - as for the best
DRAM interface, actually most of the multiprocessing interface would have
been implemented on the second CPU board; only the very minimum was
included on the GF. It doesn't seem so from the block diagram that I have
on the web, but keep in mind that was made to make the idea clear, not
necessarily to show the actual implementation. Of course, now this will
have to be updated :-)
The idea of a more efficient interface was born when I decided to use
SDRAM. This decision was really mostly made for me by the price of the
components, and the fact that it opened so much space on the PCB.
It is actually very simple to connect two CPUs to a shared bus, but there
are things one has to figure out such as read-modify-write cycles and
caching - but there is always the fact that two CPUs only get half of the
bandwidth of the single CPU each, at best. It turned out that I could get
tthe CPUs to overlap quite efficiently, which also solved the problem of
read-modify-write. But as I said, the 68EC060 is a step back in this area,
fortunately to a concept that has already been designed, and has its own
set of reasonably balanced advantages and disadvantages.

The PCB was designed that way, it has distinct areas that can be
re-designed as needed.

Doesn't that cost significant board space and routing flexibility?

Not really, as the necessity for trace reduction always exists; the basic
layout of the GF is such 

Re: [ql-users] Q60

2001-09-21 Thread ZN

On 9/20/01 at 10:27 PM Peter Graf wrote:

 Yes, the original idea behind the GoldFire was to use the
 QXL version of SMSQ as the basis for it, as that is the
 closest related hardware.

 I don't think so. The Q40 is *much* nearer to your earlier announcement
 than the QXL. Think about memory map, interrupt structure, screen memory,
 Qubide, IO chips and so on.

You are right, I should have been more precise: the QXL WAS the closest
hardware - the idea for a GoldFire was conceived long before it was
announced. It stemmed from a discussion with Stuart Honeyball about a
PCMCIA version of the QXL, and the possibility for me to do some design
work on it. However, it was soon scrapped as the announced MCF5102
continually failed to become available on the market (where have we heard
that before?). When it finally materialized (somewhere around the time the
Aurora became available) I already had plans to do an SGC successor because
it was clear Miracle was pulling out of the QL market.

 The 040 would have to run at its highest clock available (or close
 enough) to compete with a (at the time) cheaper 5102

The 68040 doesn't just compete, it clearly outperforms the 5102. It's a
pity that you have cancelled the GoldFire. I would have enjoyed seeing the
Q40 win the benchmarks ;-)

Yes, though the difference would not be that spectacular. I usually
consider a 6dB :-) (2x) difference as significant...
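(An illustrative aside, not from the original mail: the "6 dB = 2x" rule of thumb uses the amplitude convention, 20·log10 of the ratio; under the power convention, 10·log10, a factor of two is only about 3 dB. A minimal sketch:)

```python
import math

# "6 dB" as a factor-of-two difference uses the amplitude convention
# (20 * log10); for power ratios, a factor of two is only ~3 dB.
def ratio_to_db_amplitude(ratio):
    return 20 * math.log10(ratio)

def ratio_to_db_power(ratio):
    return 10 * math.log10(ratio)

print(round(ratio_to_db_amplitude(2), 2))  # ~6.02 dB
print(round(ratio_to_db_power(2), 2))      # ~3.01 dB
```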

The idea is really to cover the costs of manufacture, and of course,
the necessary firmware, that's all.

If you cover the costs, you'll earn a lot more than I did.

Currently I am WAY below due to the Auroras that were never sold...

 Also, the GF has not been canceled

 Point of view. I remember well that you described the Coldfire
 5102 CPU as the very heart of the GoldFire and its multiplexed
 bus as the most essential feature that turned the project from
 fantasy into must be done. You also emphasized the importance
 of the smaller CPU size compared to PGA chips.

Actually, the multiplexed bus gave me the idea behind the implementation of
a 32-bit bus protocol on the QL's expansion bus. Later on the multiplexed
bus reduced the number of lines needed to communicate with the logic chip,
and reduced the number of traces on the board considerably. Yes, the
ColdFire's small footprint was very important, and in fact made the
GoldFire physically possible when it was originally specced out using
72-pin SIMM RAM.
This has since been changed to SODIMM SDRAM, freeing a LOT of space. At the
time it sounded like a good idea, right now, doing anything else would be
foolish as the price of SDRAM is very low - it is almost a given that the
GF (or whatever its name is going to be) will come with 128M of SDRAM as
standard, it's cheap, and if I only have to design for one configuration,
it simplifies the logic some.
Once SDRAM was in place, it even became possible to physically fit BOTH the
PGA and the PQFP packages, the first for a 68040V and the second for the
5102. That possibility was soon discarded when the 68040V turned out to
cost about $200 apiece even in huge quantities, plus it turned out that it
would be extremely difficult to do a dual footprint PCB on 4 layers, and
retain signal integrity. That's how I came up with the SIMBUS concept, a
rehash of a very old idea I had. The extra space previously used by the
planned PGA package was used up by two buffer chips that convert the 5102
bus into the very similar SIMBUS.
Using a 68EC060, as I said in the original mail, presents a few challenges.
Space is again one of them which is why the SIMBUS concept had to be
abandoned in favor of a direct CPU to SDRAM connection. Address lines still
have to be multiplexed externally by a LVCMOS chip, and it is going to be a
challenge fitting one onto the board, as space is again at a premium. A
little help is coming in the form of abandoning a dual footprint for the
sound chip (used to be AD1815 or 1816, now it's only 1816).
The deletion of the SIMBUS means that a lot of the potential bandwidth of
the SDRAM is wasted - the process of setting up an SDRAM access takes about
as long as the access itself. The construction of the SDRAM allows one
access to proceed while the next is being set up. However, since the CPU
only does one access at a time, effectively doing setup-access-repeat, the
ability to use this overlap to one's advantage is lost. The SIMBUS idea
allowed two CPUs to overlap their accesses and thus use the SDRAM to full
advantage. SIMBUS would add a 10% memory access speed penalty for one 5102,
but two would still have only a 10% penalty each and be able to access the
same SDRAM - even if they wanted to do it at the same time.
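To make the trade-off above concrete, here is a toy throughput model. All numbers are illustrative assumptions drawn from the text (setup time roughly equal to access time, and the ~10% SIMBUS penalty), not measured data:

```python
# Toy throughput model of the two bus schemes described above. Assumed
# (not measured) timings: an SDRAM access takes 1 time unit, setting up
# the next access takes about as long again, and SIMBUS adds a ~10%
# per-access penalty.
SETUP = 1.0
ACCESS = 1.0
SIMBUS_PENALTY = 1.10

def shared_bus_rate_per_cpu(n_cpus):
    # No overlap: setup and access are strictly serialized on the
    # shared bus, and n CPUs split the resulting bandwidth evenly.
    return 1.0 / (SETUP + ACCESS) / n_cpus

def simbus_rate_per_cpu(n_cpus):
    # With overlap, one CPU's setup hides behind the other's data
    # phase, so each of (up to two) CPUs pays only the ~10% penalty.
    assert n_cpus in (1, 2)
    return 1.0 / (ACCESS * SIMBUS_PENALTY)

print(shared_bus_rate_per_cpu(1))        # 0.5 accesses per time unit
print(shared_bus_rate_per_cpu(2))        # 0.25 each: half of the above
print(round(simbus_rate_per_cpu(2), 3))  # ~0.909 each, even with 2 CPUs
```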
Now I am back to the shared bus system using arbitration, which means that
one CPU gets 100% of all the bandwidth it can use (no overlapping), two
CPUs then share that 100% getting 50% each. On the positive side, the logic
is simpler, amongst other reasons, because the 68060 arbitration is better
than either 

Re: [ql-users] Q60

2001-09-19 Thread ZN
 not really promising anything - because,
frankly, I cannot promise anything (I am not going to bore you with my
personal situation here to explain why in detail). I am just trying my best
to design viable hardware, because that's what I like doing - in fact,
these days it helps a lot to keep my head from exploding :-)
To tell the truth, there is nothing I'd like more than cooperating in a
design of the 'ultimate QL'...

But I am digressing. The fact is, the Q40 and Q60 are here, now. They are
both a great product and quite possibly the key to the QL's continued
survival. Is there going to be a GF? Yes, if I have anything to say about
it - perhaps with a different name. But right now, I think I couldn't tell
you when, even using the world's most accurate crystal ball. What I can, and
will do, is tell everyone what it is I am doing or at least trying to do. I
consider this to be fair, I am not hiding anything - and people have a
choice: to wait for an indeterminate amount of time, or buy something that
is proven right now.

Good luck with the design!

Thanks!

ZN




Re: [ql-users] Q60

2001-09-18 Thread ZN

Not entirely on-topic... and probably a question for Peter Graf:
The Q60, AFAIK, uses either a 66MHz 68060 or a 75MHz (clocked at 80MHz)
68LC060. My question is: were any tests conducted with clocking the 66MHz
regular 060 at a higher clock, and if so, what were the findings? 68k CPUs
are known to be very conservatively spec'd.
I've already asked this on my QLhardware e-group, but got no reply.

Regards,

Nasta





Re: [ql-users] Q60

2001-09-18 Thread ZN

On 9/18/01 at 10:20 PM Peter Graf wrote:

I'm not sure my first reply got through; this new NT virus is bombarding my
web server, which also runs the proxy program, until it eventually crashes.

[Overclocking a 68060]

The Q60, AFAIK, uses either a 66MHz 68060 or a 75MHz (clocked at 80MHz)
68LC060.

 Almost. The 66 MHz version is a 68060RC60A (60 MHz) chip clocked at 66
MHz
 ...a 60 MHz chip, overclocked by 10 %. The heatsink is largely oversized,
 so the die is actually a lot cooler than with normal operation at 60 MHz.
 I even ran the 68060RC60A at 70 MHz and more without noticeable problems,
 but I wouldn't use that for production.

Thanks for that info. The 68060 cleverly provides a thermal sensing
resistor on chip, so at least temperature can be conclusively measured.

68k CPUs are known to be very conservatively spec'd.
Confirmed.

Well, at least there is someone with experience to ask :-) I am very
grateful for this data.
But as they say, give them a finger and they want the whole hand :-) - I do
need a bit more data.
First, there have been some impressive figures posted on the Q60 web site
about the power consumption. From what data I could find, it seems that the
060 has significantly lower power consumption than the 68040 - no doubt
due to the lower supply voltage. What would be your assessment of this?
Second, you had mentioned on the list that the Q60 cannot use a 68EC060.
Would you care to explain in a bit more detail why not? I am guessing that
it has to do with the interaction of the compatibility requirements in
SMSQ/QDOS and the changes in the memory map on the Q40/60 that were
necessary to add new capability.

 I've already asked this on my QLhardware e-group, but got no reply.

 I had replied on the ql-developers list. The list owner has kindly
 allowed hardware development issues there, and I very much prefer
 open mailing lists to Yahoo-groups.
 I thought you were subscribed to ql-developers. I apologize for not
 sending a copy of my reply to your personal address.

No apology is necessary, I thought I was subscribed to QL developers too,
but it seems that my subscription had somehow lapsed. This would not be
surprising as there were several problems with my email due to the various
DoS attacks on Croatian sites - my main mailbox is still on servers in
Croatia! I will look into this shortly.
I can't blame you for not wanting to use an Egroups forum. The prerequisite
for using them and not drowning in spam is to have a 'sacrificial' email
account for the spam, and use only the web access for the egroups.
Unfortunately, if that isn't done from the start, there could be spam.

What is the background of your questions? Do you plan to add a 68060
upgrade socket to the GoldFire specs?

Ah, well, I guess the cat is going to be out of the bag anyway, so I might
just tell everyone.

Currently, the GoldFire (which might actually need a name change, see
below), is a good 3 years late. I know that I keep promising it, and now,
amongst other things, it's a question of honor to produce it. However,
since it's so late, and I cannot for many reasons invest as much work and
money in it as I would like to, I try to upgrade the spec where I can,
without incurring extra cost in development time or the final cost to the
user. It would make no sense to eventually produce a 3 year old design.

Recently my 'GF fund' got a little boost and I decided to start looking for
a supplier for the ColdFire 5102 CPU that would have it at a reasonable
price and quantity. I finally found a small supplier that had a number of
batches of Motorola chips. I was shocked to find this in the price list:

MCF5102 @ 40MHz, $19 a piece, minimum order 50 pieces.
68EC060 @ 66MHz, $10 a piece, minimum order 50 pieces.

What would you do???

In short, I now have a batch of 68EC060. Using it on the GF presents a
couple of challenges, but they are well worth the increase in performance.
As a result, the dual CPU feature has been simplified, in favour of allowing
a single CPU implementation to work as efficiently as possible. This
actually makes the logic simpler! As far as the 'EC' vs 'real' 68060
matters, there is no difference in the design since the 5102 is essentially
a 68EC040 with a smaller cache. The name 'GoldFire' may be changed since
there is no more ColdFire CPU to justify the original moniker.

The reason I asked the question is that for hardware reasons, it would be
simpler for me to clock the CPU at 70 or 72MHz, which is less than 10%
overclock. I was fairly certain that it could do this with ease.
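The arithmetic behind "less than 10% overclock" is simple, assuming (as the price list above suggests) the 66 MHz rating of the 68EC060 batch as the baseline; a quick illustrative sketch:

```python
# Overclock margin relative to a chip's rated clock. The 68EC060 batch
# mentioned above is rated at 66 MHz, so 70 or 72 MHz stays under a
# 10% bump. (Baseline choice is an assumption for illustration.)
def overclock_percent(target_mhz, rated_mhz):
    return (target_mhz / rated_mhz - 1.0) * 100.0

for target in (70, 72):
    print(target, round(overclock_percent(target, 66), 1))  # 6.1, 9.1
```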

Let me also take the opportunity to answer a question before it is asked:
I do not consider the Q40 or Q60 in any danger of competition at this time
- the GF is still largely paperware, and it will, unfortunately, stay so
for a while longer, though things ARE moving. Plus, it will be 68EC060
based, though upgradeable (no stopping that, the 'real' 060 or the LC060
are all pin-compatible). It will also NOT be possible to add a different
CPU 

Re: [ql-users] qxltool and multipartitions

2001-07-12 Thread ZN

On 7/11/01 at 11:06 PM Timothy Swenson wrote:

 Is it OK to run with just a Slave IDE device and no master?

NO! The best-case scenario is that the drive will simply not be
recognized as being there. The worst-case scenario... well, I guess you
have learned about that one!

Nasta




Re: [ql-users] Compact Flash (was: Syquest and Q40 question)

2001-07-09 Thread ZN

BTW I am successfully using CompactFlash as removable media for Q40/Q60...
I use a special PCMCIA/CompactFlash-IDE adaptor. But attention, not
all CompactFlash-IDE adaptors work. There are also differences
between CompactFlash cards. This needs further investigation...
CompactFlash has the disadvantage of not being well-suited to hot-plugging.

Peter

Peter, if it's not too much trouble, which adaptor / CF card(s) do you use?
I've been keeping two Lexar Media cards (8M and 32M) around for a while
now, with the intention of trying them out with Qubide. I would appreciate
some details regarding CF problems.

Thanks,

Nasta




Re: [ql-users] Syquest and Q40 question

2001-07-09 Thread ZN

BTW, I use CompactFlash (aka digital film) on the PC. The driver on the PC
does not require a reboot when changing CompactFlash cards. It senses
when a new one is put in or one is taken out. I'm using a little USB-based

adapter called Jump Shot made by Lexar (same folks that make the 
CompactFlash cards). The CompactFlash card is viewable from inside My 
Computer but it does not have a drive letter assigned to it.

This is because the Jumpshot is essentially a USB adapter cable. ONLY
Lexar Media cards can use it, because these cards actually have USB built
in - and this technology is a Lexar patent. The Jumpshot is nothing more
than a piece of cable connecting the proper pins on the card to the USB
port. As USB is hot-pluggable, and the card has on-board USB, hot plugging
is not a problem - but, as I said, ONLY for Lexar cards. Other cards do NOT
have this feature.

CF cards can be hot-pluggable, the required hardware is practically
trivial. The real problem with CF hot-plugging is that most adaptors use
the CF as a standard IDE drive, so it is the software that ultimately
prevents hot-plugging: the existence and size of the hard drives are only
checked at system boot time. The software does not expect the whole drive
to be removed, and a new one, possibly of a different size, to be connected!
Note that removable media such as Syquest, ZIP, or LS120 are different in this
respect. When there is no media, these still look like a hard drive to the
computer, but return a 'no media' error. With a CF, when it is removed,
there is nothing to return errors, because there is nothing there at all.
In some cases there are other problems, such as removing a CF card that was
a master drive with a slave attached.
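The boot-time-probing problem described above can be sketched in a few lines of illustrative Python (the class names, sizes, and driver behaviour are hypothetical, not taken from any real driver):

```python
# Toy illustration of why IDE-style CF swapping breaks: the system
# probes drive size once at "boot" and never re-checks, so a swapped
# card of a different size leaves the cached size stale.
class CfCard:
    def __init__(self, blocks):
        self.blocks = blocks
    def read(self, lba):
        if lba >= self.blocks:
            raise IOError("access beyond end of media")
        return b"\x00" * 512  # dummy sector data

class BootTimeDriver:
    def __init__(self, card):
        self.card = card
        self.cached_blocks = card.blocks  # probed once, at "boot"
    def read(self, lba):
        # The driver trusts its boot-time size, not the current card.
        if lba >= self.cached_blocks:
            raise IOError("driver: out of range")
        return self.card.read(lba)

drv = BootTimeDriver(CfCard(32 * 2048))  # "boot" with a 32 MB card
drv.card = CfCard(8 * 2048)              # hot-swap in an 8 MB card
try:
    drv.read(20 * 2048)  # driver still believes 32 MB exist...
except IOError as e:
    print("failed as expected:", e)      # ...but the card errors out
```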

I'm guessing the Adapter that Peter is using makes the CompactFlash look
just like a HD or floppy.

Every CF card MUST have IDE compatibility mode to be considered fully CF
compliant. The card is made to act as a standard IDE drive by connecting
one single pin to ground (IIRC). All that is really needed from that point
on, is a way to wire the 50-pin CF connector to a standard IDE and power
supply cable.
This however does not mean that all CF cards will work in the IDE mode.
Some are not fully compliant, mostly older and smaller capacity cards, or
cards that use the same form factor but are proprietary and bear the same
name as the digital camera or PDA they belong to. General purpose CF cards
should be fine.
IBM makes tiny 1-inch hard drives that generally fit the CF card form
factor, but are slightly thicker. This is known as CF Type II. These drives
(called Microdrive - imagine that! :-) ) should also work with an
appropriate adaptor, and currently come in 4 sizes, from 160MB to 1GB.

Nasta




