[cctalk] Re: interlace [was: NTSC TV demodulator ]

2024-05-20 Thread Peter Corlett via cctalk
On Mon, May 20, 2024 at 12:06:13PM -0500, CAREY SCHUG via cctalk wrote:
[...]
> so, just curious. how do digital TVs (and monitors) work? I presume the
> dots are a rectangle, not sloping down to the right, no half a line at the
> top and bottom. Do they just assume the brain can't tell that (for the
> converted old analog tv signal) the image therefor slopes UP very slightly
> to the right from what it "should" be? and the top line is blank on the
> left side because that is the interlace frame?

The half-lines are not visible on an analogue CRT (unless it's faulty or
miscalibrated) because they're hidden behind the top and bottom of the
screen bezel, assuming that they're even sent to the electron gun at all.

A digital TV displaying an analogue signal will just crop the image to
simulate the bezel, since there's a lot of other cruft and noise in the
signal which is not actually picture data and would be quite distracting if
you could actually see it.

The slope in the scanlines is very gentle and pretty much not noticeable
unless you're looking for it, and maybe not even then. You may well look at
it and say "yeah, that's on a slope", but is that due to the scanning
process or because the deflection yoke is twisted slightly? There are so
many adjustments on a CRT that affect each other that getting a picture at
all is a minor miracle.

I don't miss CRTs.



[cctalk] Re: interlace [was: NTSC TV demodulator ]

2024-05-20 Thread Peter Corlett via cctalk
On Mon, May 20, 2024 at 11:13:38AM -0500, CAREY SCHUG via cctalk wrote:
[...]
> many games and entry pcs with old style tv analog format, don't interlace,
> and tube TVs nearly all (except maybe a few late model high end ones?) are
> fine with that, but I seem to recall that most or all digital/flat screen
> can't deal with non-interlace.

Flat panels sold as PC monitors tend to support a smaller range of video
timings than those sold as televisions. Any television which can't handle
non-interlaced 15kHz video should be returned to the shop as defective.

What you may however find is that while all TVs should support 15kHz video,
they sometimes artificially restrict the range of supported modes on a
per-input basis, purportedly for compatibility or ease-of-use or similar
marketing claptrap. Further, some models will offer a different feature set
based on the *name* you assigned to that input via the TV's menus.

So you may well find that your TV starts playing nicely with your 1980s
micros if you lie to it and claim that you've really connected a VHS
machine.



[cctalk] Re: BASIC

2024-05-03 Thread Peter Corlett via cctalk
On Fri, May 03, 2024 at 02:51:06AM +, Just Kant via cctalk wrote:
> BASICs available at bootup were nice, but really were only useful with 8
> bit micros. IBM ROM BASIC was hobbled until you ran BASICA from disk. And
> if you had a floppy it only made sense to buy a cheap compiler (Quick
> Basic, Turbo Basic, etc.). Whatever you were missing by not dropping
> 4-500$ for a full product probably wasn't worth the expense.

A bit of perspective: the equivalent of $400-500 (~£200-250) was a couple of
weeks salary in the UK at the time. Unless it could be written-off as a
business expense, the purchase of that "cheap" compiler just wasn't
happening.



[cctalk] Re: Z80 vs other microprocessors of the time.

2024-04-23 Thread Peter Corlett via cctalk
On Mon, Apr 22, 2024 at 01:06:42AM +0100, Peter Coghlan via cctalk wrote:
[...]
> This was implemented by a humble 6502 running at (mostly) 2MHz, with one 8
> bit arithmetic register, two 8 bit index registers, one 8 bit stack
> pointer, a 16 bit program counter and a few flag bits.

> I would have expected that a computers featuring a Z80 with its larger
> register set, 16 bit arithmetic operations, richer instruction set and
> general bells and whistles would have been able to produce a much superior
> implementation in terms of speed or features or both but I never came
> across one.

> Why is that? Did the Z80 take more cycles to implement it's more complex
> instructions? Is this an early example of RISC vs CISC?

Technically yes, but the implicit assumption in the question is wrong.

The Z80 takes three or four clock cycles to perform a memory access versus
the 6502 accessing memory on every clock cycle, but Z80 systems tend to be
clocked
3-4 times faster so the memory bandwidth is pretty much the same. This
shouldn't be too surprising: they were designed to use the same RAM chips.

So the Z80 takes more cycles, but it was designed to use a faster clock and
do simpler operations per clock as that saved die space. Clock speeds have
*never* been a good way to compare CPUs.

In the hands of an expert in their respective instruction sets, both
architectures perform about as well as each other for a given memory
bandwidth (which was and still remains the limiting factor on CPUs without
caches). The 6502 could be said to "win" only in as much as the modern
drop-in replacement is the 14MHz 65C02S, whereas the Z80's is the Z84C00xx,
which tops out at 20MHz and so is only equivalent to a ~5MHz 6502.

For the same reason, a 14MHz 65C02S will leave a 68000 (maximum 16.67MHz) in
the dust, especially when working with byte-oriented data such as text where
the wider bus doesn't help. The 68000 takes four cycles to perform a memory
access, and inserts a couple of extra cycles of dead time for certain
addressing modes which require extra work in the address-generation
circuitry.
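The cycle-count argument above can be put into rough numbers. A back-of-envelope sketch, using only the nominal figures quoted in the text (Z80 ~4 clocks per memory access, 6502 one per clock, 68000 four per 16-bit access) and ignoring contention, wait states and instruction mix:

```python
# Effective memory bandwidth per CPU, idealised: accesses per second is
# just clock rate divided by clocks per access.
def mem_accesses_per_sec(clock_mhz, cycles_per_access):
    return clock_mhz * 1e6 / cycles_per_access

cpus = {
    # name: (clock MHz, clock cycles per memory access, bytes per access)
    "65C02S @ 14MHz":   (14.0,  1, 1),
    "Z84C00 @ 20MHz":   (20.0,  4, 1),
    "68000 @ 16.67MHz": (16.67, 4, 2),
}

for name, (mhz, cycles, width) in cpus.items():
    acc = mem_accesses_per_sec(mhz, cycles)
    print(f"{name}: {acc / 1e6:.2f}M accesses/s, "
          f"{acc * width / 1e6:.2f} MB/s peak")
```

The 20MHz Z80 comes out at 5M byte accesses per second, matching the "~5MHz 6502" equivalence, while the 68000 manages only ~4.2M accesses per second, which is why byte-oriented work gets no benefit from its wider bus.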

Even back in the day, it was noted that Sinclair's ZX Spectrum with its
3.5MHz Z80 could outperform their later QL with its 7.5MHz 68008.



[cctalk] Re: Last Buy notification for Z80 (Z84C00 Product line)

2024-04-20 Thread Peter Corlett via cctalk
On Fri, Apr 19, 2024 at 09:34:42PM -0700, Chuck Guzis via cctalk wrote:
> On 4/19/24 19:39, ben via cctalk wrote:
[...]
>> Now is a good time to stock up for any z80 projects or repair, while they
>> are $10 or less on epay.

Unless people start panic-buying them, Z80 chips are likely to languish in
Mouser etc's warehouses for years. After all, Zilog wouldn't stop production
of something in high demand.

> I seem to remember that in the mid 80s, OEM quantity price for a Z80A was
> less than a buck a chip.

There's this thing called "inflation", which does tend to become somewhat
significant after four decades.

In the mid-80s, a pint of beer cost about 70 pence. I've escaped that
benighted island, but according to friends who were not so lucky it is now
seven quid in London. That's enough to drive you to drink, except, well...



[cctalk] Re: For Fred, especially: "Everything I know about floppy disks"

2023-09-10 Thread Peter Corlett via cctalk
On Sun, Sep 10, 2023 at 05:44:59PM +0100, Liam Proven via cctalk wrote:
[...]
> No idea of the CPU performance. 4MHz Z80A but whether there was any
> contention or anything I have no idea. I believe one of the
> interesting bits of the design is that there's no ROM at all. They
> came with a dedicated printer (as well as a Centronics port) and
> masked into a corner of the printer controller chip was a tiny bit of
> bootstrap code.

One unusual and interesting thing about the Amstrad PCW is how it uses a
display list system, unlike basically any other home computer of the era
apart from the Atari 8-bits and the Amiga. This lets it have a nice
high-resolution bitmap display and do *partial* scrolling quickly without
having to get the CPU to copy the whole screen around. This is obviously
useful for a word processor, which mostly just inserts and deletes lines, and
makes the thing so much smoother to use.

Several of its contemporaries, such as the Beeb, CPC and PC video cards, used
or were inspired by the MC6845 CRTC, which can only scroll the entire screen.
This is fine for a dumb terminal experience such as the MS-DOS shell, but
has to fall back to CPU rendering for partial scrolls.

You can see the difference on a Beeb or emulator. Go into MODE 1 and type in
any old junk such as "*HELP" to print a wodge of text. Towards the bottom,
do "COLOUR 129" to set the text background to red (not strictly necessary,
but it makes the changed area bigger and more obvious) and do that "*HELP"
or whatever to make it scroll. Watch how the screen scrolls quickly with no
tearing. That's using 6845 hardware scrolling. Now repeat this but do VDU
28,1,30,38,1 to set up a text viewport which is one character in from each
edge and do the same. Look how it scrolls slowly and tears. That's software
rendering making a pig's ear of it because there's just not enough memory
bandwidth to copy a screen's worth of memory within a 20ms frame.

This hardware scrolling support is worth more to the PCW than a faster CPU
would be. If it had a 68000 in it, but no display list, it'd be much slower
at editing text. Just look at word processors on the original 128k Mac.



[cctalk] Re: Friden (was Silly question about S-100 and video monitors)

2023-09-02 Thread Peter Corlett via cctalk
On Fri, Sep 01, 2023 at 04:32:57PM -0600, ben via cctalk wrote:
[...]
> I think that way has been for a while. Having a hard time finding a 68B50
> on ebay. All the modern serial devices (I can buy) seem to be serial
> interfaced. Sigh.

I see the 68B50 on AliExpress, and they're probably even genuine. The vendor
I'm tempted to order some other retro chips from offers them in five packs
for about a euro each.

For new parts available from a reputable supplier, there's the W65C51. The
bumph notes it is "compatible with 65xx and 68xx microprocessors". Available
in a variety of packages including DIP, and also in -S and -N variants
depending on whether you want CMOS or TTL levels, it runs at a nominal 5V
and has speed grades up to 14MHz. It's not a direct replacement for the 6850
but will look quite familiar and present no surprises. For new designs, it's
simpler to use as it doesn't need external baud rate generators.

A single W65C51N6TPG-14 (DIP, TTL, 14MHz) is €7.10 from my local Mouser.

If you can handle SMD, there's even the venerable 16550 and clones which
could be handy if you're trying to do high-speed serial, although that's got
a more 8080-style bus interface so you'll need a few extra gates to get that
going.

> PS: Is it me or was the 6850 ACIA the only simple and bug free uart around
> at the time with interupts.

The W65C51 datasheet notes it has a bug with the flag bit indicating that
the transmit buffer is empty, and the recommended workarounds are "don't do
that" or "keep using the old NMOS 6551". I suggest the former since the
latter is probably once again harder to find.



[cctalk] Re: Apple 1

2023-08-04 Thread Peter Corlett via cctalk
On Fri, Aug 04, 2023 at 08:51:31AM -0500, John Herron via cctalk wrote:
[...]
> That price is interesting. Does that imply the value has gone down after
> some skyrocketed close to 1 million? One still has to make the decision of
> a owning a house or an apple 1.

Well, both of them are treated as speculative investments, putting them out
of reach of people who just want the pleasure of using them rather than
looking for the next bagholder. The main difference is that I can just buy
the parts to build my own Apple 1 and nobody's going to stop me, whereas if
I try that with a house the local authority gets quite upset.



[cctalk] Re: Greaseweazle part 2

2023-06-12 Thread Peter Corlett via cctalk
On Sun, Jun 11, 2023 at 09:21:52AM -0600, ben via cctalk wrote:
[...]
> I would of thought the AMIGA would have a say here, as it reads a disk
> track as just a bunch of flux transitions.

The Amiga has a choice of two fixed clock rates, both of which happen to
correspond with common DD disk formats of the day. A (digital) PLL is used
to nudge the frequency to synchronise with the incoming flux transitions
when reading, but that's to handle wow and flutter, not formats using a
different clock.

This is sufficiently limiting that the Amiga cannot use standard HD drives,
but needs a special drive which drops to 150RPM when HD media is inserted so
that the clock speed remains the same.



[cctalk] Re: Getting floppy images to/from real floppy disks.

2023-05-27 Thread Peter Corlett via cctalk
On Thu, May 25, 2023 at 12:52:39PM +0100, Tony Duell via cctalk wrote:
[...]
> USB interfacing is hard, but SD cards are a lot simpler. So use a card
> reader thing to transfer the files to an SD card and design an
> interface for that to ISA bus.

No need even to design anything or faff around copying files between formats
as flash-to-IDE widgets are available from the likes of AliExpress for
peanuts. Your choice of flash: CF, SD, mSATA, NVMe, there are widgets for
them all, and probably some I've missed. On the IDE side you've got the choice
of male or female, 40- or 44-pin IDE. (Watch out for the "50-pin IDE" that
some of the less knowledgeable sellers list, unless you actually want SCSI.)

Given free choice, I'm more inclined to go SD-to-IDE. CF cards may well
emulate IDE devices and can use a passive adaptor and so feel more
appropriate to the task, but unless you've got a nice cache of old CF cards
which aren't knackered, the cost of new ones is prohibitive as they're
intended for professional applications such as high-end digital cameras. SD
cards are dirt cheap unless you're buying them by the terabyte (and not
unreasonably expensive even then).

For the specific case of a desktop PC, there are CF-to-IDE widgets mounted
on a card slot bracket so you can swap cards without opening the machine
(but powering down is probably wise). These are completely passive and don't
go into a slot, so can be installed in any machine they'll physically fit in:
ISA, PCIe, Zorro, whatever. Depending on clearances, one might even be able
to share a slot with a plug-in IDE controller.

SD extension cables are a thing for smaller machines without card slots. I
have one in my Amiga 1000 as the ribbon cable can be routed through quite
small gaps in the case.



[cctalk] Re: ST-251 Data Recovery for Glenside Color Computer Club (GCCC)

2023-05-17 Thread Peter Corlett via cctalk
On Wed, May 17, 2023 at 12:02:05PM -0500, Mike Katz via cctalk wrote:
> That is because Amiga uses GCR recording rather then FM or MFM.

Nope. You may have gotten confused with the Commodore 64 drives, which
were very Special, or perhaps early Apple gear.

The Amiga's disk controller supports both GCR and MFM, but MFM was used by
default because it is higher-density and the blitter can be used to perform
MFM decoding. It can read and write PC disks just fine using a third-party
block device driver (one was later bought-in and shipped with Workbench),
but the native format uses a different sector scheme which gets 880kiB on a
DD disk instead of the usual 720kiB of the PC.
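The 880kiB-versus-720kiB difference is purely down to sector count. A quick sketch of the arithmetic, with the standard DD geometry figures (80 cylinders, two sides, 512-byte sectors) filled in as assumptions since the post doesn't spell them out:

```python
# Same geometry as a PC DD disk, but the Amiga's trackwise MFM format
# dispenses with most inter-sector gaps and fits 11 sectors per track
# instead of 9.
def capacity_kib(cylinders, heads, sectors_per_track, sector_bytes=512):
    return cylinders * heads * sectors_per_track * sector_bytes // 1024

pc_dd    = capacity_kib(80, 2, 9)   # 720 KiB
amiga_dd = capacity_kib(80, 2, 11)  # 880 KiB
amiga_hd = capacity_kib(80, 2, 22)  # 1760 KiB
print(pc_dd, amiga_dd, amiga_hd)
```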

Said third-party device drivers are flexible enough that they'll handle
disks from other platforms which use PC-compatible disk controllers such as
the Atari ST, Acorn Archimedes, Sun workstations, and later Apple Macs.

The Amiga pretty much died before HD disks became standard, although some
Amiga-compatible HD drives exist and gave 1760kiB per disk. These were like
hen's teeth even back then. I have one and have never seen another.



[cctalk] Re: Knockoffs, was: Low cost logic analyzer

2023-03-15 Thread Peter Corlett via cctalk
On Tue, Mar 14, 2023 at 09:16:02PM +, Jonathan Chapman via cctalk wrote:
[...]
> It's nice to support the designers in some capacity, but buying knockoffs
> fuels the ecosystem that creates knockoffs. With our stuff, it's never
> been that a single knockoff operation eats our lunch, it's that there's a
> zillion of them that run maybe 100 boards and disappear. Death by a
> thousand cuts. They charge $1-5 less while running the cheapest possible
> boards, stuffing with salvaged chips, etc. Meanwhile, we're having to pay
> for runs of boards with hard gold plating and buy genuine parts from
> Mouser.

I'm not currently in the market for an XT-IDE (probably just as well, as they
seem to be out of stock), but this sort of product appeals to me and I'd buy
one if I had an 8-bit ISA machine. $60 for the real deal is impulse-buy
territory, and risking a knock-off to save $1-5 isn't worth it. However, I'd
still much rather buy through AliExpress than the likes of Tindie, and it's
only partly about the sticker price:

AliExpress quotes an all-in price in euros including VAT and shipping, and
gives a delivery deadline, typically 14-30 days away, although I can choose
to pay for a faster service. They take payment via iDEAL and never see my
card number. I pay the quoted amount and no more. The package typically
arrives in the Netherlands within a fortnight, gets rubber-stamped through
customs because AliExpress have prepaid the VAT on my behalf, and lands on
my doorstep the following day. I know how long customs checks take because
the package has tracking and I get frequent status updates.

So: pay money and stuff turns up on time. This brings joy.

Tindie quotes a VAT-exclusive price (since it does not handle VAT at all), and
shipping information is buried in their awful interface. AFAICT, it's $25
for the XT-IDE, or possibly that's just for a bare XT-IDE PCB, sent via
USPS's cheapest untracked service, which takes about a month to reach the
Netherlands. When it arrives, I'll be shaken down for some random amount of
VAT and handling fees before the package is released from customs, and this
adds a further week's delay even if I pay promptly.

So: pay a lot more money, then wait indefinitely before paying even more
money, then wait some more for stuff to turn up. The lack of tracking or
deadline causes worry that it will never turn up. This does not bring joy.

The greatest enemy of US small businesses is US business practices,
particularly when it comes to shipping. If you impose unnecessary extra
unpredictable expense and inconvenience which tells the rest of the world
you don't really want to do business with them, don't act surprised when the
rest of the world takes their custom elsewhere.



[cctalk] Re: Chatgpt : I had a retro dream

2023-02-07 Thread Peter Corlett via cctalk
On Tue, Feb 07, 2023 at 12:27:33PM +0100, Cedric Amand via cctalk wrote:
> I've looked at the problem a bit, there are two issues to solve at first
> glance ; (A) there doesn't seem to be a telnet server library in python,
> so whatever you do you have to write your own telnet server, which is a
> bit more than just handling the sockets and newlines

You're overthinking this. Go old-school and write it as a regular REPL
command which is spawned from inetd. You don't actually need to implement
TELNET protocol for this as any sane telnet(1) client has sensible defaults
in the absence of a protocol negotiation, which is just as well as a lot of
modern software gets confused if you send the TELNET escape code even though
the protocol requires it. (FTP clients are particularly shoddy in this
regard, as most of them think it's a weird kind of HTTP and get the framing
wrong on edge cases such as strings containing NUL and CR characters, and if
they were really phoning it in, SP and/or '+'.)

Only heavyweight and/or high-performance daemons need to roll everything by
hand. I did a funky whois server in Perl many moons ago for fun, but it was
quite unnecessary to do so and inetd would have been quite up to the load.
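The inetd approach above can be sketched in a few lines: the program just reads stdin and writes stdout, and inetd wires those to the accepted socket. This is a minimal illustration, not the poster's actual gateway; the service name and path in the usage note are hypothetical.

```python
#!/usr/bin/env python3
# A bare REPL suitable for spawning from inetd. No TELNET option
# negotiation is attempted; a line-mode telnet client copes fine with
# sensible defaults.
import sys

def repl(infile=sys.stdin, outfile=sys.stdout):
    print("ready.", file=outfile, flush=True)
    for line in infile:
        cmd = line.strip()
        if cmd in ("quit", "exit"):
            break
        # Placeholder handler: a real gateway would call out to the API here.
        print(f"you said: {cmd}", file=outfile, flush=True)
    print("bye.", file=outfile, flush=True)

if __name__ == "__main__":
    repl()
```

Wired up with an inetd.conf line along the lines of `chatgpt stream tcp nowait nobody /usr/local/bin/gateway.py gateway.py` (service name and path invented for the example), that's the whole server; tcpwrappers then handles access control as noted below.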

> and of course (B) this has to be linked to someone's API key, although we
> could simply release the code and have everyone run their own little
> gateway.

The latter would seem to be the best approach, and of course if you're using
inetd, tcpwrappers deals with the access-control issues.

> So, it's both simple and not simple. Maybe another language ( perl comes
> to mind - NetServer:Generic ) would be better suited to run a telnet
> server.

And I see NetServer::Generic was written by Charles Stross! However, what
with him having been distracted by a writing career resulting in dozens of
rather good books, it's not been maintained since 2000, and I see no
indication that it actually speaks TELNET protocol either. From what I
vaguely recall of Charlie's work in the 1990s dotcom boom, the thing he
would have written this for was machine-to-machine communication which
didn't involve TELNET.

These days, Perl users would probably reach for the more modern Net::Server
or one of its subclasses, or just glom a REPL onto inetd.



[cctalk] Re: USB Attached 5.25" drives?

2023-01-21 Thread Peter Corlett via cctalk
On Fri, Jan 20, 2023 at 07:55:50PM -0800, geneb via cctalk wrote:
[...]
> I'm surprised nobody has mentioned the AppleSauce yet. Yes, it requires a
> Mac. Yes, they're currently out of stock, but Yes, it's absolutely the
> best solution out there for disk imaging. https://applesaucefdc.com/

It's certainly priced as if they think it's the "best solution". But it's
also closed-source, driven by a weird GUI tool, and unavailable for at least
six months. Charging a whopping $70 for shipping and tossing it into the
regular international mail is just icing. So it's not even close to my idea
of "best" on any of those fronts, but hey, whatever works for you.

My objection to such a thing being closed-source isn't just ideological. If
I am trying to read a disk which has an unrecognised format or is so mangled
that the software throws up its arms, I'm completely out of luck adding
support to it myself.

Their response to Linux and Windows users is basically "run our software on
a pirate copy of MacOS in a VM". Yes, users of disk imaging tools are
unlikely to care much about violating copyright, but it's probably best to
not say the quiet part out loud here.



[cctalk] Re: AI applied to vintage interests

2023-01-17 Thread Peter Corlett via cctalk
On Tue, Jan 17, 2023 at 10:16:18AM +, Peter Coghlan via cctalk wrote:
[...]
> How about translating code from Z80 which has several registers to 6502
> with rather fewer? That would seem to need some more intelligent thinking
> on how to simulate the unavailable registers without causing additional
> difficulties.

It is often said that the 6502 has 256 registers, i.e. zero page.

So e.g. LD A, (HL) could be mechanically transformed into the sequence LDX
#0, LDA (h, X), and STA a, with h and a being zero page locations shadowing
HL and A. On the 65C02 the first two operations can be replaced with a simple
LDA (h), although it may still be useful to index via X to simulate EXX
without performing an expensive copy.

As it stands, that replaces a one-byte instruction with a six-byte sequence,
which is obviously not great, but a relatively simple peephole optimiser can
eliminate many of the redundant loads and stores so it wouldn't be quite so
bad. After all, one important source of stores is the flags register, which
I ignored in the code fragment. A _good_ optimiser can do a lot of clever
analysis and transformation, and would probably be needed to handle all of
the edge cases well, but would be too large and CPU-intensive to run on a
Z80 or 6502 system.
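One of the simplest peephole rules described above can be sketched in a few lines: a store to a zero-page shadow register followed immediately by a reload of the same location is redundant, because the value is still in the accumulator. The shadow-register names here are made up for illustration.

```python
# Toy peephole pass over mechanically translated 6502 code.
# Instructions are (mnemonic, operand) pairs.
def peephole(ops):
    out = []
    for op in ops:
        if op[0] == "LDA" and out and out[-1] == ("STA", op[1]):
            continue  # value is already in A: drop the redundant reload
        out.append(op)
    return out

# e.g. the boundary between two translated Z80 instructions:
code = [("STA", "z_a"), ("LDA", "z_a"), ("STA", "z_b")]
print(peephole(code))  # [('STA', 'z_a'), ('STA', 'z_b')]
```

A real translator would need many more rules (and flag-liveness analysis), which is where the "too large to run on a Z80 or 6502" point bites.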

It'd be easier to bodge a Z80 into a 6502 machine than try and translate the
code. That's what often happened back in the day, after all.



[cctalk] Re: long lived media (Was: Damage to CD-R from CD Sleeve

2023-01-17 Thread Peter Corlett via cctalk
On Tue, Jan 17, 2023 at 05:42:55AM +, Chris via cctalk wrote:
[...]
> The only answer that anyone can provide is redundancy. Keep 2 or 3 copies
> of everything on seperate external drives. Every 3 to 5 years buy new
> drives and transfer the data to them. Or just run checkdisk twice a year
> and wait for 1 drive to start popping errors. Replace it. Wait for other
> to fail. Then replace it.

If you mean CHKDSK.EXE, it's broadly equivalent to Unix fsck plus a surface
scan, and all fsck does is check and repair filesystem _metadata_. If the
metadata is corrupt then that's a good sign that the data itself is also
toast, but a successful verification of the metadata does not tell you
anything useful about the data itself.

The surface scan asks the drive to read each sector, and relies on the disk
correctly identifying sectors which have changed from when they were
written. This is almost always the case, but that "< 1 in 10¹⁴" in the
datasheet is still not zero. And that's before we consider dodgy SATA cables
and buggy disk controllers. (SAS won't save you either: what it gives in
increased quality, it takes away in extra complexity.)

On typical Windows desktop computers, the probability that something else
will go wrong and destroy the system is way higher than the raw error rate
of the disk, but on non-toy systems with many tens or hundreds of terabytes
of data, the probability of a disk lying rises uncomfortably close to 1. A
good filesystem needs to defend against disks which do that. FAT and NTFS
are not good filesystems by that measure.



[cctalk] Re: Pertec controller; was: anybody need 1/2" tape drives?

2022-12-01 Thread Peter Corlett via cctalk
On Wed, Nov 30, 2022 at 08:10:27PM -0500, Paul Koning via cctalk wrote:
[...]
> 5V tolerant does not mean 5V compatible. I have right now some 5V devices
> I want to control, and it's not exactly clear whether a 3.3V device will
> drive outputs high enough to reliably make 5V devices see them as high.
> Arduinos can be had in actual 5V models (5V power, standard 5V logic
> levels in and out). Not the fast ARM ones but for many purposes good
> enough.

There's no single "standard 5V logic levels". The usual comparison is
between TTL and CMOS of course, but there are also the subfamilies which are
mostly compatible, right up until they aren't.

Anyway, you can usually drive a TTL(-compatible) input from a 3.3V output
because TTL treats anything above 2V as a logic 1. The "usually" caveat is
because there are pathological devices out there, so check those datasheets
to see if they're compatible. In particular, some microcontrollers can only
source limited current and some rare old TTL devices are quite thirsty. (At
least they won't get rarer if you try this, as you'll burn out the
microcontroller first.)



[cctalk] Re: Inline Serial Device?

2022-11-12 Thread Peter Corlett via cctalk
On Sat, Nov 12, 2022 at 10:28:09AM +, Tony Duell via cctalk wrote:
[...]
> The other day I saw a product with a flashing LED, the flash rate was set
> with a knob. Yes, a microcontroller with a pot connected to an analogue
> input and LED hung off an output port. This is the sort of thing I'd do
> with a couple of transistors or an NE555 depending on which turned up in
> the junk box first.

Farnell Nederland is quoting me €1.06 (+21% VAT) for the cheapest brand of
555 in stock. Their search won't let me find the cheapest microcontroller
without drilling down further, but an 8 pin AVR is €0.88. That's single item
quantities in DIP packaging, as is typical for small home projects. The 555
will also need a capacitor for its RC timer circuit which is another few
tens of cents. And that's why people use microcontrollers to blink LEDs.

The MCU in the Pi Pico is also well under a euro if you buy a reel of 3,400
of them. That's probably a few too many for an average hobbyist :)



[cctalk] Re: Bubble Memory

2022-10-21 Thread Peter Corlett via cctalk
On Fri, Oct 21, 2022 at 12:15:02PM +1100, Doug Jackson via cctalk wrote:
[...]
> Yet another American seler who doesn't understand how simple overseas
> shipping is.

As far as I can tell, the price to ship anything overseas from the USA is
twice the value of the item, plus fifty bucks, plus ten bucks per ounce.
Whether USPS actually charge this much or it's just sellers trying it on, I
neither know nor care.

So I don't bother even looking at American sellers any more.



[cctalk] Re: Fwd: Philips P2000C carrying strap

2022-09-30 Thread Peter Corlett via cctalk
On Sun, Sep 25, 2022 at 10:39:46AM -0500, Adrian Stoness via cctalk wrote:
> On Sun, Sep 25, 2022 at 10:26 AM Tony Duell via cctalk <
> cctalk@classiccmp.org> wrote:
>> Does anyone have a Philips P2000C CP/M luggable with the carrying strap?
>> I will be restoring such a machine in the near-ish future and mine is
>> lacking the strap. Clear photos of the end fittings that slot into the
>> machine, the dimensions of them, etc would be a great help in making
>> something up.

> get ahold of the phillips radio museum in holland they might have photos?
> they have some of the computers on display

Note that there are (at least) _two_ Philips museums: the "Stichting tot
Behoud van Historische Philips Producten" (Foundation for the Preservation
of Historic Philips Products) and the Philips Museum. Their websites are
https://www.sbhp.nl/ and https://www.philips-museum.com/. Both are in
Eindhoven, as is much of the interesting bits of Philips itself.

The former appears to be volunteer collectors of mainly analogue-era Philips
gear and I can almost smell the chain-smoked roll-ups just from the photos,
whereas the latter looks rather more corporate.

(I am occasionally contacted by Philips' recruiters trying to lure me to
work at some nasty industrial park near Eindhoven airport. There is usually
tumbleweed after I point out the seven hour commute and ask if they've
considered remote-working.)

I only note this because I have Weekend Vrij and a Museumkaart, and my
random spin for where to visit this weekend landed on Eindhoven and thence
to the Philips Museum, which reminded me of this thread. Unfortunately, SBHP
is closed at weekends (and doesn't accept Museumkaart, but I could have
probably scraped together the €4 entry fee) which is a shame as it looks by
far the more interesting of the two. If I spot a P2000C and remember, I'll
try and get a photo although I doubt they'll let me dig it out of the
cabinet and go over it with my micrometer...

It may also be worth reaching out to the HomeComputerMuseum (sic) in Helmond
(https://www.homecomputermuseum.nl/) who are quite friendly and have a
well-curated collection, including quite a lot of Philips gear. It's not
directly relevant to this query, but they have a very impressive collection
of CD-i machines, hardware prototypes, and media. They have a P2000C, which
is on display for the public to use, and a suitable donation would probably
get all the photos and measurements you want:
https://www.homecomputermuseum.nl/en/collectie/philips/philips-p2000c/

As a last resort, there's the Bonami Games and Computers Museum in Zwolle
(https://computermuseum.nl/) although it's basically just a huge barn with a
load of random stuff piled in it and poor labelling, so I'd try them last: I
took some lovely atmospheric pictures of 60s and 70s Big Iron when I
visited, but have no idea what half of it is. I suspect they don't know
either.



[cctalk] Re: Cell phone as a dial up modem.

2022-08-11 Thread Peter Corlett via cctalk
On Wed, Aug 10, 2022 at 11:53:34PM -0600, Grant Taylor via cctalk wrote:
> Does anyone know if it's possible, or -- better -- have experience using a
> cell phone as a dial up modem?

I did it routinely in the late 1990s and early 2000s. I stopped once I got a
GPRS-capable handset, since that was much cheaper to run: 1MB of data cost
the same as a one minute call (which could shift 144kB at best, although
30-40kB was more typical) on Vodafone UK in the late 2000s. By coincidence,
this is still the case for me now on Lebara NL, but the prices are *much*
lower.

My phone used a relatively obscure corner of the GSM standard known as CSD
(Circuit Switched Data) which was essentially implemented as a flag set by
the handset to tell the base station that it should use a dialup modem codec
instead of the GSM voice codec for this call. CSD would only use a single
GSM timeslot and was limited to 9600 bps, but HSCSD (High Speed Circuit
Switched Data) could use multiple slots and telcos usually charged a lot
more for such calls.

So far, so good and It Just Works(TM), right? Unfortunately, there are
confounding factors:

* I suspect you're in the USA, which used NIH instead of GSM, and what it
  supported and still supports is anyone's guess.

* While GSM networks still exist, they've been pared down somewhat to make
  space for 4G and coverage can be patchy or suffer from congestion. 4G has
  *no* native support for phone calls: handsets either fall back to GSM
  (like my "new" iPhone SE which I had to buy after KPN turned off its 3G
  network) or 3G, or tunnel voice calls over VoIP.

* Wikipedia also notes that "After 2010 many telecommunication carriers
  dropped support for CSD, and CSD has been superseded by GPRS and EDGE
  (E-GPRS)." Except of course that GPRS and EDGE are packet-switched
  services and no call is made, so it is not a direct substitute.

* Modern handsets may not give you sufficient access to the cellular modem's
  serial port to send the appropriate AT commands to configure and make a
  (HS)CSD call.

On the upside, 2G is mainly being kept around for the benefit of millions of
embedded devices which have 2G modems, so I suspect CSD is still supported
by extant 2G networks despite Wikipedia's claim to the contrary. I have not
tested this hypothesis.

So, just come to Europe and use an embedded GSM module instead of a whole
phone :)



Re: Cctalk subscription disabled

2022-05-11 Thread Peter Corlett via cctalk
On Wed, May 11, 2022 at 07:14:23AM -0400, Bill Degnan via cctalk wrote:
> I get these every so often despite my gmail account. I believe when you're
> on a thread that has an email address within in it that gets flagged, all
> associated emails are also.flagged, based on how the reply all setting is
> used. My *guess*

No, you get them *because* of your GMail account. GMail is a walled-garden
messaging platform which only grudgingly and intermittently exchanges mail
with standards-compliant Internet mail services.

Google is the AOL of the 21st century.



Re: interesting DEC Pro stuff on eBay

2022-04-22 Thread Peter Corlett via cctalk
On Thu, Apr 21, 2022 at 07:12:07PM -0400, Sean Conner via cctalk wrote:
[...]
> Agree here. I loved the 68K and have fond memories of writing programs in
> it. But while the x86 has been Frankensteined into 64 bits, I don't think
> I can see the 68K ever being a 64-bit architecture. I don't think there
> are enough unused bits in the instruction formatting for that.

The Apollo 68080 core claims to have 64 bit registers, but the whole thing
is woefully underdocumented. If there's documentation of its instruction
encodings beyond the vasm source code, I can't find it.



Re: Origin of "partition" in storage devices

2022-01-31 Thread Peter Corlett via cctalk
On Mon, Jan 31, 2022 at 07:51:28PM -0500, Paul Koning via cctalk wrote:
[...]
> Yes, RT-11 is a somewhat unusual file system in that it doesn't just
> support contiguous files -- it supports ONLY contiguous files. That makes
> for a very small and very fast file system.

> The only other example I know of that does this is the PLATO file system.

The first such one I encountered was Acorn's DFS. "Very small" is relative:
the filesystem came on a 16kiB ROM and stole 2,816 precious bytes of RAM
for its workspace. When you already only had 28,160 bytes free after the OS
had taken its fill, that's quite an extra imposition.

Other filesystems like this are anything which is expected to be generated
from a master image and written out linearly to mostly read-only storage,
such as ISO 9660, UDF, embedded systems, "booter" games and demos, etc.



Re: Origin of "partition" in storage devices

2022-01-31 Thread Peter Corlett via cctalk
On Mon, Jan 31, 2022 at 02:21:19PM -0800, Fred Cisin via cctalk wrote:
[...]
> That limit lasted until MS-DOS 3.31 / PC-DOS 4.00 After that, the limit
> was bumped up to 2GB. (Probably would have been 4GB if they had used an
> UNSIGNED 32 bit number, and given up the option of having negative file
> and drive sizes)

Fun factoid: that 2GiB limit on *filesystem* size is actually due to the use
of a signed *eight*-bit field, namely the number of sectors in a cluster.
This limits clusters to 64 sectors, or 32kiB. FAT16 supports a shade under
2**16 clusters, resulting in that 2GiB limit.
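As a sanity check, that limit falls straight out of the arithmetic. (A sketch: 65,524 is the commonly quoted maximum FAT16 data-cluster count, an assumed figure filling in the "shade under 2**16" above.)

```python
# Rough check of the FAT16 2GiB filesystem limit described above.
SECTOR_BYTES = 512
MAX_SECTORS_PER_CLUSTER = 64   # signed 8-bit field caps this at 64
MAX_CLUSTERS = 65524           # "a shade under 2**16" (assumed exact value)

limit = SECTOR_BYTES * MAX_SECTORS_PER_CLUSTER * MAX_CLUSTERS
print(limit)                   # 2147090432 bytes, just under 2 GiB
```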

FAT's *file* size limitation is indeed due to a 32 bit field. The ISO 9660
standard offers an "interesting" solution to that, namely having multiple
directory entries for the same filename. So if you want to store files
larger than 4GiB on a CD-ROM, the filesystem won't hold you back.



Re: AOL diskettes

2022-01-19 Thread Peter Corlett via cctalk
On Tue, Jan 18, 2022 at 04:47:36PM -0700, Grant Taylor via cctalk wrote:
> On 1/18/22 2:21 PM, Peter Coghlan via cctalk wrote:
>> https://www.thisiswhyimbroke.com/floppy-disk-table/
> I like it!

The ratios are wrong: it's about twice as thick as it ought to be. It's
apparently been designed by somebody who has seen a picture of a floppy but
never used one.

> But I hate the price.

Get a LACK table from IKEA (€6.99) and adorn it with 3D prints, or even old
floppies? I'd advise against trying to machine patterns into it though as
that low-end stuff is basically made of laminated cheese.



Re: VAX 780 on eBay

2022-01-03 Thread Peter Corlett via cctalk
On Sun, Jan 02, 2022 at 06:59:47PM -0700, ben via cctalk wrote:
> On 2022-01-02 6:28 p.m., Zane Healy via cctalk wrote:
[...]
>> On that note a Raspberry Pi 2b running SIMH/VAX is about 1.6 VUPS.
> But can the Pi handle a gazillion students all time sharing at once @
> 2400? How long was the VAX timesharing era as I suspect networked PC's
> come out soon after that.

Probably about as well as the actual VAXen back when I was failing my
degree. I remember trying (IIRC) "SET PRI" once to lower my session priority
one notch, and while I did eventually get a DCL prompt back, it was
basically unusable. It was hardly a speed demon even at default priority,
and I returned to my Amiga to get anything useful done.

Raspberry Pis have between zero and six serial ports, depending on model,
amount of faffing about, and whether you consider three-wire LVTTL to be a
serial port or demand a full seven-wire RS232 port. Beyond that, you're
going to have to multiplex connections over some other port, with Ethernet
being by far the most obvious choice.

Meanwhile, back at the university in the mists of time, they had rooms full
of dumb terminals, mostly VT220s but some VT420s, connected to an X.25 PAD
which then winged its way to the VAX cluster, and indeed onwards to JANET,
so it's not "cheating" to multiplex over a packet-switched WAN connection on
the Pi. I expect the VAX's directly-connected serial port(s) would have been
reserved for the operator console.

The Pi 4 is between three and ten times faster than the Pi 2, depending on
task. I'd expect SIMH to fall towards the bottom of that range, but 5 VUPs
isn't to be sniffed at, especially as that is presumably a per-core
limitation and you could run a cluster of four of them. On a device which
draws ~7W at full tilt.



Re: Women of Computing

2021-12-04 Thread Peter Corlett via cctalk
On Sat, Dec 04, 2021 at 06:20:33PM -, Chris Long via cctalk wrote:
> Great.not.
> 
> Why do we need woke Lego?

To annoy people who use dogwhistles.



Re: Apple cube cleaning

2021-11-18 Thread Peter Corlett via cctalk
On Thu, Nov 18, 2021 at 01:09:34PM -0500, Paul Koning via cctalk wrote:
> On Nov 18, 2021, at 11:59 AM, Peter Corlett via cctalk wrote:
>> I assume you've already attempted to throw the usual household stuff at it
>> as if it was a phone or TV. If not, dig out the glass cleaner and microfibre
>> cloths and get onto that.
>> 
>> In addition to the plausible suggestion elsethread of WD-40, I'd also give
>> it a gentle waft of a hairdryer (or heat gun set suitably-low) to try and
>> melt the glue without melting the plastic, and then it may be more amenable
>> to wiping-off.

... and then you snipped the following paragraph where I pointed out the
case may be acrylic rather than polycarbonate, which is kind of important
given this:

> One other possibility is ethanol. But be sure to test that first, because
> it WILL damage plexiglas ("lucite"). That's about the only plastic it
> hurts, though.

Plexiglas and Lucite are brand names for acrylic.



Re: Apple cube cleaning

2021-11-18 Thread Peter Corlett via cctalk
On Thu, Nov 18, 2021 at 03:34:19PM +0100, jacob--- via cctalk wrote:
> I got a Apple cube here as part of a larger haul, at some point someone
> placed a bit of tape on the clear polycarbonate case, the tape is long
> gone but the yellow glue remains.

> Am unsure about the hardness of it, if I could use sugar cubes to rub it
> off, anyone knows a scratch free way to get it off ?

I assume you've already attempted to throw the usual household stuff at it
as if it was a phone or TV. If not, dig out the glass cleaner and microfibre
cloths and get onto that.

In addition to the plausible suggestion elsethread of WD-40, I'd also give
it a gentle waft of a hairdryer (or heat gun set suitably-low) to try and
melt the glue without melting the plastic, and then it may be more amenable
to wiping-off.

Brasso, T-Cut, or similar come in useful if you manage to scratch it anyway.
Test your technique on some matching scrap plastic first before moving on to
the case. You say "polycarbonate" but Wikipedia says "acrylic", and these
have quite different mechanical properties so you should verify which
plastic the case is actually made from. Maybe the resin code will tell you,
but I suspect it'll just say 7 (other).

If all else fails, style it out by sticking your own label over it :)



Re: Linux and the 'clssic' computing world

2021-10-25 Thread Peter Corlett via cctalk
On Mon, Oct 25, 2021 at 10:18:51AM +0200, Sijmen J. Mulder via cctalk wrote:
[...]
> It's especially frustrating when, after having put in the work, projects
> refuse even trivial patches for Solaris and derrivatives or sometimes even
> BSDs because 'who uses that anyway'. (I include the patches in pkgsrc
> instead.)

Solaris is owned by Oracle, a bunch of litigious bastards who readily
freeload off Linux and other Open Source projects but are rather reluctant
to give back much beyond gateway drugs to their closed-source offerings. I
note the existence of CDDL which appears deliberately designed to clash with
the GPL. That sort of thing can leave a nasty taste in the mouth.

The specific details differ, but this basically also applies to Microsoft
and Windows.

Anyway, this hypothetical patch submitter has apparently put in minimal
effort ("trivial patches") and now implicitly expects the project maintainer
to integrate it immediately, and then do the thankless task of maintaining
and testing it indefinitely on (multiple releases of) a closed-source
platform which is actively hostile to their work. For free, presumably.

To a rough approximation, nobody uses Solaris. It's not a good use of any
developer's time to support it unless they're being paid to do so.



Re: An American perspective on the late great Sir Clive Sinclair, from Fast Company

2021-09-28 Thread Peter Corlett via cctalk
On Mon, Sep 27, 2021 at 01:14:54PM -0700, Yeechang Lee via cctech wrote:
> Liam Proven says:
[...]
>> If you were going to spend as much as a new car on an early home
>> computer,
> If you're going to exaggerate for effect, don't exaggerate so much that
> your meaning is lost.

I went and looked up the numbers. A 1983 Fiat Panda was £3k (list). At the
same time, the C64 was selling for £345. So it's an order-of-magnitude out,
but still a formidable sum of money: a factory-new rustbucket (e.g. Renault
Duster) is about €10k today and I wouldn't willingly drop €1k on a machine
with similar deficiencies to the C64.

Any Brit lucky enough to have £345 burning a hole in their pocket in 1983
would have more likely gotten a BBC Micro for £399. The Beeb had less memory
and the graphics and sound were less useful for games, but it had a faster
CPU (2MHz uncontended), much better BASIC, higher-resolution graphics, and
was generally a rather more well-rounded and serious machine.

Once you were doing useful things on the Beeb, a dual disk drive and decent
monitor would beckon, at which point the price quickly creeps upwards to
that of a second-hand car.



Re: Linux and the 'clssic' computing world

2021-09-28 Thread Peter Corlett via cctalk
On Mon, Sep 27, 2021 at 09:55:08AM -0400, Murray McCullough via cctalk wrote:
> [...] WIN 11 is much more secure than previous Windows versions. [...]

Windows 11 hasn't even been released yet, so this cannot be known. Any
claims of "much more secure" come from press releases and other marketing
materials. Microsoft don't exactly have a good reputation for accuracy and
honesty here.

However, it would have to try quite hard to be *less* secure than previous
versions of Windows. Unless one actually needs to run Windows, the
comparison should be against other platforms which can also do the job, and
not just whatever garbage Microsoft churned out last time.



Re: VAX4000 VLC diagnostics/console

2021-09-05 Thread Peter Corlett via cctalk
On Sat, Sep 04, 2021 at 09:34:30AM -0400, emanuel stiebler via cctalk wrote:
> On 2021-09-04 08:30, Antonio Carlini via cctalk wrote:
>> "Digital Diggings" couldn't get BlueSCSI to work on either VAX or Alpha:
>> https://www.youtube.com/watch?v=zFEh7owqHxU&t=36s. That's a pity as it's
>> much cheaper than SCSI2SD.

Apparently not so much cheaper any more, since it's based on the dirt-cheap
"Bluepill" SBC which has basically vanished off the market, at least on this
side of the Pond. They seemed to be mainly distributed via AZ-Delivery in
Germany, who are out of stock. I guess they're waiting for the same boat
from China that everybody else is.

The UK-based vendor of the BlueSCSI has had to hike the price to cover the
cost of getting hold of the remaining stock of Bluepills, which must be even
trickier than if they were here in the EU where I also can't just click a
"buy again" button and have a load fall through the letterbox next-day.

The only reason to have ever cared about the Bluepill was that it was so
cheap that one could gloss over its major design flaws which made it
unsuitable for a lot of projects. Now we have the Raspberry Pi Pico which is
also still rather cheap, can actually be bought, and is much more powerful
and much less buggy, the Bluepill is fairly moot.

> OK guys, but please compare that to costs for SCSI drives (please 6 of
> them, as you have partitions on the SDCARD), cost of SCSI controllers
> (QBUS/UNIBUS anyone?), or even IDE drives. So this is whining on a pretty
> high level, and there is no noise, so you can keep your machines working.
> (and *** very easy backup too) So, yes, there probably could be cheaper,
> but the guy spent a lot of time making it working.

It certainly looks like a more robust product than the BlueSCSI (not least
because it's not Bluepill-based), but it appears to only ship from Canada,
Australia, or the UK. The Australian distributor only takes PayPal so that's
an immediate hard no. The others seem to think that postage charges are a
trade secret which will only be revealed after committing to the sale, but
they do at least admit that they just throw it into the regular postal
service and hope for the best. So that's €30-50 for postage, at least a
month's wait if it arrives at all, plus a random courier surcharge due to
the sender inevitably screwing up the customs and VAT paperwork (assuming
they even bothered). Sod that, I'll make do without.

Breaking news for American businesses looking to sell into Europe: the UK
has left the EU, British exports have cratered due to red tape, its haulage
networks have all but collapsed, and a civil war is brewing which will
probably end in a break-up of the Union. You might want to pick a
distributor somewhere saner.

The people selling both the BlueSCSI and SCSI2SD do need to understand that
designing and building a product is only part of the job. If the ordering
process is broken, and fulfilment is slow, expensive, and stressful, this
drives away customers no matter how great the product is. If they just want
to make a bit on the side to fund their hobby, that's just fine, but perhaps
don't play-act at being a manufacturing business with global distribution.

I note that the guidance I get from the Dutch tax authorities points out in
that matter-of-fact Dutch way that they can tell the difference between a
genuine business and a hobby hoping for a tax break.



Re: Ultrix-11

2021-08-26 Thread Peter Corlett via cctalk
On Wed, Aug 25, 2021 at 12:04:34PM -0400, Douglas Taylor via cctalk wrote:
[...]
> In the video on youtube and in my experience the screen formating codes
> seem to be incorrect.  You can see this in the video when a man page is
> brought up.  The bolding does not occur.  I get the same result after
> installing.  The same with vi, it doesn't work in the video and doesn't
> work after installation.  I've tried Teraterm, putty, xterm all with the
> same result.  Haven't tried an actual terminal yet.  What was your
> experience?

Terminal styling control codes are hit-and-miss even when exclusively using
modern tools. These days, I pretty much exclusively use iTerm2 as my
terminal emulator, which has a bewildering array of compatibility-tweaking
controls to fiddle with, because everything seems to interpret the alleged
standards differently.

When I was relatively new to Linux I just put terminal oddities down to me
not knowing what I was doing and configuring it wrong, but then had the
opportunity to connect a real VT100 to it. "export TERM=vt100" is all that's
needed, right? There were *loads* of rendering errors, and I got my first
lesson in how well-tested Linux's termcap/terminfo database was.

Fast-forward a quarter-century and our terminal emulators are expected to
handle Unicode, which brings variable-width characters to our fixed-grid
terminal emulators, yet not break too badly if the endpoint is not
Unicode-aware and sends something like Latin-1 instead.

Bold and so on are set via SGR ("Select Graphic Rendition") sequences, and
Wikipedia gives a summary in its "ANSI escape code" article.
Here's a quick bash one-liner to display them on your terminal:

for i in $(seq 1 127) ; do printf '\033[%dm SGR %d \033[0m\n' $i $i ; done

(Progressively reduce that 127 if your terminal doesn't have scrollback and
you can't see the earlier entries.)

My *terminal* (i.e. iTerm2) supports 1-5, 7, 9, 30-39, 40-49, 90-97 and
100-107, i.e. bold, dim, italic, underline, blink, inverse, strikethrough,
and all of the colours.

However, a lot of useful software includes its own nested terminal emulator,
and support is less good: connecting to a remote server using mosh(1) loses
dim and strikethrough; tmux(1) turns italic into inverse, except on FreeBSD
where it also loses dim, blink and strikethrough and mysteriously gives me
another underline at 21 (probably due to it knowing about double-underline,
but doing a substitution for the benefit of my terminal which doesn't). And
if I use "watch -dc" to run a command repeatedly and highlight the changes,
it only supports the 8 basic colours.

If you test your own systems you may well come up with different results,
because this nested emulation relies on termcap/terminfo databases knowing
about the full capabilities of your terminal (MacOS doesn't include
sitm/ritm for italic, for example) and TERM being set correctly at each
level of nesting, so good luck with that.



Re: Extremely CISC instructions

2021-08-24 Thread Peter Corlett via cctalk
On Tue, Aug 24, 2021 at 08:47:33AM -0500, John Foust via cctalk wrote:
> At 04:13 AM 8/24/2021, Peter Corlett via cctalk wrote:
>> move.b ([0x12345678, %pc, %d0.w*8], 0x9abcdef0), ([0x87654321, %sp], %a0*4, 
>> 0x0fedcba9)
> And which language and compiler case was this aimed at?  

I have no idea and dread to think, although I chose a worst-case example
which doesn't actually make much sense.

Those scary-looking double-indirections in the instructions are just the
result of a generalised EA calculation mode which combines an inner offset
(which may be 0), base register, optional shifted index register, optional
indirection (before or after indexing), and an outer offset (which may also
be 0), to which the MOVE instruction itself adds indirection.

Combining a few of these is normal even on RISC machines: PC-plus-offset and
SP-plus-offset are used to get constants and stack-based variables. Adding a
shifted index is useful for array lookups. If it's an array of pointers,
indirection makes sense. And finally, if it's a pointer to a structure and
we want something other than the first field, a constant needs adding. So
it's easy to see how one can occasionally end up using this addressing mode
and enabling much of its functionality.

I suspect this addressing mode is used much more often with
address-calculating instructions such as LEA than those which operate
directly on the address like MOVE. If the 68000 didn't have a split register
file, this could also be (mis)used like it is on x86 to do cheap arithmetic
and multiplication by 3, 5, and 9.

> Wasn't that a primary driver for complex CISC instructions? That if it
> happened often enough, it would be faster or smaller as a single
> instruction?

You can get a lot of simpler m68k instructions in 22 bytes :)

>> (Yet Amiga owners used to poke fun at PC owners with their excessively
>> complex x86, which has simpler addressing modes.)
> I dunno, 68000 seemed like PDP-11 to me, and I often say one of the big
> reasons I quit a particular job was the prospect of my role changing to
> having to write 80x86 assembler all day.

I encountered the 68000 first, and when I eventually saw the PDP-11
instruction coding a few decades later, the lineage seemed obvious.



Re: Extremely CISC instructions

2021-08-24 Thread Peter Corlett via cctalk
On Tue, Aug 24, 2021 at 01:38:33AM +0100, Tom Stepleton via cctalk wrote:
> For the sake of illustration to folks who are not necessarily used to
> thinking about what computers do at the machine code level, I'm interested
> in collecting examples of single instructions for any CPU architecture
> that are unusually prolific in one way or another. [...]

The one that immediately comes to mind is the kitchen-sink MOVE in the 68020
which comes in at 22 bytes: 2 for the instruction itself, then two full-size
"full extension words" (one each for the source and destination) which
themselves are 2 for the flags, and 4 each for the base and outer
displacement.

So that'd be something like:

move.b ([0x12345678, %pc, %d0.w*8], 0x9abcdef0), ([0x87654321, %sp], %a0*4, 
0x0fedcba9)

Which does something like this:

* Compute 0x12345678 + %pc + %d0.w * 8
* Fetch 4 bytes from that address and add 0x9abcdef0 to that.
* Fetch a byte from that address.
* Compute 0x87654321 + %sp
* Fetch 4 bytes from that address and add %a0*4 + 0x0fedcba9 to that.
* Store the previously-fetched byte to that address.
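The two effective-address calculations above can be sketched as a little interpreter. This is illustrative only: the register values and memory contents are invented, and 68020 details such as sign extension of the scaled index are glossed over.

```python
# Sketch of the double-indirect EA calculations for the MOVE above.
# Register and memory contents here are invented for illustration.
MASK = 0xFFFFFFFF
mem = {}  # sparse byte-addressed memory, big-endian like the 68k

def write32(addr, value):
    for i in range(4):
        mem[addr + i] = (value >> (24 - 8 * i)) & 0xFF

def read32(addr):
    return int.from_bytes(bytes(mem.get(addr + i, 0) for i in range(4)), "big")

def src_ea(pc, d0):
    inner = (0x12345678 + pc + (d0 & 0xFFFF) * 8) & MASK  # base + index first
    return (read32(inner) + 0x9ABCDEF0) & MASK            # then indirect + outer

def dst_ea(sp, a0):
    inner = (0x87654321 + sp) & MASK                      # indirect before indexing
    return (read32(inner) + a0 * 4 + 0x0FEDCBA9) & MASK

write32(0x12345678, 0x10)       # pretend the pointer table holds 0x10
print(hex(src_ea(0, 0)))        # 0x10 + 0x9abcdef0 = 0x9abcdf00
```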

(Yet Amiga owners used to poke fun at PC owners with their excessively
complex x86, which has simpler addressing modes.)

Unsurprisingly, when NXP tried to make m68k a bit more RISCy so it could go
faster and compete with ARM in the embedded sphere, these mad addressing
modes were among the first things tossed out, along with any other encoding
which took more than 6 bytes.



Re: WTB: Amiga 3000 front/floppy

2021-08-16 Thread Peter Corlett via cctalk
On Mon, Aug 16, 2021 at 11:10:47AM -0400, Ethan O'Toole via cctalk wrote:
> Scored an A3000. Prior owner cut a hole where the floppy goes and mounted
> a PC floppy in there. Looking for an original front plate and the matching
> floppy drive to restore machine to original look.

Those funky 150RPM Amiga HD drives are made from unicorns. I have one. I may
have even seen another at some point in the last 30 years, but couldn't say
for sure.

If you don't need HD support, I suggest just taking the front off that PC
drive and 3D-printing a plausible-looking fascia for it.



Re: Linearizing PDF scans

2021-08-15 Thread Peter Corlett via cctalk
On Sun, Aug 15, 2021 at 01:29:37AM -0400, J. David Bryan via cctalk wrote:
> On Sunday, August 15, 2021 at 12:55, Kevin Parker wrote:
[...]
>> ...but on my limited understanding it required support from the web
>> server to actually give effect to this.
> I believe that's right. At least all of the servers I used seemed to
> support this option.

The option in question is called "range requests" and is documented in the
original HTTP/1.1 standard from way back in 1997. Any web server worth its
salt should support it automatically when serving static files. It's used
for resuming downloads, for example.
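For illustration, here's how a client asks for a byte range using Python's standard library. The URL is a placeholder, and the actual network call is left commented out so the snippet stands alone.

```python
import urllib.request

# Ask for just the first kilobyte of the file.  A server supporting
# range requests replies "206 Partial Content" with only those bytes.
req = urllib.request.Request(
    "https://example.com/manual.pdf",        # placeholder URL
    headers={"Range": "bytes=0-1023"},
)
print(req.get_header("Range"))               # bytes=0-1023
# data = urllib.request.urlopen(req).read()  # would return at most 1024 bytes
```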

[...]
> Assuming one only looks at a few pages, it would certainly reduce the
> amount of data served, though, of course, if one requested the entire
> file, it would actually be a slight disadvantage.

A larger disadvantage is the pause to download the next page when leafing
through a PDF, which can be quite distracting to people who can read without
moving their lips.

[...]
> As you say, it requires server support, and to be honest I've not checked
> recently to see if servers bother byte-serving anymore. [...]

If a server lacks support for range requests, it is either very old or a
small hobby project, and shouldn't be let anywhere near the public Internet.



Re: Install Floppies (Was: Compaq Deskpro boards/hard drives from

2021-07-26 Thread Peter Corlett via cctalk
On Sun, Jul 25, 2021 at 11:46:17AM -0600, Grant Taylor via cctalk wrote:
> On 7/24/21 10:26 PM, Chuck Guzis via cctalk wrote:
>> My recollection of the DMF Microsoft period was that if you purchased a
>> retail MS product using the DMF format and couldn't get it read on your
>> system, a call to MS would result in a standard format copy being shipped.

> It's my understanding that The DMF disks that Microsoft (and comparable from
> IBM with PC-DOS) used a different non-FAT file system which took up less space
> on the disk, thus yielding more storage for data. But that they both fit on
> the same /standard/ ""1.44 MB disks.

> I also seem to recall that Macintosh's could get 1.7 MB on the same ""1.44 MB
> disks.

HD disks can hold "up to" 2MB (12,500 bytes per track, times two sides, times 80
tracks), as printed on some of the more misleadingly-labelled brands. However,
splitting that into sectors and adding guard bands reduces the usable space.
Similarly, DD disks are "up to" 1MB.
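Those "up to" figures fall straight out of the data rate and geometry. (A sketch: 500 kbit/s MFM at 300 RPM is the assumed HD data rate behind the 12,500 bytes/track quoted above.)

```python
# Raw (unformatted) capacity per the figures above: 12,500 bytes/track
# at HD, i.e. 500 kbit/s of MFM for one 200ms revolution at 300 RPM.
track_bytes_hd = 500_000 // 8 // 5     # 12,500 bytes per track
raw_hd = track_bytes_hd * 2 * 80       # 2 sides x 80 tracks = 2,000,000 ("2MB")
raw_dd = raw_hd // 2                   # DD runs at half the data rate: "1MB"
pc_hd = 18 * 512 * 2 * 80              # formatted PC standard: 1,440kiB
print(raw_hd, raw_dd, pc_hd // 1024)
```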

When writing, PC-style disk controllers scan for the appropriate sector header
then switch to write mode to overwrite the old sector data. This requires guard
bands between sectors and sector headers. The PC's standard of 1,440kiB seems
to have particularly generous guard bands, possibly to account for really shoddy
old systems which may be slow at switching modes and/or whose drives are
spinning a bit fast.

The Amiga could get 880kiB on a DD disk, and 1760kiB on a HD disk if you have
one of those hen's teeth drives which spin at 150RPM. It does this by doing a
read-modify-reformat of the entire track of 11 or 22 sectors, which allows
omitting all of the guard bands except for the one between the start and end of
the track. The hardware could do the mode-switch thing, but I'm not sure that it
saw much use, if any.

There was a third-party device driver for the Amiga which took out some of the
unused label areas in Amiga disk sector headers, and squeezed 12 or 24 sectors
per track. It could also optionally go up to track 83, giving 1,032,192 or
2,064,384 bytes per disk, although that's kind of risky.
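The Amiga figures check out the same way. (A sketch: 512-byte sectors are assumed, and "up to track 83" is read as 84 tracks numbered 0-83.)

```python
# Amiga formatted capacities: sectors/track x 512 bytes x 2 sides x tracks.
def disk_bytes(sectors_per_track, tracks=80):
    return sectors_per_track * 512 * 2 * tracks

print(disk_bytes(11) // 1024)   # DD: 880 kiB
print(disk_bytes(22) // 1024)   # HD with a 150RPM drive: 1760 kiB
print(disk_bytes(12, 84))       # third-party driver, DD: 1,032,192 bytes
print(disk_bytes(24, 84))       # third-party driver, HD: 2,064,384 bytes
```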

The DMF format presumably also takes the approach that if the disk isn't
intended to be written to by random drives, they can tighten the guard bands
somewhat. I'm surprised that they went with 21 sectors per track when 22 is
clearly possible. Perhaps it was a hedge against people writing to them anyway,
or machines being unable to read them.

These figures assume MFM encoding. Halve them for FM encoding, with a hard
upper limit of double for fancier schemes. My back-of-beermat calculation
suggests 2,560kiB is
plausible for HD disks on the Amiga or similarly-flexible third-party controller
for the PC.

> But I could be completely wrong.

Apple had tighter control over their platform, so could tweak the timings to
increase the available space for data. I don't know whether they did: the only
time I've used a floppy on a Mac is to interoperate with a PC so it had to use
the lowest common denominator.



Re: Compaq Deskpro boards/hard drives from the late 1990s

2021-07-21 Thread Peter Corlett via cctalk
On Wed, Jul 21, 2021 at 10:51:30AM -0700, r.stricklin via cctalk wrote:
[...]
>> Regarding your "IDE HDDs were extremely rare" comment, did *anyone* other
>> than Quantum release an IDE drive in that 5.25" form factor? I can't
>> think of any, everything else was 3.5", although some early vendor's
>> drives were the same height as a "half height" 5.25" drive.
> CDC 94208-51, 62, -75. The -51 is Compaq drive type 17.

We have a winner!

I went looking for more details, expecting to find another FrankenDisk made
from an MFM drive and MFM-ATA bridge board. But compare the photos in these
listings:

https://www.recycledgoods.com/cdc-94205-51-43mb-5-25-hh-mfm-hdd/
https://www.recycledgoods.com/cdc-94208-51-43mb-5-25-half-height-ide-hard-drive-as-is/

The drive itself appears to be exactly the same unit. I don't know the
purpose of the board on the back but they have more or less the same
components in roughly the same positions on both drives so are presumably
just different revisions of the same board. The boards on the bottom are
however clearly completely different and not just because of the IDE versus
MFM connectors.

So this is a true 5.25" IDE disk.



Re: Compaq Deskpro boards/hard drives from the late 1990s

2021-07-21 Thread Peter Corlett via cctalk
On Wed, Jul 21, 2021 at 06:48:08AM -0500, Jules Richardson via cctalk wrote:
[...]
> Regarding your "IDE HDDs were extremely rare" comment, did *anyone* other
> than Quantum release an IDE drive in that 5.25" form factor? I can't think
> of any, everything else was 3.5", although some early vendor's drives were
> the same height as a "half height" 5.25" drive.

Not quite answering the question you asked, but optical drives from 15-25
years ago are 5.25" IDE devices.

Miniscribe also shipped MFM disks with an ATA adaptor board -- okay, IDE is
exactly that, but these were visibly discrete components -- although I can't
find an example of such a contraption which shipped with a 5.25" MFM disk.
It's likely that the adaptor board would "work" when transplanted onto a
5.25" disk, although getting it to properly handle the different geometry
may be challenging.

Given my failure to find a proper counterexample (not that I seriously
expected to find one), I agree that the Quantum Bigfoot was almost certainly
one of a kind. It was a niche product which only had a few years of
viability, and given that apart from capacity, Bigfoots (Bigfeet?) were
terrible drives, we'd have paid close attention to any competing drives and
would still remember them.



Re: VT340 Emulation

2021-06-26 Thread Peter Corlett via cctalk
On Fri, Jun 25, 2021 at 04:53:08PM -0600, Grant Taylor via cctalk wrote:
> On 6/25/21 2:48 AM, Peter Corlett via cctalk wrote:
[...]
>> The other is in the software layer: the standards are a mess and the
>> full gamut of serial protocols are not available and/or not implemented
>> properly.
> I can't tell if that's a USB specification problem or a problem with what
> people have executed / built (thus far).

A bit of both. The USB communications device class (CDC) is designed for
modems, and it can be hit-and-miss trying to speak to something else. Of
course, that doesn't prevent one from ignoring that standard and just
creating a bespoke USB device which happens to produce RS232, and FTDI do
just that. FTDI's devices are better than standard CDC, but that's not a
terribly high bar.

> From my naive point of view, I wonder if it would be possible to build
> some sort of USB device that has a traditional UART that has supporting
> circuitry to connect to the host over USB. -- I say this because it sounds
> like many ~> most ~> all (?) USB to RS-232 converters are doing something
> inferior.

Well, these things will contain a UART of some form, because how else could
they work? For all I know they may even incorporate an actual 16550 IP core
in the design and talk to that from its firmware, although a UART uses
bugger all gates by modern standards and you can just get an intern to
design one in a lunchbreak rather than pay for an IP core.

It doesn't seem impossible to build a decent USB-serial dongle which caters
to all of the weird and wonderful edge cases, but the market for such things
is small enough that they would be quite expensive to produce, reducing the
market further.

>> The physical connector and pinout is an irrelevance in comparison. I own
>> a soldering iron.
> LOL (literally) I love your sentiment there. I quite agree with it.

Something I'm putting off is installing a USB micro-B plug on my old iPod
whose 30 pin socket has finally given up the ghost. I could just buy a new
one, except they don't make them any more: the current device called an
"iPod" is a glorified advertising hoarding which has a usability disaster of
a media player bolted on as an afterthought.



Re: VT340 Emulation

2021-06-25 Thread Peter Corlett via cctalk
On Thu, Jun 24, 2021 at 06:46:41PM -0600, Grant Taylor via cctalk wrote:
[...]
> The 4k monitors that I've worked with have been ultra high DPI. This means
> that things that don't have DPI settings end up being tiny on the screen.

It works fine on MacOS, except for various garbage ports from Windows
(Audacity is the one which comes to mind first) using "cross-platform"
toolkits which ignore or misuse the native APIs. It's somewhat more
hit-and-miss on Linux because it relies a lot more on those toolkits for GUI
application software as there aren't so many native alternatives. But hey,
it's still less broken than Windows.

Windows is a curiously-heavyweight bootloader for Steam. It serves no other
useful purpose.

> It's especially a problem if you try to mix non-4k and 4k monitors.

Disregarding the aforementioned software deliberately trying to subvert it,
this again just works on the Mac: drag a window from one monitor to the
other and the window contents will change its DPI to match.

>> OK, sorry. "Real" is for me here, physically the same connectors like
>> DB25/DB9/MMJ/etc ...
> So how does that differ than a USB-to-RS-232 with the proper passive
> adapter to go from DB-9 to DB-25 / MMJ.

USB-serial dongles tend to be a wretched experience for a couple of reasons.
The first is at the electrical layer: USB only has 5V available and
generating RICH CHUNKY VOLTS in such a small dongle is difficult and
expensive, so doesn't happen, and the voltage swing might not be wide enough
for older devices. The other is in the software layer: the standards are a
mess and the full gamut of serial protocols are not available and/or not
implemented properly.

The physical connector and pinout is an irrelevance in comparison. I own a
soldering iron.



Re: On compiling. (Was a way off topic subject)

2021-06-23 Thread Peter Corlett via cctalk
On Wed, Jun 23, 2021 at 11:42:22AM -0700, Van Snyder via cctalk wrote:
[...]
> I have a vague recollection of a story about a FORTH processor that put
> the addresses of the functions to be executed on the return-address stack
> (68000?) and then executed a RETURN instruction.

I was initially going to say that doesn't sound right because m68k's JMP
instruction supports all useful EA modes and a PEA/RTS combination takes two
extra bytes and is slower than a plain JMP. But pushing *many* return
addresses is more plausible because each function will then magically call
each other in turn. I'm still not entirely convinced it'd be enough of a win
(if any) over a conventional run of JSR instructions. Perhaps it actually
misused RTM, which I never quite understood because Motorola's documentation
on modules is rather opaque and it's only available on the 68020 onwards.

This wheeze works on x86 too--and of course most other CPUs--but it can make
mincemeat of performance on (some) modern CPUs because caches assume that
CALL and RET are paired.

ROP (https://en.wikipedia.org/wiki/Return-oriented_programming) is an
interesting application of this technique, usually for nefarious purposes.
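To make the wheeze concrete without writing 68k assembly, here's a sketch in Rust: the return stack becomes an explicit Vec of function pointers, and each pop plays the part of the RTS that transfers control to the next word whose address is already sitting on the stack. The word names and the tiny data stack are invented for illustration.

```rust
// Return-stack threading, simulated: instead of a run of JSR
// instructions, the addresses of the words to execute are pushed
// (in reverse order, since the stack is LIFO) and "returning" pops
// and runs the next one.

type Word = fn(&mut Vec<i32>);

fn three(data: &mut Vec<i32>) { data.push(3); }
fn four(data: &mut Vec<i32>) { data.push(4); }
fn add(data: &mut Vec<i32>) {
    let b = data.pop().unwrap();
    let a = data.pop().unwrap();
    data.push(a + b);
}

fn run(return_stack: &mut Vec<Word>, data: &mut Vec<i32>) {
    // Each pop is the moral equivalent of an RTS chaining to the
    // next pre-pushed "return address".
    while let Some(word) = return_stack.pop() {
        word(data);
    }
}

fn main() {
    // Pushed in reverse, so execution order is: three, four, add.
    let mut return_stack: Vec<Word> = vec![add, four, three];
    let mut data = Vec::new();
    run(&mut return_stack, &mut data);
    assert_eq!(data, vec![7]);
}
```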



Re: Early Programming Books

2021-06-21 Thread Peter Corlett via cctalk
On Sun, Jun 20, 2021 at 08:06:26PM -0600, ben via cctalk wrote:
[...]
> My latest gripe, is I still am looking for a algorithm to generate code
> for a single accumulator machine for an arithmetic expression. Parenthesis
> need to evaluated first and temporary variables allotted, thus a two pass
> algorithm. Everything is single pass. Recursive decent can parse but can't
> generate 'correct' code. A-(B+C) is LD B ADD C ST T1 LD A SUB T1, not LD A
> ST T1 LD B ST T2 LD C ADD T2 NEGATE ADD T1

TL;DR: you're probably setting so many implicit design constraints that such
an algorithm cannot exist.

That accumulator machine sounds a bit like the 6502, for which there are
plenty of BASIC interpreters which can parse and evaluate that expression.
BBC BASIC is my favourite, but Apple's or Microsoft's implementations also
do the job. The ones I've looked at were recursive-descent. Other more advanced
parsers take too much precious memory.

The modern hotness for parsing is parser combinators, but they're really
just a fancy way to write recursive-descent parsers which are easier to read
and reason about. They certainly can parse something as simple as "A-(B+C)",
since recursive-descent can.

Code-generation is a whole different can of worms and unlike the
well-trodden path of parsing, is still not a solved problem in computer
science. All of the compiler textbooks I've checked, including the infamous
Dragon book, handwave code generation, introducing the tree-matching
algorithm which works well for straight-line code on their made-up
educational CPU architectures, but leaves a lot on the table with real-world
CPUs. This limitation probably doesn't matter for your CPU.

It seems that you might want to produce code directly from parsing. This
shortcut can work if you're producing code for a stack machine such as
FORTH, but not a register machine (unless you use your registers as a
stack). A common wheeze on the 6502 is to implement a virtual stack machine
interpreter, and then the compiler targets that.
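A minimal sketch of such a virtual stack machine, in Rust rather than 6502 assembly (the opcode set here is invented for illustration): a single-pass recursive-descent compiler can emit this code directly while parsing, because a stack machine needs no temporary allocation. "A-(B+C)" simply becomes Load(A) Load(B) Load(C) Add Sub.

```rust
// A toy stack-machine interpreter of the kind a 6502 compiler might
// target. Operands live on an explicit value stack, so no temporary
// variables need to be allotted by the code generator.

#[derive(Clone, Copy)]
enum Op {
    Load(i32), // push a value (standing in for "load variable")
    Add,       // pop two, push sum
    Sub,       // pop two, push difference
}

fn execute(program: &[Op]) -> i32 {
    let mut stack = Vec::new();
    for op in program {
        match *op {
            Op::Load(v) => stack.push(v),
            Op::Add => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a + b);
            }
            Op::Sub => {
                let b = stack.pop().unwrap();
                let a = stack.pop().unwrap();
                stack.push(a - b);
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // A=10, B=3, C=4, so A-(B+C) should be 3.
    let program = [Op::Load(10), Op::Load(3), Op::Load(4), Op::Add, Op::Sub];
    assert_eq!(execute(&program), 3);
}
```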

To do code-generation "properly", you need a multi-pass algorithm: parse the
source into a tree, optionally do tree-to-tree transformations to do type
checks and/or apply optimisations, and then apply the tree-matching
algorithm to that to produce machine code. This is all well-described in any
good compiler textbook.

Note that except for trivial expressions, you'll need a stack and/or other
registers to squirrel away intermediate results. Consider "(A*B)-(C*D)": you
need to save the result of one of the multiplications before doing the other
and then the subtraction.
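To illustrate the parse-then-walk approach, here's a hedged sketch in Rust of a code generator for a single-accumulator machine. The Expr type, mnemonics, and temporary-naming scheme are all invented for illustration, but the spill rule (evaluate a complex right-hand operand first, store it in a fresh temporary, then evaluate the left) produces exactly the sequence quoted earlier for A-(B+C): LD B / ADD C / ST T1 / LD A / SUB T1.

```rust
// Walk an expression tree, emitting code for an accumulator machine.
// A simple right operand needs no temporary; a complex one is
// evaluated first and spilled to Tn.

enum Expr {
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Sub(Box<Expr>, Box<Expr>),
}

fn gen(e: &Expr, temps: &mut u32, out: &mut Vec<String>) {
    match e {
        Expr::Var(v) => out.push(format!("LD {v}")),
        Expr::Add(l, r) | Expr::Sub(l, r) => {
            let op = if matches!(e, Expr::Add(..)) { "ADD" } else { "SUB" };
            match r.as_ref() {
                // Simple right operand: operate on it directly.
                Expr::Var(v) => {
                    gen(l, temps, out);
                    out.push(format!("{op} {v}"));
                }
                // Complex right operand: evaluate and spill it first.
                _ => {
                    gen(r, temps, out);
                    *temps += 1;
                    let t = format!("T{temps}");
                    out.push(format!("ST {t}"));
                    gen(l, temps, out);
                    out.push(format!("{op} {t}"));
                }
            }
        }
    }
}

fn main() {
    use Expr::*;
    // A-(B+C)
    let e = Sub(
        Box::new(Var("A")),
        Box::new(Add(Box::new(Var("B")), Box::new(Var("C")))),
    );
    let mut code = Vec::new();
    gen(&e, &mut 0, &mut code);
    assert_eq!(code, ["LD B", "ADD C", "ST T1", "LD A", "SUB T1"]);
}
```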

In the case of the tree-matching algorithm, if your CPU (or rather, the
model given to the tree-matcher) is too limited to evaluate a given
expression, it will definitively fail and give you a subtree that it cannot
match. You then have the "joy" of fixing the model, possibly by adding
virtual instructions which are really library calls. For a simple CPU, such
calls might account for most of the generated code, at which point a virtual
machine becomes a good idea.

The book "Writing interactive compilers and interpreters" may be more
suitable if you're looking for quick hacks to get up and running on a small
machine, assuming you can lay your hands on a copy.



Re: Hard To Believe This Person Is Serious

2021-03-26 Thread Peter Corlett via cctalk
On Fri, Mar 26, 2021 at 09:02:20AM +0100, Christian Corti via cctalk wrote:
[...]
> Why is the price marked in GBP and why doesn't he ship to Germany?

Assuming anything gets shipped at all. Perhaps they don't want to take money
from anybody too local who might cause them some grief.

I note they ask £260.00 for "Economy Delivery (Economy Int'l Postage)" to
the Netherlands "between Wed. 31 Mar. and Fri. 16 Apr". That's quite the
markup on a bit of bubblewrap, a Jiffy bag, and a €7 stamp.



Re: 80286 Protected Mode Test

2021-03-14 Thread Peter Corlett via cctalk
On Sun, Mar 14, 2021 at 04:32:20PM +0100, Maciej W. Rozycki via cctalk wrote:
> On Sun, 7 Mar 2021, Noel Chiappa via cctalk wrote:
>>> The 286 can exit protected mode with the LOADALL instruction.
[...]
> The existence of LOADALL (used for in-circuit emulation, a predecessor
> technique to modern JTAG debugging and the instruction the modern x86 RSM
> instruction grew from) in the 80286 wasn't public information for a very
> long time, and you won't find it in public Intel 80286 CPU documentation
> even today. Even if IBM engineers knew of its existence at the time the
> PC/AT was being designed, surely they have decided not to rely in their
> design on something not guaranteed by the CPU manufacturer to exist.

The Wikipedia page on LOADALL claims "The 80286 LOADALL instruction can not
be used to switch from protected back to real mode (it can't clear the PE
bit in the MSW). However, use of the LOADALL instruction can avoid the need
to switch to protected mode altogether."

I find that paragraph very persuasive. The author knows about LOADALL and
the desire to use it to avoid going into protected mode, and also explains
that there's a specific exception in its behaviour which prevents returning
to real mode. All of the other hacky uses of LOADALL would be unnecessary if
it could be used to switch modes at will. It just doesn't seem like
something that would be written if it was wrong.

Is Wikipedia incorrect and the 286 LOADALL *can* exit protected mode, and if
so, how?



Re: Any interest in a Floating Point Systems AP-120 array processor?

2021-03-02 Thread Peter Corlett via cctalk
On Tue, Mar 02, 2021 at 01:06:24PM +0100, Peter Corlett via cctalk wrote:
[...]
> I'll say. Modern kit gets 1 FLOPS per MHz per core [...]

And indeed with the speed of modern machines with clock speeds in the GHz
and TFLOPS, and thousands of cores in some devices, we use large SI
multipliers so routinely we kind of forget they're there at all, and I made
a typo. It should of course be "1 FLOPS per Hz per core", or "1 MFLOPS per
MHz per core".



Re: Any interest in a Floating Point Systems AP-120 array processor?

2021-03-02 Thread Peter Corlett via cctalk
On Mon, Mar 01, 2021 at 10:40:41PM -0800, Boris Gimbarzevsky via cctalk wrote:
[...]
> Out of curiousity, decided to benchmark one of my old, really cheap PC
> laptops that got in 2010 and it managed 30 Mflops using double precision
> arithmetic. 10 Mflop performance no longer as impressive as it used to be.

I'll say. Modern kit gets 1 FLOPS per MHz per core, give or take an order of
magnitude depending on the specific architecture. That machine must have
been appallingly bad to only manage 30 MFLOPS. Although perhaps you meant
GFLOPS, in which case it sounds about right.

The Haswell CPU in my 2014-vintage laptop manages "up to" 147 GFLOPS. Which
is an order of magnitude slower than its GPU. Useful FLOPS for scientific
computing rather than contrived numbers for benchmarking may well lose an
order of magnitude or two in overhead, but it's still not hanging about.



Re: Greaseweazle

2021-02-03 Thread Peter Corlett via cctalk
On Wed, Feb 03, 2021 at 01:09:50AM -0800, jim stephens via cctalk wrote:
> On 2/2/2021 11:51 PM, Peter Corlett via cctalk wrote:
>> The Raspberry Pi Pico has a similar price to the Blue Pill and seems a
>> much better machine for this task, although I haven't combed through its
>> reference manual yet.
> For capture and writing (if that's part of the design) I heard there's a
> dedicated coprocessor for the GPIO pins. It might be useful for offloading
> some of the proccessing from some external circuitry to do the capture or
> output.

I have now pulled the reference manual to look at the GPIO stuff, and it is
indeed very shiny. There's only space for 32 coprocessor instructions per
GPIO bank, but that's possibly all you need: it is apparently possible to
implement a full UART with handshaking in that.

Controlling a floppy is broadly the same level of complexity as a UART, so
it seems that the Pico would be the perfect tool for the job. Now if only I
could actually lay my hands on one...

> I don't know what's included for the capabilities. And apparently since
> the chip is new, there's only assembler programming for it now.

It's a Cortex M0, so any compiler which can produce freestanding ARM code
can generate code for it. So that's gcc, clang, rustc, etc. I suspect I can
just use my existing embedded tooling after telling it to use a different
subarchitecture.

> There's a video comparing the Pico, ESP32, ESP32-S and Blue Pill. The latter
> was a bit low in resources compared to the others.

OTOH, I can actually buy a Blue Pill today from reasonably reputable local
suppliers such as Amazon or AZ-Delivery, and they only cost €15 for five
including delivery. The Pico will be a game-changer if/when there's actual
stock here on the Continent. Importing directly from the UK is no longer a
sensible proposition.

> I'll try to find it and post it if anyone's interested.

No rush, especially as I only suffer video as a last resort when information
is unavailable in any other form.



Re: APL\360

2021-02-03 Thread Peter Corlett via cctalk
On Tue, Feb 02, 2021 at 08:50:56PM -0700, ben via cctalk wrote:
> On 2/1/2021 6:07 AM, Peter Corlett via cctalk wrote:
[...]
>> You're describing a failing in C and similar languages stuck in the
>> 1960s. Here's a Rust method that does add-exposing-carry:

>> https://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_add
> So why could this not have been done 30 earlier?

(I assume you mean "30 years earlier" here.)

Two reasons, one of which I was already aware of back then in 1991, and one
which is only obvious in hindsight.

The first is simple toxic masculinity. It's more manly to write stuff in
machine code (then) or C (now), apparently.

The second is a combination of Moore's Law and computer science. To handwave
wildly, Rust is what you get when you look at C++'s mistakes of the last
forty years and start again. If you've used a C++ compiler from the 1980s or
early 1990s, you'll have found the experience harrowing: computers were
still too slow and too small to do a good job, and various important
language design and code-generation techniques were not yet known and/or
were too impractical to implement. It all improved rapidly in the 1990s and
2000s, bringing us more or less to the top of the sigmoid curve.

u32::overflowing_add() returns a (u32, bool), i.e. a two-member struct.
Those early compilers cannot return structs directly, so the caller would
have to reserve space (on the stack, probably) and pass the address for the
function to fill in. That simple add-producing-carry has just become a
function with a half-dozen instructions, several of which are needed to
check the carry flag and convert it into an integer, plus many memory
accesses, just to provide a C wrapper around a simple add instruction. At
that point it does make sense to toss the compiler and write assembly
language.

Modern compilers do a lot of heavy inlining as a matter of course, will
split structs and perform dataflow analysis on individual elements, and
generally avoid memory access unless they absolutely have to.
u32::overflowing_add() should turn into a single add instruction and the
carry flag will probably be tested directly by a branch instruction and
never get converted to a boolean variable. It'll be as good as, if not
better than, hand-written assembler.

This is a particularly trivial function as well. Rust just wouldn't be
viable with 30 year old compiler technology.

>> The documentation doesn't explicitly say "carry" because Rust is
>> architecture-neutral and it's down to LLVM to decide how to express it in
>> machine code, but on x86 (and probably ARM) the boolean return value
>> comes directly from the carry flag.
> mincing words, sigh.

RISC-V doesn't have a carry flag. It handles overflow by e.g. using the BLT
instruction to branch if the result is smaller than the number being added
to. So documentation which assumes the world is x86 will not make sense here.
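For example (a sketch, not actual RISC-V output): the carry out of an unsigned addition can be recovered by comparing the wrapped sum with either operand, which is exactly the branch-if-less-than idiom described above, and it agrees with what overflowing_add returns.

```rust
// Carry computed with a comparison rather than a flag: an unsigned
// sum is smaller than its inputs exactly when it wrapped.

fn add_with_carry(a: u32, b: u32) -> (u32, bool) {
    let sum = a.wrapping_add(b);
    (sum, sum < a) // the "BLT" formulation of carry-out
}

fn main() {
    assert_eq!(add_with_carry(0xFFFF_FFFF, 1), (0, true));
    assert_eq!(add_with_carry(2, 3), (5, false));
    // Matches the standard library's definition:
    assert_eq!(add_with_carry(0xFFFF_FFFF, 1), 0xFFFF_FFFFu32.overflowing_add(1));
}
```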

[...]
>> You "don't believe in objects" yet then describe a problem which only
>> exists due to the lack of them and then present OO pseudocode to solve
>> it. A lot of OO languages suck of course, but the fundamental idea of
>> encapsulation is not the bit that sucks.

> Are objects? they only way to solve this problem. I see a object as data
> structure tied to some code. What I would like to see data structures
> having fixed constants like array start and end of a structure for a array
> as variables when called into use.

There's no real difference between a "constant ... when called into use" and
a pure function which returns a value based on the structure elements.

>> Here's it in Rust, where it takes in an arbitrary
>> array (pedantically, "slice", a (pointer, element count)-tuple) and
>> determines its length at runtime:
>> 
pub fn clear_indexed(array: &mut [usize]) {
    for index in 0 .. array.len() {
        array[index] = 0;
    }
}

> I don't want Information hiding, I want to know what it does clearly. If I
> can't figure it out how can a computer program do it. Where does ".len()"
> find the size?

That is an implementation detail which you don't need to know to be able to
write Rust code. However, the len() method actually just returns the element
count from the (pointer, element count)-tuple. The function is so trivial
that it is guaranteed to be inlined so it's exactly the same as if you had
accessed the field directly.

Before you ask, no, you can't access the field directly for all of the
excellent reasons explained by every "introduction to OOP" tutorial which I
don't need to repeat.

[...]
>> pub fn clear_iterator(array: &mut [usize]) {
>>     for elem in array {
>>         *elem = 0;
>>     }
>> }

>> Both code fragments generate equivalent assembly in t

Re: Greaseweazle

2021-02-02 Thread Peter Corlett via cctalk
On Tue, Feb 02, 2021 at 09:20:01AM -0800, Chuck Guzis via cctalk wrote:
[...]
> When I last proposed the STM32F407, I was met with "Oh, but the Blue Pill
> is cheaper". Okay, use the Blue Pill, but my code won't work with it. Not
> once has anyone contacted me and said "I'd like to try my hand at doing
> this, what can you tell me?". I've described the methodology of using an
> MCU elsewhere several times.

I have a pile of Blue Pill boards, and using it to read floppies was an
obvious application. However after running the numbers, it turned out there
isn't enough RAM to buffer an entire track from a HD floppy. It also has a
broken USB implementation just to liven things up a bit.

The Raspberry Pi Pico has a similar price to the Blue Pill and seems a much
better machine for this task, although I haven't combed through its
reference manual yet.



Re: APL\360

2021-02-02 Thread Peter Corlett via cctalk
On Mon, Feb 01, 2021 at 08:15:25PM +0100, Liam Proven via cctalk wrote:
> On Mon, 1 Feb 2021 at 20:00, Fred Cisin via cctalk
>  wrote:
>> I had always been told, "A pint is a pound, the world around."

"The world" meaning "the USA", of course.

> Aha! Does that mean a pint of water weighs 1lb?

Yes, to within the massive margins of error involved in prehistoric units.

A fluid ounce of fluid weighs an ounce, oddly enough. This equivalence
applies to both British and American units. Although it seems obvious that
this is how it's defined, it turns out that it's actually derived from the
gallon, because rationality goes out of the window when it comes to this
trainwreck of a system.

There are 16floz to an American pint and 16oz to a pound, which lines up
nicely and gives that rhyme. A British pint is 20floz whereas the pound
remains 16oz, hence the alternative rhyme "a pint of water weighs a pound
and a quarter".

As a bonus, different reference fluids are used, so the floz also differs by
roughly 4% as it crosses the Pond.

The Dutch haven't quite let go of "ons", "pond" and "pijnt". The latter two
have been fudged to 500g and 500ml. I think "ons" might just be used in
stock phrases now. Some products are also dual-labelled and include a floz
conversion; I did the maths because the number looked a bit off and it
turned out to be American floz.

> Interesting. I did not know.
>> I had already assumed that pub prices had inflated to higher than a pound.

The rhyme refers to weight, not currency...

The last pint of Real Ale I had before using my one-way Eurostar ticket the
hell out of the place was £4.20. Or four guineas if we're harking back to
the era of that rhyme.

> It was under £1 for ½litre of beer when I got here. In fact it was under
> US$1/ US 1pt. Now it's a bit more.

I did enjoy lining them up at €1.40/500ml in Bratislava, and in a tourist
trap at that. At some point I should spend more than six hours in Eastern
Europe, and a smaller proportion in a pub. Slovakia looked worth a visit
outside of the touristy bits, but my companion was being grumpy.

> Cheapest I had was CzK 17 for half a litre. At the time that was about 50¢.

It was €4.40 for a Happy Hour pijnt when I was last in one of my favourite
Amsterdam boozers, which was rather a long time ago now thanks to Rutte's
general incompetence at running the country. At today's exchange rate, the
€9.49 crate of beer from my local supermarket works out at €0.60/500ml.

>> Such worries call for having a few pints.
> It is one of the things I miss most in lockdown. And there's no
> electricity supply in my man-cave/basement so I can't even go down there
> and play with my old computers. :-(

Eh, at least the beer will be at cellar temperature.



Re: APL\360

2021-02-01 Thread Peter Corlett via cctalk
On Fri, Jan 29, 2021 at 01:12:55PM -0800, Chuck Guzis via cctalk wrote:
[...]
> Most old (pre S/360) digit/character-addressable architectures were
> big-endian (i.e. higher-order characters occupied lower addresses)
> Even PDP-11 isn't strictly little-endian, though Intel X86 definitely is.

I note that modern x86 and ARM have big-endian load and store operations, so
while both architectures are little-endian by default, there is no extra
overhead for handling big-endian data.

Little-endian tends to be more useful when doing multi-word arithmetic.
Big-endian is handy for text and human-readable numbers. That there are
heated arguments over which endianness is best mainly tells us that there's
bugger all in it either way. After all, the word "endian" is a satirical
device in Gulliver's Travels.

> Numbering of bits in a word is also interesting. Is the high order bit in
> a 64 bit word, bit 0 or bit 63? Both conventions have been employed.

On every CPU I've used, LSB has always been bit 0. Unlike endianness, this
is clearly better than the other way round since the value is 2**bit_number
and the bit number doesn't change if the value is converted into a different
word width.
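Both points can be demonstrated in a few lines of Rust (chosen only because it makes byte order explicit rather than an accident of the host):

```rust
// Endianness only matters once a value hits memory; bit numbering
// with LSB as bit 0 means bit n always has value 2**n, regardless of
// word width.

fn main() {
    let x: u32 = 0x1234_5678;
    assert_eq!(x.to_be_bytes(), [0x12, 0x34, 0x56, 0x78]); // big-endian
    assert_eq!(x.to_le_bytes(), [0x78, 0x56, 0x34, 0x12]); // little-endian

    // The bit number of a set bit survives widening to a larger word.
    let bit = 9;
    assert_eq!(1u32 << bit, 512);
    assert_eq!((1u64 << bit) as u32, 1u32 << bit);
}
```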

When it comes to I/O devices which don't do arithmetic, either convention
may appear. Hardware people rarely pick names or conventions that make sense
to software people.



Re: APL\360

2021-02-01 Thread Peter Corlett via cctalk
On Wed, Jan 20, 2021 at 02:05:37PM -0700, ben via cctalk wrote:
[...]
> I don't see languages in general have improved since the the mid
> 1960's. Hardware and language models don't reflect each other,
> and don't have extendable data sizes and types.
> PL/I seems to have been the best,but too tied to IBM.
> C standard 2131 complex numbers
> C standard 2143 dubble complex numbers
> Every machine I can think of had a carry flag of some type
> yet no language uses that to extend it self.

You're describing a failing in C and similar languages stuck in the 1960s.
Here's a Rust method that does add-exposing-carry:

https://doc.rust-lang.org/nightly/std/primitive.u32.html#method.overflowing_add

The documentation doesn't explicitly say "carry" because Rust is
architecture-neutral and it's down to LLVM to decide how to express it in
machine code, but on x86 (and probably ARM) the boolean return value comes
directly from the carry flag.
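As a sketch of what "extending the language with the carry" looks like in practice, here's multi-word addition built from overflowing_add. The little-endian limb layout is my own choice for illustration; a good compiler can lower this sort of thing to an ADD/ADC pair on x86.

```rust
// 64-bit addition from two 32-bit limbs, propagating the carry
// between them via the bool that overflowing_add exposes.
// Limbs are little-endian: [low, high].

fn add64(a: [u32; 2], b: [u32; 2]) -> ([u32; 2], bool) {
    let (lo, c0) = a[0].overflowing_add(b[0]);
    let (hi, c1) = a[1].overflowing_add(b[1]);
    let (hi, c2) = hi.overflowing_add(c0 as u32); // fold in the low carry
    ([lo, hi], c1 | c2)
}

fn main() {
    // 0x00000001_FFFFFFFF + 1 = 0x00000002_00000000.
    let (sum, carry) = add64([0xFFFF_FFFF, 1], [1, 0]);
    assert_eq!(sum, [0, 2]);
    assert!(!carry);

    // Carry out of the top limb:
    let (_, carry) = add64([0xFFFF_FFFF, 0xFFFF_FFFF], [1, 0]);
    assert!(carry);
}
```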

> I don't believe in objects because data structures don't have classes, but
> are more similar to each other. A window A structure is like window B but
> details are different. That makes things look portable when they are not.

> Constants still
> seem to be embedded in data structures, rather than abstract.
> -- zero array
> define LIMIT abc
> blah array[LIMIT]
> ...
> i = 0 while i< LIMIT array[i] = 0 i = i + 1 endw
> I would like say
> let LIMIT = abc
> blah array[LIMIT]
> i = 0 while i< array:LIMIT array[i] = 0 i = i + 1 endw

You "don't believe in objects" yet then describe a problem which only exists
due to the lack of them and then present OO pseudocode to solve it. A lot of
OO languages suck of course, but the fundamental idea of encapsulation is
not the bit that sucks. Here it is in Rust, where it takes in an arbitrary
array (pedantically, "slice", a (pointer, element count)-tuple) and
determines its length at runtime:

pub fn clear_indexed(array: &mut [usize]) {
    for index in 0 .. array.len() {
        array[index] = 0;
    }
}

(The code for C-style fixed-length arrays looks broadly similar but has a
bit more boilerplate because they're less useful.)

Iterating over indices is generally discouraged for a number of reasons, not
least being that the index may be out-of-bounds, but also because it can
inhibit vectorisation or parallelisation. You have no choice in broken
languages such as C, but many languages provide some notion of iteration
which guarantees to not go out-of-bounds:

pub fn clear_iterator(array: &mut [usize]) {
    for elem in array {
        *elem = 0;
    }
}

Both code fragments generate equivalent assembly in this trivial example
because the Rust compiler could prove at compile time that the index
variable can't go out-of-bounds. In more complex real-world code it cannot
reliably do so and will insert a run-time check which aborts if the index is
out-of-bounds. Or if it's C, trundle on and corrupt things.

Oddly enough, the state of the art has moved on a bit in the half-century
since C was invented. It's just that quite a lot of programmers haven't yet
noticed.



Re: APL\360

2021-01-29 Thread Peter Corlett via cctalk
On Fri, Jan 15, 2021 at 11:21:11AM -0500, Nemo Nusquam via cctalk wrote:
[...]
> In 1999, a fellow student in a UML course worked for a large information
> company (Reuters, I think?) and told me that they had embarked on an
> expensive s/w conversion project. Their back-end systems were implemented
> in APL and they could not find programmers -- even ones willing to learn
> APL for pay.

I'd have taken it if the price was right. Heck, I seem to have spent most of
my career wading through fossilised Perl, so it's not as if I'd notice much
difference.

Companies moaning that they can't find staff inevitably turn out to be
offering less than the going rate for what they're demanding and/or the
company is toxic. Usually both.



Re: APL\360

2021-01-29 Thread Peter Corlett via cctalk
On Thu, Jan 14, 2021 at 07:43:13PM -0800, Chuck Guzis via cctalk wrote:
[...]
> APL was difficult for those used to traditional programming languages, not
> primarily because of the character set, but because it's basically a
> vector/matrix programming language.

It is *also* the use of symbols. Firstly, some people are just symbol-blind
and prefer stuff spelled out in words. It's just how brains are wired.
Secondly, beyond BODMAS, the meaning and precedence of random symbols is
unclear to casual readers. Calls to named functions tend to be more
descriptive -- "map" and "filter" mean the same sort of thing across
functional(-inspired) languages -- and the precedence is obvious thanks to
the parentheses.

At a guess, part of the reason APL does this is so that the programmer
pre-tokenises the input to make it easier for the language to process.
Sinclair BASIC did this too, to much wailing and gnashing of teeth. It may
have even been inspired to do this by APL given the manual says Sinclair
BASIC was written by a "Cambridge mathematician".

The "modern" vector/matrix programming language most commonly used by
contemporary programmers use is probably SQL. It's amazing how many people
use it inefficiently as a key-value store when it's really a
matrix-transformation language even though it *looks* like an imperative
language. The 1970s has a lot to answer for.

> It's a different world from BASIC, for sure.

Yes, well, a lot of BASIC programmers have even more fundamental problems
with understanding important programming concepts, such as recursion and
pointers/references. Without those, a lot of important algorithms cannot be
implemented, or even understood.

> Neil maintained that its strength lay in thinking about things in a
> non-scalar way. I'll give him that--programming on STAR, where a scalar
> was treated by the hardware as a vector of length 1 (and thus very slow
> because of startup overhead) certainly led you toward thinking about
> things in vector operations, just like APL.

Modern x86-64 (and ARM etc) also (finally!) has useful vector instructions.
Unfortunately, the popular languages do not make their use terribly simple,
and mostly rely on compilers recognising idiomatic loop patterns over
scalars and transforming them. This works about as well as you might expect.
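For instance, this is the sort of idiomatic scalar loop a compiler may recognise and vectorise; the function name is invented, and whether it actually becomes SIMD depends entirely on the optimiser, not on anything in the source.

```rust
// An auto-vectorisation-friendly loop: the zip bounds the iteration,
// so no per-element bounds checks get in the optimiser's way.

fn saxpy(a: f32, xs: &[f32], ys: &mut [f32]) {
    for (y, x) in ys.iter_mut().zip(xs) {
        *y += a * *x;
    }
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    let mut ys = [1.0f32; 4];
    saxpy(2.0, &xs, &mut ys);
    assert_eq!(ys, [3.0, 5.0, 7.0, 9.0]);
}
```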



Re: Emails going to spam folder in gmail

2021-01-01 Thread Peter Corlett via cctalk
On Thu, Dec 31, 2020 at 07:43:12PM -0800, Michael Brutman via cctalk wrote:
> Disclaimer: I don't speak for Google ...

> The thread shows a lot of Google bashing. Insinuating that Google makes it
> difficult so that people follow the path of least resistance is part of
> that.

I didn't insinuate it: I said it out loud. Whether it's an explicit decision
by Google or just an emergent effect is moot. Google are clearly quite happy
with this state of affairs otherwise they'd do something about it.

[...]
> It took me less than a minute of searching to find this:
> https://support.google.com/mail/contact/bulk_send_new

It is not exactly difficult to find forms to fill in at Google. They *love*
their forms. All attempts at getting support are deflected by a request to
fill in a form. Nothing useful happens upon filling in a Google support
form, of course. An auto-ack at best, then tumbleweed.

Franz Kafka could have written a whole shelf of novels about Google.



Re: Emails going to spam folder in gmail

2020-12-30 Thread Peter Corlett via cctalk
On Wed, Dec 30, 2020 at 10:13:40AM -0500, Bill Degnan via cctalk wrote:
[...]
> Attempting to pull in this thread a tad, there are relatively simple
> measures that can be taken to bring a private mail server into compliance
> with gmail, Amazon, Microsoft level mail server protocol and
> authentication.

You have failed to explain why I should make any effort at all to jump
through random hoops set up by FAANG which seem to change on a weekly basis
and where doing so offers no guarantee of success.

> Its not just gmail. The simplest measures are done with DNS and TLS. Most
> of the mail that I see routinely falling into spam folder is from what
> appears to be spoofed domains. Many of these are legit messages

... so therefore they are not actually spoofed.

> [...] that dont have a properly configured DNS record,

I already have properly-configured DNS for mail: an MX record.

> preventing the receiving server from authenticating the FROM domain as
> owned by the sender.

SMTP is an unauthenticated protocol. Further, the futile attempts to bodge
authentication on to it with the likes of SPF and DKIM do not actually help
at all with spam. Until I just added them to my blacklist of pink providers
whose mail is unconditionally rejected, Google was quite happy to unleash a
firehose of spam at my server, all nicely DKIM-signed to tell me it came
from Google like I couldn't have already figured that out from the IP
address.

> A simple fix.

So, what simple fix is this?

SPF is extremely broken by design. The only useful configuration is a short
PASS list of valid-sender IP addresses and a FAIL of everything else (e.g.
"v=spf1 ip4:10.20.30.40 a -all"). This requires ensuring that you can
chokepoint all mail through those hosts, which is not always easy to
arrange.
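For concreteness, that sole useful configuration is a single TXT record in the zone; the domain and address here are of course placeholders:

```
example.org.  3600  IN  TXT  "v=spf1 ip4:10.20.30.40 a -all"
```

Anything not on the PASS list is hard-failed, so the moment legitimate mail originates from a host you forgot about, it bounces. That's the brittleness in a nutshell.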

DKIM attempts to "fix" SPF by adding cryptography, thus adding rather a lot
of extra complexity and CPU usage. This means that classic computers can no
longer send email, because they don't have enough grunt to overcome this
artificial barrier. It makes mail rather brittle and tends to break mailing
lists in an even more spectacular manner than SPF. Just to liven things up a
bit, DKIM is also patent-encumbered.

Then there's ARC which attempts to mitigate various deliverability problems
caused by DKIM making mail more brittle. No doubt further layers of gaffer
tape will follow when that breaks something.

And to what end? So the odds of a hypothetical message sent to a GMail user
ending up in their spam folder drops from 99% to 98%? Here's a nickel, kid,
get yourself a better mail provider.



Re: Emails going to spam folder in gmail

2020-12-29 Thread Peter Corlett via cctalk
On Mon, Dec 28, 2020 at 05:12:09PM -0500, Bill Degnan via cctalk wrote:
[...]
> For those of you who run your own mail servers please consider updating
> your DNS / authentication to match gmail standards.

Google has more resources than me. How about they update their systems to
match Internet email standards?

[...]
> There are a lot of how-to's on the web, each mail server is different and
> there is no simple fix that applies to all.

Where shall I send the invoices for the time to do this plus the ongoing
maintenance due to Google's embrace, extend, and extinguish approach to
Internet standards?



Re: misc stuff - free

2020-12-16 Thread Peter Corlett via cctalk
On Tue, Dec 15, 2020 at 08:09:04PM +0100, Johan Helsingius via cctalk wrote:
> On 15-12-2020 10:40, Liam Proven via cctalk wrote:
>> It's nothing new. 15y ago or something, there were umpteen Communities on
>> Livejournal for any conceivable subject or interest -- most created by
>> kids without the wits to check for others' before creating their own.
> This is one of the reasons why I miss good old USENET - with a public list
> of groups, and a clear hierarchy.

You know Usenet still exists, right? September finally ended and so the
volume is down, but that is no bad thing.

I run a transit server for my own amusement. (It's even listed in "The
Official TOP1000 Usenet Servers".) I keep toying with asking my peers to
send a full feed instead of just the handful of hierarchies and groups I'm
interested in.



Re: The best hard drives??

2020-11-22 Thread Peter Corlett via cctalk
On Tue, Nov 17, 2020 at 10:34:10AM +0100, mazzinia--- via cctalk wrote:
> Interesting read,

> What is your opinion of the Seagate exos 7e8 units ? (and does SED make any
> difference in ensuring a bit more quality of the platters)

I've not used them, but Exos disks ought to be just fine. You don't stay in
business by selling junk to enterprise customers.

The Exos 2X14 is a fascinating device: its dual actuators give it twice the
throughput and IOPS. It's like getting two 7TB disks in a single 3.5" slot over
a single SATA/SAS lane, which is good for storage density. I suspect that also
makes it twice as unreliable, and I can't find them for sale anywhere.



Re: The best hard drives??

2020-11-20 Thread Peter Corlett via cctalk
On Tue, Nov 17, 2020 at 09:36:00AM -0500, Bill Gunshannon via cctalk wrote:
[...]
>>> It also turns out that £1 ≈ €1 ≈ $1.
> Close, but no cigar. I just bought something from Europe 3 days ago.

This rule of thumb only applies to stuff imported from the USA to Europe, or
from anywhere to the UK. It also only applies to prices quoted to consumers.

> Exchange rate: $ 1 USD = € 0.8111 EUR

Since the actual rate has been about €0.845 for a few months now, I guess your
bank charges 4% over the mid-market rate. That seems a bit high.

However, as you surely know, the EU applies tariffs on imports, and individual
countries also charge consumers VAT. Tariffs tend to be fairly nominal or zero
unless there's a trade war going on, whereas VAT varies between 15% and 25%.

So if I import a $100 widget, even if it has a zero tariff, I still get to pay
21% Dutch VAT which brings it to $121, or €102. Therefore $1 ≈ €1.

That same $100 widget imported into the UK becomes $120 due to the 20% VAT
rate, which comes to £90, but the UK has its own self-inflicted problems which
cause importers loads of extra costs and £100 is easily believable.

If I take off my consumer hat and put on my businessman hat, I can import stuff
without paying VAT and then it is just the $100, €84.50, or £75 suggested by
the exchange rate.



Re: The best hard drives??

2020-11-20 Thread Peter Corlett via cctalk
On Tue, Nov 17, 2020 at 02:54:27PM +0100, Liam Proven via cctalk wrote:
> Peter Corlett via cctalk  wrote:
>> Five MyBooks bought 18 months ago had debranded He8 disks in there: very 
>> nice.
>> The three Elements a few months back have (non-SMR) WD Reds in them, which is
>> OK. Three more are supposedly turning up tomorrow.
> Oh blast, I wish I had known then...

I've now had time to spin up the server and query all six of these Elements.
They are model number WD80EDAZ-11TA3A0, i.e. 7200RPM air-filled Reds. Since
you'd normally have to get a Red Pro to guarantee that spec, and those are
currently around €260, I'm obviously quite chuffed that I got these for
€115-€135 a pop even if they're not the holy grail of He8s.

I'm getting 1.03GB/s (or 986MiB/s if you have 10.07 fingers) across the six
spindles from copying some random data onto the zpool and then doing a scrub:

  pool: test
 state: ONLINE
  scan: scrub in progress since Fri Nov 20 22:43:28 2020
121G scanned at 1.92G/s, 60.7G issued at 986M/s, 121G total
0B repaired, 50.23% done, 0 days 00:01:02 to go

That's faster than the SSDs used for the boot volume.

>> It also turns out that £1 ≈ €1 ≈ $1.
> Indeed so. Sadly, most Merkins don't know this and wail about not
> understanding Weird Forrin Money.

What's further distorting prices on some things I import from Germany is that
they have a temporary VAT reduction from 19% to 16% due to the plague, but
unsurprisingly this made no difference to the VAT-inclusive prices offered to
consumers. However, the Dutch VAT rate remained at 21%, and since I have to pay
the difference when importing, what's actually happened is that Germany has
caused a 3% price increase.

In some cases it's actually 5% because some German sellers used to happily
charge the same VAT-inclusive prices to Dutch and German consumers and eat the
2% difference, but 5% is just too much for them to absorb so they charge the
"proper" price.

Since the Czech Republic also has 21% VAT, do you have this experience too?



Re: Regional accents and dialects (Was: The best hard drives??

2020-11-19 Thread Peter Corlett via cctalk
On Wed, Nov 18, 2020 at 12:20:36PM -0800, Fred Cisin via cctalk wrote:
[...]
>> But yesterday, I discovered that the 'L' in words such as "palm", "balm" and
>> "psalm" is _no longer_ silent and is actively pronounced in some regions of
>> the US, and mere surprise was no longer adequate and I was forced to resort
>> to astonishment.

They're soft but not silent in my accent. But you're from the northwest and all
bets are off when it comes to how the pie-eaters speak. Presumably at least the
"P" in "psalm" is silent, because that really does sound weird if not.

> Nobody around here will use Worcestershire sauce, because they are afraid to
> even try to pronounce it.

Call it "Lea and Perrins" like the rest of us, except in Sheffield where it's
called "Henderson's" for reasons that Yorkshiremen will readily expand upon in
depth, regardless of whether or not you actually wanted to know.



Re: The best hard drives??

2020-11-17 Thread Peter Corlett via cctalk
On Tue, Nov 17, 2020 at 12:37:23AM -0500, Ethan O'Toole via cctalk wrote:
[...]
> HGST. 4TB seem really good.

I have a half-dozen of those in raidz2 on my workstation and can confirm. HGST
disks are good enough that WD bought them, declared them to be so good that
they are clearly Enterprise drives, and doubled the price overnight. Which is
why I stopped at six.

The current wheeze is to "shuck" (remove the internal disk from) externals such
as the WD Elements and MyBook. They'll contain the worst of whatever happens to
be in stock at WD and the units up to 6TB are therefore to be avoided -- in
particular, you will get atrocious DM-SMR disks or other consumer-grade junk --
but 8TB and above will get you something decent.

The downside is that they nobble the firmware slightly to have aggressive
powerdown and other tweaks to suit the intended use cases of external USB
disks. The upside is that the hardware is the same. Well, another downside is
that you have to spend a few minutes carefully cracking open the cases without
breaking the tabs so that they can be reassembled in case the disks need to be
RMAd. We've all got spudgers, right?

(You can also shuck Seagates, of course, but then you end up with a terrible
Seagate. Lacie and Intenso externals will also contain nasty Seagate disks.
Good Seagates exist, but are expensive enough that you might as well get SAS
disks and be done with it. I'm still running a (dwindling) fleet of shucked 2TB
Seagates from a decade ago when they didn't yet suck.)

Five MyBooks bought 18 months ago had debranded He8 disks in there: very nice.
The three Elements a few months back have (non-SMR) WD Reds in them, which is
OK. Three more are supposedly turning up tomorrow.

I'm generally getting 8TB disks for €120-140 each from either amazon.de or
amazon.nl. Sometimes the best prices only appear when they're on backorder and
then they randomly turn up a month or two later after I've forgotten I've
ordered them, but that's fine for my needs. It beats paying €300 full retail
for the same disk just so I can have it sooner.

The shucking landscape does shift over time as shown by me getting "only" Reds
in the last batch instead of He8s previously. If you need a disk in several
years time you should do a bit of research and double-check before taking this
advice lest WD have started doing a DM-SMR line of 8TB disks specially for
these enclosures.

It also turns out that £1 ≈ €1 ≈ $1.



Re: non-shunting jumpers?

2020-10-23 Thread Peter Corlett via cctalk
On Thu, Oct 22, 2020 at 11:30:40AM -0700, brian--- via cctalk wrote:
> Oddball question here: has anyone ever seen a way to cap off or protect
> standard 0.1" pin header jumpers?

I prefer to not leave boards in places where stuff like pin headers can be
damaged. And really, pin headers are the least of your worries: they're easy to
replace and dirt cheap, unlike some of the more delicate and exotic components.

[...]
> Any ideas would be appreciated!

You can use what is colloquially known as a "DuPont connector", which is
designed to mate with pin headers. They're more typically used to run wires to
pin headers -- for example the front panel lights and buttons on a PC -- but
you can skip the wires when you only need a mechanical rather than electrical
connection.

A starter kit with the special crimp tool, header shells, and male and female
pins to fit in the shells costs around €30 from Amazon. Don't do what I did and
try and cut corners by just buying a cheap box of shells and pins and assuming
you can wing it with a regular crimp tool or pliers, because that just results
in a knackered connector and another Amazon order.

The kit will come with a variety of header shells and not just the 1x2 of
jumpers. Given that pin headers are quite gregarious, you'll find the larger
shells useful for capping and/or configuring multiple jumpers in parallel.



Re: Digitizing video frame for printing

2020-09-28 Thread Peter Corlett via cctalk
On Mon, Sep 28, 2020 at 03:12:50PM +0200, Lawrence Wilkinson via cctalk wrote:
> Sorry I accidentally deleted this message from Dag Spicer, so here it is
> for cctalk. Reply to him or the list, not me!

[I'm not going to attempt to clean-up the top-quoted mess; check your archive
if you can't remember what it said.]

I don't know anything about the unit described, but we can make an educated
guess based on the known facts.

A quick search tells me that the printer has 132 columns and can do 330 cps.
Assuming square characters, 99 rows are needed to maintain the 4:3 aspect ratio
of the video image. 99 is convenient for neither computers nor NTSC, so some
nearby round figure is indicated. I'll arbitrarily pick 120, i.e. every other
scanline, since this is a reasonable upper bound.

Taking 132 samples of a video line requires a sample rate of roughly 2MHz. I'm
not sure of the state of the art in 1976 but that feels achievable. 26
greyscale levels only needs a 5-bit ADC, which also sounds doable. 132x120 is
15,840, which is close to 16,384, and given the 4116 was launched in 1975, five
of those would be perfect. This has a certain elegance and given the "instantly
freeze" claim, my money's on this design.

None of the dimensions are convenient powers of two, nor small integer
multiples thereof, so the actual page size and greyscale depth would be tweaked
to make the digital logic simpler. 128x80 (every third scanline) or 120x96
(every second scanline plus a bit of cropping) feel most likely, and perhaps a
depth reduction to 4 bits since who is going to check if it's really 26 levels
rather than merely assume so because it's made out of letters? If I'm
overestimating the abilities of mid-70s digital electronics, halve the
horizontal figures: digitise at 1MHz and print 64 columns (in 80 column mode).
It'll still impress the great unwashed. One may as well make it 64 rows as well
so it fits in cheaper 4K DRAMs.

A tape loop could be sampled a row at a time at the convenience of the digital
hardware. It still has to sample with 500ns precision, but not every 500ns.
Again, my calculations suggest six samples per line is sufficient to feed the
printer at 330cps. However, while this saves on high-speed digital components,
it adds a complex and unreliable analogue device which might not take kindly to
the hostile environment it's placed in, so I doubt it.

Another alternative is the wheeze done with cheap video digitisers on the 8-bit
micros: one sample per scanline, slow-scanning horizontally, and the subject is
told to sit still for the required duration. The one I saw was PAL and output
to a BBC Micro with its 160x256 mode, so it'd need to sample over 160 fields,
or 3.2 seconds. That's not exactly "instantly" but might be close enough to
fool enough people, especially when compared to a traditional photo booth.



Re: Exploring early GUIs

2020-09-22 Thread Peter Corlett via cctalk
On Mon, Sep 21, 2020 at 11:29:14PM -0500, Richard Pope via cctalk wrote:
> The Amiga 1000 with AmigaDos and Workbench was released in late 1985.
> AmigaDos is based on Unix and Workbench is based on X-windows.

Er, no.

The Amiga's operating system is a pre-emptive multitasking microkernel which
uses asynchronous message-passing between subsystems, which is not the Unix way
of doing things at all. Unix provides libraries of synchronous procedure calls
which block the caller until the job is done.

Although "AmigaDOS" appears prominently in the terminal as one boots Workbench,
that's only the filesystem and command-line shell. Due to time pressure, they
bought in TRIPOS and filed off the serial number. TRIPOS is a fairly clunky
thing written in BCPL that never sat well with the rest of the system, but it
was quite probably the only DOS they could buy in which worked in a concurrent
environment. TRIPOS is the reason why disks were slow on the Amiga.

The other bit that got reduced from a grander vision was the graphics, which
became blocking libraries rather than device drivers. The window manager ran as
its own thread which gave the illusion of responsiveness.

The "X Window System" (not X-windows or other misnomers) is an ordinary[1] Unix
process which provides low-level primitives for a windowing system. "Workbench"
is just an ordinary AmigaDOS process which provides a file manager. You can
even quit it to save memory, and the rest of the GUI still works. They are not
the same thing or "based" on each other at all.


[1] Well, some implementations are setuid root or have similar elevated
privileges so they can have unfettered access to the bare metal, and are thus
tantamount to being part of the kernel, but that's basically corner-cutting
by a bunch of cowboys and it is possible to do this sort of thing properly
without introducing a massive security hole.



Re: Looking for an IDE simulator

2020-08-30 Thread Peter Corlett via cctalk
On Sun, Aug 30, 2020 at 11:02:50AM -0500, Jules Richardson via cctalk wrote:
[...]
> I found it next to impossible to find information on what - if any -
> technology a particular SSD uses to extend lifespan; while manufacturers all
> compete on things like capacity and speed, very few of them seem interested
> in telling us how long their product might last.

The warranty duration is a good starting point. If it's the absolute legal
minimum (i.e. two years in the EU) then that tells you all you need to know.



Re: Herbert Schildt C code from books

2020-08-27 Thread Peter Corlett via cctalk
On Thu, Aug 27, 2020 at 07:11:05AM +, Randy Dawson via cctalk wrote:
[...]
> I hope you say no, because I will probably learn more by keying in the code
> in the text, and finding my errors.

The errors in the code will not be yours. You will learn more by throwing
everything written by Herbert Schildt into the recycling and getting some
decent books written by somebody who isn't a Dunning-Kruger case. I occasionally
hear him referred to as "Herbert Shit". His books have negative value, except
perhaps as firelighters.

Here are a couple of critiques of some of his other works which do not pull
their punches:

http://www.lysator.liu.se/c/schildt.html
https://www.seebs.net/c/c_tcn4e.html



Re: Alto II keyset connector plug identification

2020-08-20 Thread Peter Corlett via cctalk
On Thu, Aug 20, 2020 at 04:38:49PM +1000, Steve Malikoff via cctalk wrote:
[...]
> I had a thought, that if the pin spacing was on par with say a common 15-pin
> VGA male connector I could buy a bunch of dirt cheap Golden Dragon ones, set
> them up in the mill and run a high speed slitting saw diagonally between the
> pins (right though the block and metal surround in one go), and just add a
> plastic spacer to bump it out to the length of the 19-pin. After all there
> are only 6 pins used, and of those, just one (assuming common) that would be
> on the extended bit.

Cutting-down a DA-26 might be a better bet, and they're only pennies more
expensive than the DA-15 VGA connector.



Re: Adventures online

2020-07-24 Thread Peter Corlett via cctalk
On Fri, Jul 24, 2020 at 12:59:51AM +, dwight via cctalk wrote:
> I would think to be a mainframe, it has to have a I/O processor. That is
> about all I can think of.

Contemporary PCs satisfy that description: GPUs are the most visible I/O
processor, and all of the other bus interfaces such as SATA and USB need at
least a microcontroller to speak the relevant protocol.

What is old is new again...



Re: Compaq Smart Array 3200 Controller as a SCSI Controller

2020-07-17 Thread Peter Corlett via cctalk
On Wed, Jul 15, 2020 at 01:09:21PM -0700, Ali via cctalk wrote:
[Hardware RAID controllers]
>> There is no good use case for them in 2020, which is why they're all
>> suddenly quite cheap.
> Why do you say that? Not disagreeing per se but just wondering the reasoning
> behind it.

On the "no good use case" front:

I avoid hardware RAID controllers for a variety of reasons, which mostly boil
down to the use of proprietary firmware and naïve RAID implementations. These
also apply to many software RAID implementations which blindly copied them.

The biggie is that proprietary RAID means proprietary on-disk formats. If the
controller fails, you need to find a replacement which understands the old
on-disk format. Good luck with that. Related is the generally shoddy nature of
firmware, and it's usually hard-to-impossible to e.g. query the SMART status of
individual disks.

The next-biggie is the RAID Write Hole. A traditional RAID implementation will
rewrite data you might consider to be at rest because it shares a stripe with
newly-written data, and on failure can corrupt said at-rest data. This is a
fundamental problem which hardware RAID controllers try to mitigate by having a
battery backup unit to deal with power failures, and can (potentially) also
work independently of an OS which crashed mid-write, but it doesn't really
solve it. What if your power stays out longer than the battery lasts?

Software RAID which implements traditional RAID cannot even apply this
mitigation and this is one of the reasons it has a bad reputation. The obvious
solution is to not implement traditional RAID, which is where ZFS and similar
copy-on-write journalled filesystem-cum-volume-managers come in.

The last bastion of hardware RAID controllers was if one was using a toy
operating system such as Windows where the software RAID options were woeful or
nonexistent, but it now has Storage Spaces.

On the "suddenly quite cheap" front: Plain SAS controllers based on e.g. the
LSI 9207 or 9211 are north of €100, whereas MegaRAID controllers based on the
LSI 9260 have plummeted to €30. The former either supports pass-through mode
out of the box or after reflashing with "IT" firmware, but the latter does not.

ZFS does work atop RAID (of either flavour), but is more robust if it can
manage the raw disks directly. A workaround with hardware RAID cards which
won't do pass-through is to configure them with single-disk RAID0 volumes, but
this is somewhat untidy and still has the problem of proprietary on-disk
formats and general inscrutability.



Re: Compaq Smart Array 3200 Controller as a SCSI Controller

2020-07-16 Thread Peter Corlett via cctalk
On Thu, Jul 16, 2020 at 08:52:16AM -0700, Ali via cctalk wrote:
[...]
> This is an article (for the layman) written in 2010 predicting the lack of
> usability of RAID 6 by 2019:
> www.zdnet.com/article/why-raid-6-stops-working-in-2019/. I found the math in
> it interesting and the conclusions pretty true to my experience.

The author screwed up his maths and also made faulty assumptions.

The article states that "SATA drives are commonly specified with an
unrecoverable read error rate (URE) of 10^14. Which means that once every
200,000,000 sectors, the disk will not be able to read a sector." and then "2
hundred million sectors is about 12 terabytes." It seems he is using a sector
size of 64kiB. Standard SATA disks have 4kiB sectors.

"At that point the RAID reconstruction stops". Maybe on his garbage hardware
RAID controller with 64kiB stripes which chokes on a single-bit error in a
stripe because it's too dumb to figure out which disk is lying. ZFS is somewhat
smarter than that.

> I am wondering if SW RAID is faster in rebuild times by now (using the full
> power of the multi-core processors) vs. a dedicated HW controller (even one
> with dual cores).

Not only is software RAID faster now, but this has been the case for at least
15 years. The necessary calculations are trivially vectorisable and are usually
limited by memory bandwidth. Which is several orders of magnitude faster than a
hard disk.



Re: Compaq Smart Array 3200 Controller as a SCSI Controller

2020-07-15 Thread Peter Corlett via cctalk
On Tue, Jul 14, 2020 at 10:47:11AM -0700, Ali via cctalk wrote:
[...]
> Is there any reason a Smart Array controller can't be used as a simple SCSI
> controller? I.E. No array, just using it to drive a tape library? TIA!

In general, hardware RAID controllers cannot be used as ordinary controllers.
There is no good use case for them in 2020, which is why they're all suddenly
quite cheap. Much cheaper in fact than non-RAID controllers, IME, which irks me
as I'm in the market for a plain SAS HBA for use with ZFS.



Re: About to dump a bunch of Compaq SCSI disk caddies (and disks)

2020-07-09 Thread Peter Corlett via cctalk
On Thu, Jul 09, 2020 at 01:24:11PM +0200, Stefan Skoglund via cctalk wrote:
[...]
> Vikt (tittade pga frågan om diskarna på vad frakten från Nederländerna skulle
> kosta dvs ca 250 SEK) ?

According to Google Translate: "how much to Sweden?"

For Sweden specifically, about €10 or SEK100. For Europe in general, €10-12 for
regular post and another €2-5 for tracked and/or insured delivery. The latter
is advisable given how shambolic international post is right now.



Re: About to dump a bunch of Compaq SCSI disk caddies (and disks)

2020-07-08 Thread Peter Corlett via cctalk
On Mon, Jul 06, 2020 at 03:54:10PM -0600, Grant Taylor via cctalk wrote:
[...]
> If I needed one of those drives, I'd be willing to pay $1 / GB plus shipping
> and handling if they were known to be good. (If I needed them) I would buy
> them sight unseen if you ran SpinRite level 2 on the drives and said they
> passed.

As a guideline, PostNL quote €18.20 (≈USD20.50) for its single grade of packet
post to "world". The 2kg weight limit is good for two or three typical 3.5"
hard disks of around 700g each. It's cheaper to the rest of Europe (and UK) but
the 2kg limit remains for anywhere outside the Netherlands. For more than 2kg,
you're looking at expensive couriers and the price to the USA quickly rockets.

I'm up the road in Zaandam and could collect and test these disks, but I
already have more (3) 9GB disks than I need (0) and don't want any more if they
don't already have homes allocated. I'm not sure I could face dealing with
eBay/Marktplaats/etc timewasters to dispose of them responsibly, even if I were
making a nominal €10 each on them.



Re: On: raising the semantic level of a program

2020-06-28 Thread Peter Corlett via cctalk
On Sun, Jun 28, 2020 at 01:32:02PM -0700, Chuck Guzis via cctalk wrote:
[...]
> Why is byte-granularity in addressing a necessity?

Because C's strings are broken by design and require one to be able to form a
pointer to individual characters.

> It's only an issue if you have instructions that operate directly on byte
> quantities in memory.

One wheeze is to just declare that bytes are the same size as a machine word. C
doesn't require char to be exactly 8 bits, but merely at least 8 bits. However,
a lot of C code will break if char, short, int and long aren't exactly the same
size as they are on x86. Mind you, a lot of it is still broken even if they
are...



Re: On: raising the semantic level of a program

2020-06-28 Thread Peter Corlett via cctalk
On Sat, Jun 27, 2020 at 07:15:25PM -0600, ben via cctalk wrote:
[...]
> At what point do variable names end being comments? There needs to be more
> work on proper documenting and writing programs and modules.

What, auto-generated "documentation" which just lists function names and type
signatures is not useful? This is news to pretty much every Java project I've
had the misfortune of interacting with.

> I am not a fan of objects and operator overloading because I never know just
> what the program is doing. apples + oranges gives me what ? count of fruits,
> liters of fruit punch, a error?

That does of course depend on the strictness of the language's type system and
whether the developer has exercised good taste and discretion when using
operator overloading in their API. I would normally expect the compiler to
reject attempts to add two incompatible types, but this is often a triumph of
hope over experience. (But avoid PHP, JavaScript, and similar junk languages
hacked together in a Coke-fuelled bender to solve the immediate problem, and
you're 90% of the way there.)

> It would be nice if one could define a new language for problem solving and
> run it through compiler-compiler processor for interesting problems.

I'm unclear on what you're trying to say here.

Source-to-source translators are of course a well-trodden path, such as early
C++ "compilers" which emitted C. A weaker variant is to abuse operator
overloading to create a minilanguage that is directly compilable without
translation. Such corner-cutting techniques are useful for prototyping new
ideas, but tend to cause more trouble than they are worth if used as-is in
production.

My day job currently involves PureScript, a Haskell-inspired language which is
translated into JavaScript. It is quite an experience.



Re: Future of cctalk/cctech

2020-06-19 Thread Peter Corlett via cctalk
On Thu, Jun 18, 2020 at 10:28:05PM -0400, Tony Aiuto via cctalk wrote:
[...]
> And sometimes, a picture really is worth 1000 words.

But pictures also consume magnitudes-of-order more resources than a thousand
words, and should be used rather more judiciously than they are.

> A tiny SVG diagram in the middle of a description can do wonders. Did your
> physics textbook pull all the diagrams out to an appendix, just leaving a
> reference in the text? No it didn't. That would have been inconvenient and
> unnecessary. Except for those who choose otherwise, we all have the
> capability to view mail that presents like any other printed matter.

My physics textbooks had editors who ensured that the text made sense and the
images were useful to the reader. I'm sorry if you went to a bad school where
your physics textbooks were similar to the vast majority of email.

> It's time to adopt a platform that can handle modern mail. Some may still
> choose a degraded experience, but everyone is entitled to their own fetish.

Any old mail client can read "modern mail": MIME is designed to be
backwards-compatible and the text parts readable on non-MIME clients. One
quickly learns the ASCII renderings of important non-ASCII characters after
using such a client for a while. (How do I know this? I still use trn, which
doesn't understand character sets at all. There are *no* "modern" newsreaders,
apart from the occasional kitchen-sink monstrosity which does nothing well.)

The "no attachments" rule on many mailing lists is not a Luddite thing, but a
quality filter. There is a strong inverse correlation between those who feel
that they can't communicate without images and fancy text formatting, and those
who have something useful or interesting to say. Less is more, and all that.

Images and HTML formatting also present an accessibility problem. At least one
of the posters to this list gives a few "tells" in the way they write which
suggest they are blind. Good luck doing text-to-speech on a JPEG.



Re: Synchronous serial Re: E-Mail Formats RE: Future of cctalk/cctech

2020-06-19 Thread Peter Corlett via cctalk
On Fri, Jun 19, 2020 at 12:21:14AM +0100, Pete Turnbull via cctalk wrote:
> [...] Some of the UK banking systems like HOBS survived using viewdata that
> way up to the end of the 1990s, and I still have at least a couple of 1275
> modems.

Hobbyists are still running Viewdata BBSes. Here's one connected to the
Internet and provided with a JavaScript client so you can log in and have a
poke around: http://fish.ccl4.org/java/.

Offering access to one's BBS via TCP/IP isn't really optional any more now that
many of us no longer have suitable analogue POTS lines to plug our old modems
into, what with a mobile being a better choice for most purposes. I think
(HS)CSD might have carried over from GSM into 3G, and it's even possible that
my tinpot telco would connect such a call, but the odds that I could convince
my mobile to make the call are pretty much zero. How do you enter AT commands on
an iPhone anyway?

Also, I resent paying per minute for low-bandwidth phone calls when I've got
unmetered VDSL.

I would write Viewdata clients in the nostalgia wave of the late 1990s and
early 2000s, as it was also a nice easy introduction into a new platform's
graphics and I/O subsystems. Maybe I'll do one in WebAssembly for old time's
sake.

> The idea was to use 1200 for the transmission from central computer to
> consumer, and the back channel for user responses/commands. Not many people
> type faster than 7.5cps.

That's 75WPM with the usual rule of thumb of six characters per word. I can
copy-type at about 75-85WPM, which would interact badly with a small FIFO on a
very basic terminal, what with that being an average and some words being
typed at a faster rate. Fortunately, I've never suffered a Viewdata terminal that
awful: the BBC Micro backed its 6850 UART and its 1-byte FIFO with a luxurious
192-byte software FIFO, for example. Having to stop for a sip of tea while the
buffer drains isn't so terrible.
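The conversion is simple enough to spell out; a sketch assuming the post's six-characters-per-word rule of thumb:

```python
def cps_to_wpm(cps: float, chars_per_word: int = 6) -> float:
    # Characters per second times sixty seconds, divided by the usual
    # six-characters-per-word rule of thumb, gives words per minute.
    return cps * 60 / chars_per_word

cps_to_wpm(7.5)  # 75.0: a 75 baud back channel keeps pace with ~75WPM typing
```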

Normally one would compose longer bits of text offline, of course, so that BT
would get the smallest pound of flesh possible. Definitely a company with the
"never mind the quality, feel the price" mentality, but that's all telcos for
you.



Re: E-Mail Formats RE: Future of cctalk/cctech

2020-06-18 Thread Peter Corlett via cctalk
On Thu, Jun 18, 2020 at 09:42:16AM +0100, Dave Wade via cctalk wrote:
[...]
> I wrote this as one dollar => $1.00
> This as one pound => $1
> And this as one euro => €1
> Lastly one cent => ¢1

This came over the wire as follows:

> Content-Type: text/plain; charset="utf-8"
> Content-Transfer-Encoding: quoted-printable
[...]
> I wrote this as one dollar =3D> $1.00
> This as one pound =3D> $1
> And this as one euro =3D> =E2=82=AC1
> Lastly one cent =3D> =C2=A21

IOW, it has the correct headers to unambiguously decode the text. Whether the
receiving software is competent enough to handle Q-P UTF-8 text is something
else entirely, especially if it's in an obscure or recently-added script or
symbol set where suitable fonts don't exist or haven't been installed, but your
example doesn't contain any difficult code points.

The correct Q-P UTF-8 encoding for "£" is "=C2=A3". (I've dealt with so much
broken software that I know this without looking it up.) It seems likely that
it got mangled by something at your end in whatever converts it to "modern"
(1993) email format.
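For anyone who wants to check such mail by hand, Python's standard quopri module undoes the "=XX" escapes; decoding the examples quoted above:

```python
import quopri

# Quoted-printable is just "=XX" hex escapes over the UTF-8 bytes,
# which any MIME-aware client undoes to recover the original text.
euro = quopri.decodestring(b"=E2=82=AC1").decode("utf-8")  # "€1"
cent = quopri.decodestring(b"=C2=A21").decode("utf-8")     # "¢1"
pound = quopri.decodestring(b"=C2=A3").decode("utf-8")     # "£"
```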



Re: Attachments

2020-06-18 Thread Peter Corlett via cctalk
On Wed, Jun 17, 2020 at 10:50:20PM +0100, Rob Jarratt via cctalk wrote:
[...]
> Easy, pictures of unidentified components, sending out schematics that have
> been reverse engineered, documentation, pictures of scope traces when trying
> to find a fault, all sorts. I would agree on a size limit though.

The kind of size limit required to keep attachments small enough to not annoy
people who are not interested in them would be too low for this purpose. The
annoyance increases further when people with broken email clients (or who just
never bothered to learn their tools) include senders' attachments in their
replies.

A typical digicam or scanner produces multi-megabyte files. Reducing them in
size to fit within e.g. a 1MB limit would cause the sender as much
inconvenience as uploading it somewhere and posting a link, while also reducing
the quality and utility for those who are interested.

I also note an inverse relationship between the size of an email and the
quality of its contents.

Further, an orders-of-magnitude explosion in the resources used by this list
would reduce the number of people willing to host it. My shell server which I
use for mail is perhaps typical: it has a 20TB/month transfer cap which is
effectively infinite, but its 20GB disk would eventually be consumed by all of
those attachments, kept forever in the list archives that people also want.



Re: Amiga Vendors?

2020-06-18 Thread Peter Corlett via cctalk
On Wed, Jun 17, 2020 at 05:39:44PM -0400, Ethan O'Toole via cctalk wrote:
[...]
> The early plasma TVs usually had BNC RGBHV inputs and such. They could take
> VGA in very easily. I'm pretty sure a PC would have been way easier to deal
> with and could reach much higher resolutions... without needing a DB-23
> connector :-)

Everything had RGB on this side of the Pond. There was a protectionist decree
that all TVs sold in France would have a SCART socket, but of course this just
meant that pan-European models sprouted SCART sockets and the French TV
industry was back to square one. Old standards never die, and the TV I bought
in 2018 has a SCART socket and would quite probably decode SECAM but I have no
SECAM sources to test it (and they'll even be rare in France these days).

DB-23 to SCART cables were (and still are) readily-available from anywhere that
has anything to do with the Amiga. Sometimes they had sawn-down DB-25 plugs
since DB-23 wasn't exactly a common connector even in the Amiga's heyday.



Re: history is hard

2020-05-27 Thread Peter Corlett via cctalk
On Tue, May 26, 2020 at 05:04:10PM -0700, Fred Cisin via cctalk wrote:
[...]
> also, the Amiga wrote track rather than sector at a time, so a sector write
> needed to be delayed until the track was ready to be written

And could therefore corrupt ten unrelated sectors from other files at the same
time. When it popped up "You MUST replace volume Empty in DF0:", it was not
messing about.

[...]
> computer/OS control of disk eject and power is what's needed to solve it.
> Either hardware locks, or very thorough (difficult) eductaion of users. If
> the user ASKS THE OS to eject the disk, then it can easily be delayed until
> safe to do it. Similar with power shutdown (which users are now familiar
> with)

There are SCSI commands for locking drives and performing eject and
contemporary operating systems do seem to use them. You'll mostly observe this
when using optical media because that's the only non-obsolete hardware left
which still supports them, and most of the time they're used for read-only
media anyway so it's somewhat moot.
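The lock command in question is PREVENT ALLOW MEDIUM REMOVAL, a six-byte CDB with opcode 0x1E and the prevent bit in byte 4. A minimal sketch of building that CDB (actually sending it needs an sg/passthrough interface, omitted here):

```python
def prevent_allow_cdb(prevent: bool) -> bytes:
    # PREVENT ALLOW MEDIUM REMOVAL (opcode 0x1E), per the SCSI spec:
    # byte 4 bit 0 set = lock the door, clear = allow removal again.
    return bytes([0x1E, 0, 0, 0, 0x01 if prevent else 0x00, 0])
```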

I would be most intrigued to see what a hardware lock and soft-eject for a USB
key would look like.

> In addition, the performance improvement that SMARTDRV did of optimizing the
> sequence of multiple writes out of sequence (all directory sectors, THEN all
> disk sectors) was dangerous if there was an interruption (not necessarily
> just user) before it was finished.

Fortunately, there now exist robust filesystems which ensure that partial
writes are not visible and that only the last few seconds of uncommitted data
still in the write queue is lost. Unfortunately, these tend not to be used much
because they're "slow"[0] and/or because it's on removable media formatted with
a joke filesystem because of Windows.


[0] For anybody who values throughput over durability, may I recommend
/dev/null for the ultimate in performance?
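The ordering trick those robust filesystems rely on (commit fully, then publish) can be sketched in userspace with an atomic replace. This is not a journal, just the same crash-safety idea in miniature:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    # Write to a temp file in the same directory, flush to disk, then
    # rename over the target: os.replace() is atomic on POSIX, so a reader
    # (or a crash) sees either the old contents or the new, never a torn write.
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
    os.replace(tmp, path)
```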



Re: Microsoft open sources GWBASIC

2020-05-27 Thread Peter Corlett via cctalk
On Tue, May 26, 2020 at 02:19:41PM -0700, Yeechang Lee via cctalk wrote:
[...]
> Longstanding tradition in the British computers market.

> "*New Scientist* stated in 1977 that 'the price of an American kit in dollars
> rapidly translates into the same figure in pounds sterling by the time it has
> reached the shores of Britain'."
> —

It's better now, though. Price differences can be explained by delivery costs,
import duties, and VAT/sales tax. And in the case of 1977, middlemen who
exploit the difficulty in importing stuff oneself.

The USA is some sort of gravity well when it comes to postage. It's cheap-ish
to send stuff to it, but unreasonably expensive to send stuff from there. So
for a product actually made in the USA, USPS, UPS, etc all conspire to ratchet
the price up. Now that this stuff is mostly made in China, postage is mostly
independent of destination.

(I observe a similar but smaller effect for stuff crossing the North Sea, which
is also where Royal Mail and PostNL apparently like to dump parcels
rather than hand over to their opposite number for delivery.)

Other than that, there is currently no EU import duty on computers. Countries
set their own VAT rates, which is generally around 20%. One difference here is
that the USA quotes prices exclusive of sales tax, whereas consumer prices are
quoted inclusive of VAT. So that's an apparent ~20% difference in sticker price
even for something that costs the same either side of the pond. B2B prices in
the EU are quoted exclusive of VAT ("ex-VAT") and are thus more comparable
like-for-like with USA prices.

UK VAT was 8% back in 1977, except for "petrol and some luxury goods" which was
12.5%. It's possible that computers were considered luxury goods, but since the
main purchasers back then would be businesses who effectively do not pay VAT,
this is moot. Businesses and consumers alike would still have to pay import
duties, which I suspect would have been quite formidable back then.

These days, the ex-VAT price of mass-produced tech goods and similar generic
non-perishables seem to be pretty much the same across the world.

For example Amazon ASIN B07FNK6QMT is €149.99 in Germany (inc 19% VAT; €126.04
ex-VAT), €152.51 in the Netherlands (inc 21% VAT; €126.04 ex-VAT again), and
£139.99 (=€156.61) in the UK (inc 20% VAT; €130.51 ex-VAT). The same product
with a different ASIN is $139.99 (=€126.94 before sales tax) from the USA.
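Backing the VAT out of those sticker prices is a single division; a sketch using the post's own figures:

```python
def ex_vat(price_inc: float, rate: float) -> float:
    # An inclusive price is the ex-VAT price times (1 + rate),
    # so divide it back out and round to whole cents.
    return round(price_inc / (1 + rate), 2)

ex_vat(149.99, 0.19)  # Germany: 126.04
ex_vat(152.51, 0.21)  # Netherlands: 126.04 again
```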

Oddly enough, I tend to import this sort of thing from Germany. I'll pass on
that particular Brexit Bonus.



Re: Microsoft open sources GWBASIC

2020-05-25 Thread Peter Corlett via cctalk
On Mon, May 25, 2020 at 04:17:25PM +0200, Liam Proven via cctalk wrote:
[...]
> So, yes, PETSCII lets you draw some stuff, but I was only about 12. It really
> wasn't enough to grab me for long, not for the price of a car.

If you prefer the price of your wheels to be around £205, there's this
just-released PET clone: https://www.thefuturewas8bit.com/mini-pet.html. It's
made using "proper" chips rather than an FPGA, and is a drop-in replacement for
the board in a dead PET.



Re: Microsoft open sources GWBASIC

2020-05-23 Thread Peter Corlett via cctalk
On Sat, May 23, 2020 at 01:24:01PM -0700, Fred Cisin via cctalk wrote:
> On Sat, 23 May 2020, Liam Proven via cctalk wrote:
[...]
>> • the Sinclair ZX Spectrum, which was cheaper & had a crappy keyboard,
> That was a keyboard??
> I thought that it was just a picture of a keyboard glued on, as a suggestion
> of a possible accessory to purchase.   :-)
> Besides, the bottom of the door scrapes it.

Sinclair's keyboards *were* glued-on :)

You perhaps forget that the UK was on the skids in the early 1980s and
working-class families had no chance of affording one of those fancy imported
American C64s. Around that time, my parents bought their house for £8,000. They
were hardly going to spend 5% of that on my Christmas present. I got a ZX81 and
would bloody well like it or lump it.

Uncle Clive had been making dubiously-cheap electronics using equal measures of
ingenious design and cutting one corner too many since the 1970s, so he was
well-placed to clean up in the more tight-fisted end of the UK computer market.

The posh kids got a BBC Micro because of a government push to put computers
into schools, which is why the C64 didn't really sell to the affluent either.
There were also some other weird British machines such as the Dragon 32 which
still seemed to be more common than the C64 yet barely merit a footnote in
history today.

The C64 was reasonably popular in (as-then) West Germany, because they still
had an economy unlike the UK; hence "Auf Wiedersehen, Pet". Further east, they
were doing cheap knock-offs of Sinclair machines because even those were far
too expensive when all you had were Ostmarks or worse.



Re: (V)HDL Toolsets

2020-05-21 Thread Peter Corlett via cctalk
On Thu, May 21, 2020 at 01:34:09PM +0200, Sytse van Slooten via cctalk wrote:
[...]
> So basically what it comes down to is Quartus or Vivado. I’ve kind of
> implicitly chosen Quartus, because the Altera based development boards tend
> to be a lot nicer and cheaper than the Xilinx based stuff. I haven’t even
> followed the upgrades from ISE to Vivado.

My understanding from when I was looking at FPGAs in ~2013 is that Xilinx make
better FPGAs than Altera (now Intel), but Altera's tools are better. Having had
the "joy" of using Altera's Quartus, I dread to think how terrible ISE must be.
From a cursory check, Vivado appears to be just a rebranded newer version of
ISE rather than a fundamental change.

Quartus puts me in mind of the dark days of the 1980s with its expensive,
closed-source, and generally shoddy software development environments before
GNU came along and wiped them out. Good riddance to the lot of them.

Even the HDLs themselves are stuck in the 1980s. Verilog is described as being
C-like, but that's not exactly a compliment. VHDL is Ada-like or Pascal-like,
i.e. designed by a committee and/or academics who have definite opinions about
how other people should write code, but don't do much of it themselves.

There are at least finally some open-source HDLs banging about which have
incorporated useful ideas from the last four decades of language design and
thus make it easier to create correct code. (Which is a crucial difference
from "easier to create something which runs", which is C/Verilog's schtick.)
Unfortunately, because of the lack of documentation on the FPGA bitstreams,
the best they can do is be a source-to-source translator piped into the
proprietary tooling.



Re: HPE OpenVMS Hobbyist license program is closing

2020-03-10 Thread Peter Corlett via cctalk
On Tue, Mar 10, 2020 at 10:07:45AM +1100, Doug Jackson via cctalk wrote:
> At the end of the day there are three paths.

> 1.  Accept that HP doesn't give two hoots about hobbyists and patch the
> abandoned operating system to fix the problem.

Welcome to the eyepatch-and-parrot approach of the rest of us on closed-source
platforms which never had a hobbyist programme to start with, and/or where the
IP has been scattered to the winds and it's unclear who to approach for a licence.

> 2. Declare that we need to develop an open replacement.

There is FreeVMS, but there also doesn't seem to have been any progress on it
in the last decade and its domain has been lost and taken over by a squatter.

Writing an operating system is *hard*, way beyond a weekend's hacking which is
how most open source projects get going. Cloning an existing one is doubly so
because it has to be bug-compatible.

Linux has taken thirty years to get this far. It's arguable what is "major",
but to a rough approximation, there are no major open source clones of other
operating systems of similar complexity: I'm aware of FreeDOS, AROS, EmuTOS and
a few others, but they're relatively simple.

> 3. Accept that HP actually owns the rights to our VAX 11/785 machines and
> arrange for them to be dropped off at their corporate headquarters because
> they can't do anything without software.

Will this "drop off" be by B-52? :)



Re: Looking for Extended Industry Standard Architecture Revision 3.10 Specification

2020-02-22 Thread Peter Corlett via cctalk
On Sat, Feb 22, 2020 at 12:47:44AM -0800, Ali via cctalk wrote:
> Does anyone have the "Extended Industry Standard Architecture Revision 3.10"
> specification either in printed/book form that they are willing to separate
> from or in some sort of electronic format ala PDF? I am mostly interested in
> the sections on the syntax for EISA CFG files. TIA!

I skim-searched a dodgy source known to contain weird and wonderful standards
documents and other tech references that are still under copyright so tend not
to be on the surface web. Sadly, the title you want didn't turn up, but "Eisa
System Architecture" (020140995X) did. It's not a standards document, but the
second half of chapter 9 discusses CFG files and gives an example including a
breakdown of what the fields mean, which might be enough to get you going.

The book is under seven bucks on Amazon if you're feeling honest and/or prefer
paper.



Re: Design flaw in the SCSI spec?

2020-01-08 Thread Peter Corlett via cctalk
On Wed, Jan 08, 2020 at 10:17:29AM -0800, Chuck Guzis via cctalk wrote:
> Before I go delving into my pile of SCSI X3T10 documentation and interface
> chip datasheets, exactly *which* flavor of SCSI are we talking about here?

Given the reference to the Amiga, almost certainly SCSI-1, i.e. 8-bit wide and
single-ended, clocked asynchronously at low single-digit MHz.

The A2091 (and thus presumably the A590) was a pretty hateful controller, but
the main sources of pain were its shoddy firmware and the limited Zorro-II bus
rather than the SCSI interface. Perhaps the third-party GVP controllers swapped
it around so the firmware was great but the SCSI side sucked.



Re: swtpc.com expired???

2019-11-07 Thread Peter Corlett via cctalk
On Thu, Nov 07, 2019 at 09:31:04AM -0200, Alexandre Souza via cctalk wrote:
> Last IP address of the server (71.91.242.107) also directs to a "it works"
> page, so the entire directory may have been deleted. I also tried to access
> subpages (like /Sinclair/Interface2/Interface/Interface2_Circuitry.htm) and
> got a 404.

That IP address is using name-based virtual hosting, and you can see the
content by sending a suitable Host: header and/or tweaking /etc/hosts.
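One way to see the content behind a name-based virtual host is to connect by bare IP but name the site in the Host: header; a minimal sketch (the IP and hostname are those from the post):

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    # With name-based virtual hosting, the server selects the site from
    # the Host: header, not from the address the client connected to.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n").encode("ascii")

def fetch_by_ip(ip: str, host: str, path: str = "/") -> bytes:
    with socket.create_connection((ip, 80), timeout=10) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# e.g. fetch_by_ip("71.91.242.107", "swtpc.com")
```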

> Seems everything is gone. Hope the archive.org backup is updated :(

I'm taking the liberty of mirroring it just in case. It appears to be on the
end of a bit of wet string, so it's entirely plausible that it is being moved
to a better hosting provider.



Re: Estate sale

2019-10-30 Thread Peter Corlett via cctalk
On Wed, Oct 30, 2019 at 02:21:07AM +, Mark Linimon via cctalk wrote:
> Any hints about where in the world this is?

Rule zero: if a location isn't given, it is almost certainly in the USA. Most
Americans think that the world ends at the US border, so this is a very safe
assumption.



Re: Anyone familiar with these vintage touchscreens?

2019-10-20 Thread Peter Corlett via cctalk
On Sat, Oct 19, 2019 at 02:23:46PM -0400, Nigel Johnson via cctalk wrote:
> Judging by the year, it was probably a teletext terminal. [...]

It's not Teletext, unless that word means something different on the other side
of the Pond. Teletext was basically a text system (the hint's in the name) with
graphics (and indeed colour) being a weird hack that gave it a particular
appearance, especially in typical implementations which used the SAA5050
character generator chip.

The palette and colour fringing suggest Apple II to me.



Re: TRS-80 Fireworks

2019-08-28 Thread Peter Corlett via cctalk
On Wed, Aug 28, 2019 at 07:07:21PM +1000, Guy Dunphy via cctalk wrote:
[...]
> RIFA caps may be the most hated components in electronics. Even worse than 
> dipped
> tantalums, popped electrolytics, and decaying urethane foam.

Amiga collectors would say "batteries", since Commodore selected a brand of
rechargable cells which leaked board-eating acid. This also affected the Acorn
Archimedes, but only posh kids had those.



Re: bit-slice and microcode discussion list

2019-08-23 Thread Peter Corlett via cctalk
On Thu, Aug 22, 2019 at 12:47:28PM -0500, Tom Uban via cctalk wrote:
[...]
> On a possible related note, I am looking for information on converting CISC
> instructions to VLIW RISC.

Do you mean the theoretical basis, or implementing it? And is this
ahead-of-time ("I want to run *this* binary"), or just-in-time ("I want to run
*any* binary, including self-modifying code")?

It's basically a compiler pipeline: deserialise the input code into an AST,
then serialise it into output code. It's just that the input code is actual
machine code rather than human-entered text.
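As a toy illustration of that decode-into-IR-then-serialise pipeline (every mnemonic and register name below is invented for illustration; real binary translators such as QEMU are vastly more involved):

```python
def decode(insn):
    # One invented "CISC" instruction with a memory destination expands
    # into a load/operate/store triple of IR steps; simple register ops
    # pass through as a single step.
    op, dst, src = insn
    if dst.startswith("mem["):
        addr = dst[4:-1]
        return [("LOAD", "t0", addr),
                (op, "t0", "t0", src),
                ("STORE", addr, "t0")]
    return [(op, dst, dst, src)]

def emit(ir):
    # Serialise the IR steps as lines of assembly for an imaginary RISC target.
    return "\n".join(" ".join(step) for step in ir)

ir = decode(("ADD", "mem[100]", "r1"))
```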

Various real-world implementations exist. QEMU, for example. VMWare also does
it for ring-0 code if the host lacks VT-x. UAE definitely does it, and possibly
so does MAME. As you can see, it's basically a solved problem as far as
computer science is concerned.

If you have a copy of the Dragon Book to hand, you may as well give it a
gander. The general concepts are timeless, but the actual nitty-gritty is only
useful if you are still living in the 1970s, so don't spend too much time in
the details of the algorithms because modern machines are so different that
many of the book's design assumptions are now invalidated. (I base this opinion
on my 1986 edition, although the TOC I've seen for the 2006 edition suggests
that it's been dragged kicking and screaming into the 1990s.)

There are *loads* of academic papers that you will have to wade through to
advance from the Dragon Book's description of a kinder era to modern compiler
design. Some of it remains an unsolved problem. You can see why the Dragon Book
handwaves over the hard bits.

To actually implement something that performs well and will actually be
finished before your new VLIW RISC hardware is obsolete, I recommend you look
at reusing existing compilers rather than implementing your own.

The daddy of backends is LLVM. Unless your VLIW RISC is already supported, you
get to learn how to implement an LLVM backend. It seems to be a common
undergraduate assignment to implement an LLVM backend for an arbitrary RISC CPU
(often MIPS) so you should be able to find myriad terrible implementations on
GitHub to draw inspiration from.

Another possibility is QEMU's TCG. I wasn't really aware of it until I did a
quick search when composing this response, but I like what I see and now want
to look much closer at it.

Once you've done that, you need to decompile your CISC code into your chosen
backend's IR. This involves a lot of tedious gruntwork, but is otherwise not
that difficult.

Have fun!



Re: Shipping from Europe to USA

2019-08-22 Thread Peter Corlett via cctalk
On Thu, Aug 22, 2019 at 06:30:10PM +, Henk Gooijen via cctalk wrote:
> A few weeks ago I shipped approx 39 kilos from The Netherlands to USA (HP
> A990). At least in Holland, most shippers do not accept such heavy stuff (max
> 30 kilos).

Yeah, well, "dat kan niet" ("that can't be done") *is* the Dutch motto. I'm
surprised it's not on the passport.

> Only UPS did … and yes, the “horror” stories *are* true. They managed to drop
> the package. Not from 4 inches above ground, but more, because a *steel
> corner* had a dent!

Hence that old joke: "If being air dropped out of a C-130 into a minefield
constitutes 'moderately rough handling', what constitutes 'very rough
handling'?" "Being shipped UPS".


