Re: bit-slice and microcode discussion list

2019-08-23 Thread Jon Elson via cctalk

On 08/23/2019 12:47 PM, Noel Chiappa via cctalk wrote:

 > From: Jon Elson

 >> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:

 >> On a possible related note, I am looking for information on converting
 >> CISC instructions to VLIW RISC.

 > I think it might end up looking a bit like the optimizers that were
 > used on drum memory computers back in the dark ages.

I dunno; those were all about picking _addresses_ for instructions, such
that the next instruction was coming up to the heads as the last one
completed.


Right, but the idea is to schedule memory reads way in 
advance of when the datum is required for a calculation.  
So, the load from memory to register is moved way up in the 
program, and the use of the register is much later to allow 
for the memory latency.  Yes, it is not exactly like drum 
memory computers, but you are still scheduling things for 
when they can be done without causing a stall.
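
Jon's point can be sketched with a toy pipeline model. Everything below is invented for illustration (a made-up 4-cycle load latency, made-up instruction tuples); it just shows why hoisting the load and filling the gap with independent work removes the stall:

```python
# Toy model of load scheduling. All latencies are invented for illustration.
LOAD_LATENCY = 4  # cycles before a loaded value is usable

def stalls(program):
    """Count stall cycles for a list of (op, dest, srcs) issued in order.

    One instruction issues per cycle; an instruction stalls until every
    register it reads is available.
    """
    ready = {}  # register -> cycle its value becomes available
    cycle = 0
    total = 0
    for op, dest, srcs in program:
        earliest = max([ready.get(r, cycle) for r in srcs], default=cycle)
        total += max(0, earliest - cycle)          # wait for pending loads
        cycle = max(cycle, earliest) + 1           # issue this instruction
        ready[dest] = cycle + (LOAD_LATENCY - 1 if op == "load" else 0)
    return total

# Naive order: use the loaded value immediately -> pipeline stalls.
naive = [("load", "r1", []), ("add", "r2", ["r1"])]
# Scheduled order: the load is hoisted and independent adds hide its latency.
scheduled = [("load", "r1", []), ("add", "r3", []), ("add", "r4", []),
             ("add", "r5", []), ("add", "r2", ["r1"])]

print(stalls(naive), stalls(scheduled))  # → 3 0
```

Both sequences reach the dependent add at the same cycle, but the scheduled one spends the load latency doing useful work instead of stalling, which is exactly the trade a CISC-to-VLIW translator has to make statically.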


Jon


Re: bit-slice and microcode discussion list

2019-08-23 Thread dwight via cctalk
The concepts of bit-slice coding and optimizing it have always interested me. 
I'm not sure about the correlation to "CISC to VLIW RISC".
Dwight


From: cctalk  on behalf of Al Kossow via cctalk 

Sent: Friday, August 23, 2019 2:37 PM
To: cctalk@classiccmp.org 
Subject: Re: bit-slice and microcode discussion list



> On a possible related note, I am looking for information on converting
> CISC instructions to VLIW RISC.

I'm impressed, cctalk went completely off the rails on the first reply to the
list announcement, and has stayed there.

At least the list itself is staying on topic.



Re: bit-slice and microcode discussion list

2019-08-23 Thread Al Kossow via cctalk



> On a possible related note, I am looking for information on converting
> CISC instructions to VLIW RISC.

I'm impressed, cctalk went completely off the rails on the first reply to the
list announcement, and has stayed there.

At least the list itself is staying on topic.



Re: bit-slice and microcode discussion list

2019-08-23 Thread ben via cctalk

On 8/23/2019 12:00 PM, Paul Koning via cctalk wrote:




On Aug 23, 2019, at 1:47 PM, Noel Chiappa via cctalk  
wrote:


From: Jon Elson



On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:



On a possible related note, I am looking for information on converting
CISC instructions to VLIW RISC.



I think it might end up looking a bit like the optimizers that were
used on drum memory computers back in the dark ages.


I dunno; those were all about picking _addresses_ for instructions, such
that the next instruction was coming up to the heads as the last one
completed.

The _order_ of execution wasn't changed, there was no issue of contention
for computing elements, etc - i.e. all the things one thinks of a
CISC->VLIW translation as doing.


Instruction ordering (instruction scheduling) is as old as the CDC 6600, though 
then it was often done by the programmer.

An early example of that conversion is the work done at DEC for "just in time" conversion 
of VAX instructions to MIPS, and later to Alpha.  I wonder if their compiler technology was 
involved in that.  It wouldn't surprise me.  The Alpha "assembler" was actually the 
compiler back end, and as a result you could ask it to optimize your assembly programs.  That was 
an interesting way to get a feel for what transformations of the program would be useful given the 
parallelism in that architecture.

paul


Why bother, is my view. The problem is threefold: a) the hardware people 
keep changing the internal details; b) a good compiler can see the 
original program structure and optimize for that; c) the flat memory 
model, as in FORTRAN or LISP, where variables are scattered over the 
entire memory space, scrambles your cache.


That said, if you could define the optimization in some sort of macro 
format, changing parameters would be simple and would make for effective, 
unseen changes. Kind of like the early compiler-compilers.



I see RISC as an emulation of the Harvard memory model.
A Harvard model would not take much change in programming, other than not 
having a "SMALL" mode. Two 32-bit-wide buses (data and program) could
be faster than one large data path doing everything, since external 
memory is more drum-like, filling caches rather than serving random 
accesses.

I still favor the CLASSIC instruction set model: OP:AC:IX:OFFSET.
Core memory made the machines slow with the memory restore cycle, giving 
rise to CISC, like the PDP-11, to make better use of that dead cycle.

RISC is only fast because of the PAGE cycle of dynamic memory at
the time.

Too bad everything is all 8/16/32/64+ computing, or say a 36-bit classic
style CPU design could run quite effectively at a few GHz.
Ben.


Re: Raspberry Pi write cycles

2019-08-23 Thread Zane Healy via cctalk


> On Aug 23, 2019, at 12:30 PM, John Klos via cctalk  
> wrote:
> 
> Any Pi processor newer than the original ARM1176JZ should run NetBSD pretty 
> well. My 900 MHz Pi 2 runs NetBSD/vax almost as fast as a VAXstation 4000/30 
> (VLC), which is about 5 VUPS. An original Pi or Pi Zero should be able to 
> emulate a VAX at least as fast as an 11/780.

When I was using an RPi2 for VAX emulation, it showed about 1.6 VUPS 
under OpenVMS 7.3.  I’ve since moved to a VMware cluster for my VAX emulation 
needs.  It can beat my fastest VAX.  

I use RPi3’s for PDP-10 and DPS-8 emulation; I haven’t tried them for VAX 
emulation.  I would like to try a RPi4 for VAX emulation.

Zane





Re: Raspberry Pi write cycles

2019-08-23 Thread John Klos via cctalk
But then it turned out not to be the load at all.  No matter what I ran 
on that Pi, it would corrupt its SD cards in a matter of weeks (the 
symptom was that the fourth bit of some bytes would just stick on).  I 
assume it was just something broken in the Pi itself.


You can simply root off of a USB disk by changing the "root=" parameter in 
cmdline.txt on the FAT partition on your SD card. If you do this, the card 
won't otherwise be used unless you mount it, so your next card should last 
forever. I've got a Suptronics x830 board and enclosure with an 8 TB drive 
which boots this way.
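
For reference, a hedged sketch of the edit John describes. cmdline.txt is a 
single line; only the root= (and rootfstype=) parameters change, and the 
device name below is an assumption - check yours with lsblk first:

```
console=serial0,115200 console=tty1 root=/dev/sda1 rootfstype=ext4 fsck.repair=yes rootwait
```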


Any Pi processor newer than the original ARM1176JZ should run NetBSD 
pretty well. My 900 MHz Pi 2 runs NetBSD/vax almost as fast as a 
VAXstation 4000/30 (VLC), which is about 5 VUPS. An original Pi or Pi Zero 
should be able to emulate a VAX at least as fast as an 11/780.


One issue with CPU intensive things on Raspberry Pis is that even if your 
power supply provides plenty of current, the slightest drop in voltage can 
cause throttling. If you know your power supply is good but see a 
lightning symbol anyway, add "avoid_warnings=2" to config.txt on your SD 
card's FAT partition.
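
As a config fragment (config.txt does accept # comments), the addition John 
mentions is just:

```
# only if you are sure the supply is good: disable the low-voltage warning
avoid_warnings=2
```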


John


Re: KE11-A craze (Was: Current MANX location)

2019-08-23 Thread Ethan Dicks via cctalk
On Fri, Aug 23, 2019 at 2:34 PM Noel Chiappa via cctalk
 wrote:
> >> Speaking of KE11-A's, does anyone know what's behind the bidding wars
> >> on recent eBay KE11-A component board listings, e.g.:
> >>
> >>   https://www.ebay.com/itm/372685033144
>
> Must be two such people, though - I was neither of the top two bidders.
> Odd for there to be so much interest in them.

I have not tried testing mine and I was not one of the bidders.

They are so uncommon that I never expected the boards to come up for
auction so I haven't been looking for them.  I have a long way to go
before I'm likely to even take a stab at them - like probably not this
year.

The goal is to get a working KA11 by 2022 though.  I am still clearing
the queue to make the KA11 a focus for 2020 or 2021.

-ethan


Re: KE11-A craze (Was: Current MANX location)

2019-08-23 Thread Noel Chiappa via cctalk
> From: Ethan Dicks

>> Speaking of KE11-A's, does anyone know what's behind the bidding wars
>> on recent eBay KE11-A component board listings, e.g.:
>>
>>   https://www.ebay.com/itm/372685033144

> Perhaps someone has a broken KE11-A

Must be two such people, though - I was neither of the top two bidders.
Odd for there to be so much interest in them.

Noel


Re: Update: Shipping 50 lb computer from Zell am See, Austria to CA.

2019-08-23 Thread Joshua Rice via cctalk
I’ve had/am having a similar sized machine shipped from Bulgaria to the UK, 
twice. Board first, chassis second. It cost me roughly £60 all in.

I expect a machine making that long a journey would be best sent by ocean 
freight. It’ll take longer than air mail, but a transatlantic flight with a 
machine of that weight will cost a lot. 

If you have to get it air mailed, split it up into separate loads. Shipping it 
in parts will cost less than one lot, and also reduce the risk of the whole lot 
being lost in the post. 

If you can, it’s probably more cost-effective to ship it in parts than as one 
whole lump. 

> On Aug 23, 2019, at 9:19 AM, steven stengel via cctalk 
>  wrote:
> 
> Well, I knew the computer, just not the city.
> 
> It's Zell am See, a small town in western Austria, far from everywhere it 
> seems.
> 
> The computer is a Datapoint 2200 - 50lbs, 10x19x20 inches.
> 
> I want to get it shipped to California, where I live.
> 
> The cheapest option is to just use local Austria mail, but max dimensions are 
> 60x60x100cm, or
> 23.5x23.5x40 inches. That would leave just 2 inches on each of two sides for 
> padding.
> 
> Best option - remove the plastic cover and mail it separately. Correct me if 
> I'm wrong, but the entire bottom of the computer seems to be a solid piece of 
> metal, like the Apple III = very sturdy. The back is a giant metal heat sink.
> 
> I think it's do-able, do you?
> 
> Steve.



Re: bit-slice and microcode discussion list

2019-08-23 Thread Paul Koning via cctalk



> On Aug 23, 2019, at 1:47 PM, Noel Chiappa via cctalk  
> wrote:
> 
>> From: Jon Elson
> 
>>> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:
> 
>>> On a possible related note, I am looking for information on converting
>>> CISC instructions to VLIW RISC.
> 
>> I think it might end up looking a bit like the optimizers that were
>> used on drum memory computers back in the dark ages.
> 
> I dunno; those were all about picking _addresses_ for instructions, such
> that the next instruction was coming up to the heads as the last one
> completed.
> 
> The _order_ of execution wasn't changed, there was no issue of contention
> for computing elements, etc - i.e. all the things one thinks of a
> CISC->VLIW translation as doing.

Instruction ordering (instruction scheduling) is as old as the CDC 6600, though 
then it was often done by the programmer.

An early example of that conversion is the work done at DEC for "just in time" 
conversion of VAX instructions to MIPS, and later to Alpha.  I wonder if their 
compiler technology was involved in that.  It wouldn't surprise me.  The Alpha 
"assembler" was actually the compiler back end, and as a result you could ask 
it to optimize your assembly programs.  That was an interesting way to get a 
feel for what transformations of the program would be useful given the 
parallelism in that architecture.

paul



Re: bit-slice and microcode discussion list

2019-08-23 Thread Noel Chiappa via cctalk
> From: Jon Elson

>> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:

>> On a possible related note, I am looking for information on converting
>> CISC instructions to VLIW RISC.

> I think it might end up looking a bit like the optimizers that were
> used on drum memory computers back in the dark ages.

I dunno; those were all about picking _addresses_ for instructions, such
that the next instruction was coming up to the heads as the last one
completed.

The _order_ of execution wasn't changed, there was no issue of contention
for computing elements, etc - i.e. all the things one thinks of a
CISC->VLIW translation as doing.
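
Noel's description of those drum optimizers can be sketched in a few lines. 
The drum size and execution time below are invented; the point is only that 
the "optimizer" picks addresses, not an execution order:

```python
# Toy sketch of drum "optimum programming" (SOAP-style address assignment).
# Timings are invented: a 50-word drum, 3 word-times to execute an instruction.
DRUM_WORDS = 50
EXEC_TIME = 3

def place(n_instructions, start=0):
    """Pick drum addresses so each next instruction is just arriving
    under the head as the previous one finishes - no rotational wait."""
    addrs = [start]
    for _ in range(n_instructions - 1):
        # fetch takes one word-time, then EXEC_TIME word-times of execution;
        # the word passing the head at that moment is the ideal next address
        addrs.append((addrs[-1] + 1 + EXEC_TIME) % DRUM_WORDS)
    return addrs

print(place(5))  # → [0, 4, 8, 12, 16]
```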

Noel


Re: bit-slice and microcode discussion list

2019-08-23 Thread Jon Elson via cctalk

On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:

On 8/22/19 12:16 PM, Eric Smith via cctalk wrote:

On another mailing list, someone asked if there was any list specifically
about bit-slice design and microcoding. I don't know of one, so I've
created a new mailing list specifically for those topics:

 http://lists.brouhaha.com/mailman/listinfo/bit-slicers

The intent is for the list to cover technical discussion of bit-slice
hardware design and/or microcoding. In other words, discussion of
microcoding that doesn't use bit-slice hardware is fine.


On a possible related note, I am looking for information on converting
CISC instructions to VLIW RISC.


Wow, I think that ends up looking like a compiler, or at 
least the optimizing back end part of a compiler.  I worked 
a bit with a Multiflow TRACE, and their optimizing back end 
was VERY slow, which I assume means it was a complex task to 
reorder all the atomic operations and pack into the long 
instruction words for best throughput.


I think it might end up looking a bit like the optimizers 
that were used on drum memory computers back in the dark ages.


Jon


Re: bit-slice and microcode discussion list

2019-08-23 Thread Peter Corlett via cctalk
On Thu, Aug 22, 2019 at 12:47:28PM -0500, Tom Uban via cctalk wrote:
[...]
> On a possible related note, I am looking for information on converting CISC
> instructions to VLIW RISC.

Do you mean the theoretical basis, or implementing it? And is this
ahead-of-time ("I want to run *this* binary"), or just-in-time ("I want to run
*any* binary, including self-modifying code")?

It's basically a compiler pipeline: deserialise the input code into an AST,
then serialise it into output code. It's just that the input code is actual
machine code rather than human-entered text.
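
As a minimal sketch of that pipeline (the "CISC" accumulator ops and "RISC" 
three-operand ops here are invented, and a real translator would use a proper 
IR rather than tuples):

```python
# Decode a made-up two-operand CISC into IR, then serialise as RISC-ish text.

def decode(cisc):
    """Lift CISC ops into explicit load/ALU IR tuples."""
    ir = []
    for op, operand in cisc:
        if op == "ADD":                              # acc += mem[operand]
            ir.append(("load", "t0", operand))       # memory operand -> temp
            ir.append(("add", "acc", "acc", "t0"))   # register-register add
        elif op == "STORE":                          # mem[operand] = acc
            ir.append(("store", operand, "acc"))
    return ir

def encode(ir):
    """Serialise the IR back out as target 'assembly' text."""
    return [" ".join(insn) for insn in ir]

risc = encode(decode([("ADD", "x"), ("ADD", "y"), ("STORE", "z")]))
# risc == ["load t0 x", "add acc acc t0", "load t0 y",
#          "add acc acc t0", "store z acc"]
```

A just-in-time translator adds caching of translated blocks and handling of 
self-modifying code on top, but the decode-IR-encode skeleton is the same.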

Various real-world implementations exist. QEMU, for example. VMware also does
it for ring-0 code if the host lacks VT-x. UAE definitely does it, and possibly
so does MAME. As you can see, it's basically a solved problem as far as
computer science is concerned.

If you have a copy of the Dragon Book to hand, you may as well give it a
gander. The general concepts are timeless, but the actual nitty-gritty is only
useful if you are still living in the 1970s, so don't spend too much time in
the details of the algorithms because modern machines are so different that
many of the book's design assumptions are now invalidated. (I base this opinion
on my 1986 edition, although the TOC I've seen for the 2006 edition suggests
that it's been dragged kicking and screaming into the 1990s.)

There are *loads* of academic papers that you will have to wade through to
advance from the Dragon Book's description of a kinder era to modern compiler
design. Some of it remains an unsolved problem. You can see why the Dragon Book
handwaves over the hard bits.

To implement something that performs well and will actually be finished
before your new VLIW RISC hardware is obsolete, I recommend you look at
reusing existing compilers rather than implementing your own.

The daddy of backends is LLVM. Unless your VLIW RISC is already supported, you
get to learn how to implement an LLVM backend. It seems to be a common
undergraduate assignment to implement an LLVM backend for an arbitrary RISC CPU
(often MIPS) so you should be able to find myriad terrible implementations on
GitHub to draw inspiration from.

Another possibility is QEMU's TCG. I wasn't really aware of it until I did a
quick search when composing this response, but I like what I see and now want
to look much closer at it.

Once you've done that, you need to decompile your CISC code into your chosen
backend's IR. This involves a lot of tedious gruntwork, but is otherwise not
that difficult.

Have fun!



Update: Shipping 50 lb computer from Zell am See, Austria to CA.

2019-08-23 Thread steven stengel via cctalk
Well, I knew the computer, just not the city.

It's Zell am See, a small town in western Austria, far from everywhere it seems.

The computer is a Datapoint 2200 - 50lbs, 10x19x20 inches.

I want to get it shipped to California, where I live.

The cheapest option is to just use local Austria mail, but max dimensions are 
60x60x100cm, or
23.5x23.5x40 inches. That would leave just 2 inches on each of two sides for 
padding.

Best option - remove the plastic cover and mail it separately. Correct me if 
I'm wrong, but the entire bottom of the computer seems to be a solid piece of 
metal, like the Apple III = very sturdy. The back is a giant metal heat sink.

I think it's do-able, do you?

Steve.


RE: bit-slice and microcode discussion list

2019-08-23 Thread Patrick Mackinlay via cctalk
Not precisely CISC to VLIW RISC, but in my opinion very cool and somewhat 
related.



https://gamozolabs.github.io/fuzzing/2018/10/14/vectorized_emulation.html




From: cctalk  on behalf of Tom Uban via cctalk 

Sent: Friday, August 23, 2019 12:47:28 AM
To: Eric Smith ; General Discussion: On-Topic and Off-Topic 
Posts 
Subject: Re: bit-slice and microcode discussion list

On 8/22/19 12:16 PM, Eric Smith via cctalk wrote:
> On another mailing list, someone asked if there was any list specifically
> about bit-slice design and microcoding. I don't know of one, so I've
> created a new mailing list specifically for those topics:
>
> http://lists.brouhaha.com/mailman/listinfo/bit-slicers
>
> The intent is for the list to cover technical discussion of bit-slice
> hardware design and/or microcoding. In other words, discussion of
> microcoding that doesn't use bit-slice hardware is fine.
>
On a possible related note, I am looking for information on converting
CISC instructions to VLIW RISC.

--tnx
--tom