[cctalk] Re: Mad Magazine latest issue about computers

2023-03-13 Thread Sean Conner via cctalk
It was thus said that the Great Tarek Hoteit via cctalk once stated:
> The latest issue of Mad Magazine (April 2023) is titled “MAD Takes Apart
> Technology”. The pages include reprints of past articles that relate to
> computers, such as “if computers are so brilliant” (Oct 1985), “13 things
> you never want to hear from a computer guy” (May 2005), various y2k, and
> some 50s/60s tech humor. I posted the cover photo here:
> https://ne.thote.it/@tarek/110018157647679272

  Does it include a reprint of Donald Knuth's article "Potrzebie System of
Weights and Measures"?

  -spc


[cctalk] Re: Computer Museum uses GreaseWeazle to help exonerate Maryland Man

2023-01-27 Thread Sean Conner via cctalk
It was thus said that the Great Steve Lewis via cctalk once stated:
> Regarding the 1940s high school yearbook article I mentioned:   I think it
> was 1942 - so the war was still hot.  The two boys dropped the typing class
> since they had signed up for the Service and had other training
> commitments.  

  My grandfather, who served in WWII [1], knew I had an interest in
computers, so he got me a portable typewriter (which I still have) and a
typing book (first published in 1923) so that I may learn how to type.  He
said that not only would it serve me well with computers, but also in the
military. [2].

  I do recall there being typing classes, both in middle and high school,
but never took it as a class.

  -spc

[1] He served in the Navy, on a sub, in the Pacific.  He never did talk
much about his service, although I do know that at least one sub he
served on was sunk by Japan.

[2] He probably felt that knowing how to type would keep me from the
front lines and most likely safe.  Can't blame him on that logic.



[cctalk] Re: AI applied to vintage interests

2023-01-16 Thread Sean Conner via cctalk
It was thus said that the Great Sellam Abraham via cctalk once stated:
> Chris,
> 
> Apparently, ChatGPT 3 was trained on a large codebase, and in the reviews
> I've watched, as well as in my own experience, it is amazingly astute at
> generating (usually) working code in just about any language you can think
> of, including assembly languages of various flavors.

  And some of that codebase included MS-DOS itself:

https://github.com/microsoft/MS-DOS

  -spc (It's only versions up to 2.1 though)



[cctalk] Re: what is on topic?

2022-12-21 Thread Sean Conner via cctalk
It was thus said that the Great Chris via cctalk once stated:
>  I just don't remember anyone declaring this to be an 8-bit list. Back
>  when I was a member no one said pc stuff was off topic. Which is why I
>  asked. And wasn't aware JW didn't own or run the list anymore.

  One rule I remember (from the early 2000s) is that anything 10 years or
older is on-topic.  At that point, it was pretty much stuff up to around
1990 or thereabouts.  I personally feel that MS-DOS is fine, and even
Windows up through 2.x is okay, but Windows 3 or higher is probably not a
fit for this list (aka, anything Wintel is not fine).

  As far as 1990 goes, that is now 30 years ago (nearly 33 in fact).  The
SGI I used in 1992 is probably on topic (as it was never mainstream, but a
cool machine nonetheless), but not a PC from 1992. 

  -spc



Re: interesting DEC Pro stuff on eBay

2022-04-21 Thread Sean Conner via cctalk
It was thus said that the Great Mike Katz via cctalk once stated:
> 
> I could spend pages just describing how the 68K chip just blows away the 
> 8086 considering they were both released at about the same time.

  Agree here.  I loved the 68K and have fond memories of writing programs in
it.  But while the x86 has been Frankensteined into 64 bits, I can't see
the 68K ever becoming a 64-bit architecture.  I don't think there are
enough unused bits in the instruction formatting for that.

> For crying out loud the 6809 even though it only addresses 64K is a more 
> powerful processor than the 8086.  Even with the 8086 clocking faster 
> than the 6809.

  As much as I like the 6809 [1] I think you are overselling it vs. the
Intel 8086.  Both only support a single IRQ [2], but the 8086 has more
registers, and with the segmentation, you can have split code and data of
64K each.  On the flip side, the 6809 does have some sweet addressing modes
(especially indirect indexed) and easy position independent code (love
that).  

  -spc (And I've written assembly for all three architectures)

[1] I even wrote an emulator: https://github.com/spc476/mc6809

[2] Okay, technically, the 6809 also has a "fast" interrupt, which only
saves the PC and CC registers.


Re: 80286 Protected Mode Test

2021-03-06 Thread Sean Conner via cctalk
It was thus said that the Great Rob Jarratt via cctalk once stated:
> I have a DECstation 220 (Olivetti M250E) which is failing POST on a "simple
> test of the 80286 protected mode". It says in a service manual I have that
> for this test the CPU is set in the protected mode, the machine status word
> is checked to see whether it indicates the protected mode and then exits
> protected mode. This test seems to be failing. Is there any possible
> explanation for this other than a failed 80286 CPU? Could there be any
> external reason? This board suffered some battery leak damage. Clearly the
> 80286 is working well enough to execute this diagnostic and send some text
> to the screen, so it basically works.

  There might be damage to the keyboard controller that could cause the
issue.  Once the 80286 is in protected mode, there is no way to get out of
protected mode except via the RESET signal.  If I remember correctly, you
could program the keyboard controller to send a RESET signal to get out of
protected mode.  The keyboard controller also managed the state of
address line A20, which is another important factor on PCs.

  -spc



Re: APL\360

2021-01-30 Thread Sean Conner via cctalk
It was thus said that the Great Bill Gunshannon via cctalk once stated:
> On 1/29/21 4:12 PM, Will Cooke via cctalk wrote:
> >
> >>On 01/29/2021 2:58 PM Fred Cisin via cctalk  wrote:
> >>
> >
> >>'=' and '==' makes possible what is probably the most common error, and
> >>which the compiler doesn't catch:
> >>if (x = 3) . . . /* sets x to 3 and gives TRUE for the condition */
> >>I imagine that there are probably some pre-processors that would return a
> >>WARNING for it.
> >>
> >
> >Modern Visual Studio and GCC both flag the "=" in a condition, I believe.  
> >But if you're shipping code with 260+ warnings, who would see one more.
> 
> And the problem here is really quite plain and simple.
> Why are you shipping code with any warnings?

  Because sometimes they aren't.  Example:

gcc -std=c99 -g -Wall -Wextra -pedantic -fPIC -g -shared -o lib/tcc.so 
src/tcc.c -ltcc
src/tcc.c: In function `cclua_get_symbol':
src/tcc.c:528: warning: ISO C forbids assignment between function pointer and 
`void *'

  ISO C may forbid that, but POSIX requires it, and I'm compiling on a POSIX
system, but there isn't (to my knowledge) a way to state that.  Yes, I could
probably suppress that one warning, but for me, it's easier to ignore on
POSIX systems.

  Also, have you tried clang with the highest warning level?  It's useless. 
I tried it once, only to be told "warning: struct foo has padding bytes
added" (or something to that effect).  Okay, so I pack the structure, only
to get "warning: struct foo doesn't have any padding".  Yeah, real useful,
that.

  -spc



Re: APL\360

2021-01-29 Thread Sean Conner via cctalk
It was thus said that the Great Norman Jaffe via cctalk once stated:
> 
> It happened to me as well - I found hundreds of warnings in the code and,
> after getting permission to address them, I was fired 

  Wait ... you got *permission* and were still *fired*?  Have I just been
fortunate in where I've worked my entire career? [1]

> because 'we would
> have to recompile the Windows version due to the changes you made'; the
> source code was reverted to the state before I made the changes.  

  Wouldn't you have to recompile the Windows version for updates?  Or was
the company too cheap (or was unable to) run regression tests?

> I refuse
> to have their product on any system that I have involvement with...

  Can you name names?  Or do you need to protect yourself?

  -spc

[1] Possibly yes.  


Re: APL\360

2021-01-29 Thread Sean Conner via cctalk
It was thus said that the Great Fred Cisin via cctalk once stated:
> >Whenever I start a new job the first thing I do today is enable
> >-Werror; all warnings are errors. And I’ll fix every one. Even
> >when everyone claims that “These are not a problem”. Before
> >that existed, I’d do the same with lint, and FlexeLint when I
> >could get it.
> 
> On Fri, 29 Jan 2021, wrco...@wrcooke.net wrote:
> >That's exactly what I did and was then told I was likely to get fired for
> >it.  I left that job soon after.
> >"A person who never made a mistake never tried anything new." -- Albert
> >Einstein
> 
> 
> Similarly, "You don't have time to write comments as you go along.  You 
> can go back and add them in AFTER the program is working."  Of course, as 
> soon as it "seems to be working", "We're not paying you to mess with stuff 
> that's already DONE.  We have ANOTHER project that you have to get on 
> immediately."
> 
> It's not good to be in a job where they won't let you be thorough in error 
> checking nor let you write comments.

  I had a manager that told me not to be so pedantic about verifying the
incoming SIP messages because otherwise we (the Company) were going to go
out of business in six months if we didn't get the product OUT THE DOOR!

  I compromised on it---I didn't remove the checks, but for those that
didn't matter to our processing the message, I just logged the violation and
continued processing.  No one in the department was familiar with SIP at the
time, so I figured that would at least help us debug any issues.

  Of course, one, said manager left six months later because of burnout [1],
and two, I'm still at the same job five years later.

  -spc

[1] He was originally a developer who was forced into a management role,
something he was *very bad at*, and he still did development work,
not fully trusting anyone underneath him (nor operations, but that's
a whole other story).


Re: APL\360

2021-01-29 Thread Sean Conner via cctalk
It was thus said that the Great Will Cooke via cctalk once stated:
> 
> > On 01/29/2021 4:42 PM David Barto via cctalk  wrote:
> 
> > Whenever I start a new job the first thing I do today is enable -Werror;
> > all warnings are errors. And I’ll fix every one. Even when everyone
> > claims that “These are not a problem”. Before that existed, I’d do the
> > same with lint, and FlexeLint when I could get it.
> 
> That's exactly what I did.  I was promptly told I was likely to get fired
> for it.  

  WHY?  Why would you get fired for fixing warnings?  Would it make some
manager upstream look bad or something?  

  -spc


Re: Microsoft open sources GWBASIC

2020-05-23 Thread Sean Conner via cctalk
It was thus said that the Great Noel Chiappa via cctalk once stated:
> 
> "The [8008] was commissioned by Computer Terminal Corporation (CTC) to
> implement an instruction set of their design for their Datapoint 2200
> programmable terminal. As the chip was delayed and did not meet CTC's
> performance goals, the 2200 ended up using CTC's own TTL-based CPU instead."
> 
> The 8008 was started before the 4004, but wound up coming out after it. (See
> Lamont Wood, "Datapoint", pg. 73.) This is confirmed by its original name,
> 1201 - the 4004 was going to be named the 1202, until Faggin convinced
> Intel to name it the 4004.

  I found this YouTube video

https://www.youtube.com/watch?v=g9_FYRAfyqQ

about the register set of the 4004, 8008, 8080, Z80, 8086 (and so on) to be
interesting.  I don't think it's 100% accurate, but it gives (in my opinion)
a decent overview of the history of the x86 register set.

  -spc



Re: State of New Jersey needs COBOL programmers

2020-04-05 Thread Sean Conner via cctalk
It was thus said that the Great Fred Cisin via cctalk once stated:
> >>Edsger Dijksta said, "The use of COBOL cripples the mind; its teaching 
> >>should, therefore, be regarded as a criminal offense."
> 
> On Sun, 5 Apr 2020, geneb wrote:
> >I'm pretty sure he said that about BASIC, and I'm totally bummed he died 
> >before I could bitch slap him over it. ;)
> 
> well, close.
> His BASIC quote is:
> "It is practically impossible to teach good programming to students that 
> have had a prior exposure to BASIC: as potential programmers they are 
> mentally mutilated beyond hope of regeneration."
> 
> Here is one copy of his 1975 paper, "How Do We Tell Truths That Might 
> Hurt":
> https://www.cs.virginia.edu/~evans/cs655/readings/ewd498.html
> 
> I don't know what language(s), if any, that he liked.

  Math.

  -spc (Had some comp-sci profs who didn't like programming or computers)




Re: Converting C for KCC on TOPS20

2019-12-10 Thread Sean Conner via cctalk
It was thus said that the Great David Griffith via cctalk once stated:
> 
> I'm trying to convert some C code[1] so it'll compile on TOPS20 with KCC. 
> KCC is mostly ANSI compliant, but it needs to use the TOPS20 linker, which 
> has a limit of six case-insensitive characters.  Adam Thornton wrote a Perl 
> script[2] that successfully does this for Frotz 2.32.  The Frotz codebase 
> has evolved past what was done there and so 2.50 no longer works with 
> Adam's script.  So I've been expanding that script into something of my 
> own, which I call "snavig"[3].  It seems to be gradually working more and 
> more, but I fear the problem is starting to rapidly diverge because it 
> still doesn't yield compilable code even on Unix.  Does anyone here have 
> any knowledge of existing tools or techniques to do what I'm trying to do?

  If you are doing this on Linux, an approach is to compile the code there,
then run 'nm' over the object files, and it will output something like:

[spc]lucy:~/source/boston/src>nm main.o
00ef t CgiMethod
 U CgiNew
 r __PRETTY_FUNCTION__.21
 U __assert_fail
 U crashreport_core
 U crashreport_with
 U gd
 U gf_debug
 T main
 U main_cgi_get
 U main_cgi_head
 U main_cgi_post
 U main_cli

  The last column contains the identifiers; the second column is the type of
identifier, and the first column is the value.  What you want to look for
are types 'T' (externally visible function), 'C' (externally visible
constant data) and 'D' (externally visible data).  It is these identifiers
that will need to be unique within their first six characters.

Something like:

[spc]lucy:~/source/boston/src>nm globals.o  | grep ' [CDT] ' 
0041 T GlobalsInit
0004 C c_adtag
0004 C c_class
0004 D c_conversion
0004 C c_days
0004 C c_tzmin
 D c_updatetype
0004 C c_webdir
0008 D cf_emailupdate
0004 C g_L
0004 C g_blog
0004 C g_templates
0020 D gd
0d09 T set_c_conversion
0beb T set_c_updatetype
0dbd T set_c_url
0cab T set_cf_emailupdate

(but over all object files).  I would then generate a unique six-character
identifier for each of these, and slap the output into a header file
like:

#define GlobalsInit id0001
#define c_adtag id0002
#define c_class id0003
#define c_conversion id0004

and then include this file for every compilation unit.  I think that would
be the easiest thing to do.

  -spc



Re: Pleas ID this IBM system....

2019-05-21 Thread Sean Conner via cctalk
It was thus said that the Great Jay West via cctalk once stated:
> No modern datacenter that I have seen still uses a raised floor *OTHER
> THAN* about 3 inches for a ground plane. There is a reason for that... the
> old idea of forced cooling under the floor and mixing power & data cables
> there has been found to be a truly bad idea.

  I recall walking through the former IBM main site in Boca Raton [1] in the
late 90s with a few friends.  We were amazed to see labs with six-foot deep
pits---the tiles long gone and all cables removed.  It was quite the sight
to see.

  -spc

[1] Google street view:

https://www.google.com/maps/@26.3912323,-80.1059743,3a,75y,272.91h,88.2t/data=!3m7!1e1!3m5!1sLqyUfSPPBaTwXUGrMFebxQ!2e0!6s%2F%2Fgeo2.ggpht.com%2Fcbk%3Fpanoid%3DLqyUfSPPBaTwXUGrMFebxQ%26output%3Dthumbnail%26cb_client%3Dmaps_sv.tactile.gps%26thumb%3D2%26w%3D203%26h%3D100%26yaw%3D60.71677%26pitch%3D0%26thumbfov%3D100!7i13312!8i6656

The building is rated for a class-5 hurricane.  It's quite
impressive.



Re: DG One owners? I think I have something.

2019-05-15 Thread Sean Conner via cctalk
It was thus said that the Great Chuck Guzis via cctalk once stated:
> A kindly donor sent me an external numeric keypad from Data General.  It
> has the right keycaps and color for a DG One laptop.   Model number
> 2568.   Connection is via a 3-terminal plug; basically a miniature
> stereo headphone plug.
> 
> I'll give this up to a One collector who can identify this for certain.
> Otherwise, I'll probably repurpose it.   FWIW, it appears to be
> unused--not even the rubber feet have dirt or wear.

  Ooh!  I'll have to check my DG One to see if there's a port on the laptop
for it.  I have the laptop, printer, and external floppy; this would be a
nice addition to the set.

  -spc



Re: What do to with an Internet-connected PDP-11?

2019-04-29 Thread Sean Conner via cctalk
It was thus said that the Great ben via cctalk once stated:
> On 4/28/2019 11:34 PM, Cameron Kaiser via cctalk wrote:
> >>Maybe it would be possible to get a text only browser running?
> >
> >I think Gopher would be a better fit, personally. That's easy to write,
> >parse and display.
> >
> That might be true, but what sites still provide that service.

  There are quite a few active gopher sites out there.  You can start with:

gopher://gopher.floodgap.com/1/world

but wait!  There's more!

gopher://gopher.floodgap.com/1/fun/xkcd
gopher://gopher.floodgap.com/1/feeds/latest
gopher://gopher.altexxanet.org/1/textfiles.com
gopher://1436.ninja/1/Project_Gutenberg_in_Gopherspace
gopher://hngopher.com/
gopher://sdf.org/1/users/julienxx/Lobste.rs

and some phlogs (gopher blogs):

gopher://gopher.black/1/moku-pona
gopher://i-logout.cz/1/en/bongusta/

(although there's some overlap between these two).

  -spc



Re: IBM 3174 C 6.4 Microcode Disks?

2019-02-20 Thread Sean Conner via cctalk
It was thus said that the Great Kevin Monceaux via cctalk once stated:
> Grant,
> 
> On Sat, Feb 16, 2019 at 07:36:11PM -0700, Grant Taylor via cctalk wrote:
>  
> > If the 2513 you have is the one that was used for this, I'd love to see 
> > the config, if it's still on there.  That would very likely settle 
> > things for my curiosity.
> 
> I have the 2513 now.  I'm new to Cisco router commands and configuration.
> If you could give me a crash course on the commands that would display the
> parts of the configuration that would settle things for your curiosity, I'll
> see what it has.

  The Cisco "command line" is quite nice and will always show you what it's
expecting next when you press '?' at any point.  If I recall correctly,
"show config" will show the current saved configuration to the screen, and
"show running-config" will show the currently running configuration (it will
be different if you've made changes without saving them).  You don't even
need to type out the whole thing---just enough to disambiguate the command
(I think "sh conf" is enough, maybe even "sh c").

  -spc



Re: OT Parts houses & scrappers

2019-01-26 Thread Sean Conner via cctalk
It was thus said that the Great Grant Taylor via cctalk once stated:
> On 1/26/19 6:26 PM, William Donzelli via cctalk wrote:
> >Learning how to judge scrap value is the first thing to do. Do research 
> >and gain experience.
> 
> That sounds all well and good.  Until you something unexpected and 
> unknown when you are at an auction for something else.  There's only so 
> much self education you can do on a smart phone 10 minutes before the 
> auction.
> 
> I've found the best policy is to be honest with people in such 
> situations.  If a scrapper is planing on getting $200 in raw materials, 
> I'm not going to waist anybody's time bidding $50.
> 
> Be polite, ask questions, don't take anything personally.  Try not to 
> insult people.

  I remember attending an auction at a local university some twenty years
ago [1] with some friends, hoping to score some computer equipment.  There
was a sizable crowd there, but two bidders stood out.  One was a guy there
with a young kid, and another was an older gentleman.  The older gentleman
had a bankroll and was *continuously* outbidding *everbody* and seemed
pissed that there were other bidders there [2].  The man with the young kid
started bidding on an old fusball table for his kid and the older gentleman
was determined to get it---drive the price up something fierce and I'm sure
he priced the other fusball table according to the driven up price at the
auction.

  He outbid me for a pile of equipment (I was really interested in one item,
not the lot as a whole).  I approached him afterwards, offering to buy the
one item and he blew me off, not wanting to bother to even part with the one
item right there, for cash.

  It ruined the auction for me, the father, and others I'm sure.

  -spc

[1] I was a former student at said university.

[2] How dare these civilians bid on stuff!  


Re: ELTRAN THE COMPILER ANY DOCS? (NOT THE SEMICONDUCTOR STUFF!)))

2019-01-13 Thread Sean Conner via cctalk
It was thus said that the Great ben via cctalk once stated:
> On 1/13/2019 2:06 PM, Chuck Guzis via cctalk wrote:
> 
> >Being an old hard-of-seeing guy myself, I much prefer mixed-case to
> >all-caps.  All caps destroys the "shape" of words.
> 
> Where is the $%!@ codepage for the ASR-33. Get all caps with a REAL keyboard
> and better text (with a new ribbon) than a tiny 24x80 console window on
> 1280 x 1024 screen.

  I configured a console window setting with a 36 pt font.  It's *real easy*
to read, even from across the room on my 1920x1178 display.

  -spc (the next font size, 48 pt, doesn't fit 80x24 on the screen ...)



Re: ELTRAN THE COMPILER ANY DOCS? (NOT THE SEMICONDUCTOR STUFF!)))

2019-01-13 Thread Sean Conner via cctalk
It was thus said that the Great John Ball via cctalk once stated:
> >ELTRAN THE COMPILER 
> >ANY DOCS? ANY ONE? USED IT?
> >(NOT THE SEMICONDUCTOR STUFF!))
> >
> >ED#
> 
> Hey ed, you might want to check your Caps Lock key there, bud. ;)

  My dad used to do that.  At the time, I thought it was because AOL (where
he checked email) automatically upper cased all emails.  A few years ago he
got a new computer, a Chromebook, and he immediately bitched that he
couldn't send out emails in all caps.

What?

  Yup.  He did it intentionally because, as he said, most of his friends
were hard of seeing, and by using ALL CAPS the letters were bigger and thus,
easier to read.  But the new Chromebook removed the CAPSLOCK key.

  The fact that most email clients [1] could use a larger font was lost on
him.

  -spc

[1] Unless you are using a physical terminal or running a PC in text
mode only.


Re: Want/Available list

2018-12-20 Thread Sean Conner via cctalk
It was thus said that the Great Chris Hanson via cctalk once stated:
> On Dec 20, 2018, at 12:03 PM, Carlo Pisani via cctalk  
> wrote:
> > 
> > a forum with a bazaar should be more appropriate
> > frankly this mail list looks like spam, and it's going irritating
> > since it's difficult to follow and to handle
> 
> What do you mean by this? Do you mean you would prefer to visit a web page
> to read the latest posts on cctalk rather than have them delivered to you
> via email?
> 
> I often find it fascinating that people talk about email lists as if
> they’re some huge inconvenience or imposition. Just set up rules to file
> them to their own mailboxes, read them when you feel like it, they work
> great.

  You are assuming they are using software that allows filtering, and that
they know about the filtering capability.  Also, there are people who
consider one email per week as being "way too busy---stop sending so much".

  -spc (Then there are the people who treat email as a to-do list.  You
seriously use email for *communication?*  How quaint ... )



Re: PET peve thing... Editors

2018-12-12 Thread Sean Conner via cctalk
It was thus said that the Great allison via cctalk once stated:
> On 12/12/2018 03:04 PM, Sean Conner via cctalk wrote:
> > It was thus said that the Great allison via cctalk once stated:
> >> The whole thing comes from a project for myself... 
> >> I wanted a very basic screen based editor written in 8080/8085/z80 asm
> >> and compact
> >> (as in under 4K).  I figured first lets inquire of the Internet to see
> >> if I need to and code exists...
> >   I remember typing in TED.ASM from one of the PC magazines in the late 
> > 80s. 
> > Yes, it's for MS-DOS, but:
> >
> > 1) The 8086 is somewhat, kind of, source compatible with the
> > 8080/Z80 (if you squint hard enough)
> 
> Your not serious?  Z80 or 8080 to 8086 is not too bad but the other way
> is plain nuts.

  I learned assembly on the 6809, then the 8086 (technically the 8088). I've
always heard that it was designed to make porting code from the 8080/Z80
easy.  But I never really learned the assembly for the 8080/Z80.  I only
mentioned it because I think (if I recall) TED.COM was limited to editing
around 60K or so (one segment's worth of memory).

  But I can see it won't fit your needs.

  -spc



Re: PET peve thing... Editors

2018-12-12 Thread Sean Conner via cctalk
It was thus said that the Great allison via cctalk once stated:
> The whole thing comes from a project for myself... 
> I wanted a very basic screen based editor written in 8080/8085/z80 asm
> and compact
> (as in under 4K).  I figured first lets inquire of the Internet to see
> if I need to and code exists...

  There is this page:

http://www.texteditors.org/cgi-bin/wiki.pl?CPMEditorFamily

  That might have what you want.

  -spc (On the basis that CP/M ran on the 8080/Z80 CPU ... )




Re: PET peve thing... Editors

2018-12-12 Thread Sean Conner via cctalk
It was thus said that the Great allison via cctalk once stated:
> The whole thing comes from a project for myself... 
> I wanted a very basic screen based editor written in 8080/8085/z80 asm
> and compact
> (as in under 4K).  I figured first lets inquire of the Internet to see
> if I need to and code exists...

  I remember typing in TED.ASM from one of the PC magazines in the late 80s. 
Yes, it's for MS-DOS, but:

1) The 8086 is somewhat, kind of, source compatible with the
8080/Z80 (if you squint hard enough)

2) It was 3K when assembled into TED.COM

3) Was full screen.  And quite basic.  

  I'm not sure how hard it would be to translate it to 8080/Z80.

  -spc



Re: Text encoding Babel.

2018-11-30 Thread Sean Conner via cctalk
It was thus said that the Great Guy Dunphy via cctalk once stated:
> 
> Anyway, back on topic (classic computing.) Here's an ascii chart with some
> control codes highlighted.
> 
>   http://everist.org/ASCII/ascii_reuse_legend.png
> 
> I'm collecting all I can find on past (and present) uses of the control
> codes. Especially the ones highlighed in orange. Not having a lot of
> success in finding detailed explanations, beyond very brief summaries in
> old textbooks.
> 
> Note that I'm mostly interested in code interpretations in communications
> protocols. Their use in local file encodings not so much, since those are
> the domain of legacy application software and wouldn't clash with
> redefinition of what the codes do, in future applications.

  I've found this page:

http://www.aivosto.com/articles/control-characters.html

to be helpful in describing all the control codes as defined by ANSI (the C0
set from 0x00 to 0x1F and 0x7F) and by the ISO (the C1 set from 0x80 to
0x9F).  I also found reading the ECMA-48 standard to be helpful for the C1
set as well (as well as understanding web pages that supposedly describe so
called ANSI terminal codes, which are really ECMA-48 codes).

  -spc



Re: Text encoding Babel. Was Re: George Keremedjiev

2018-11-30 Thread Sean Conner via cctalk
It was thus said that the Great Keelan Lightfoot via cctalk once stated:
> > I see no reason that we can't have new control codes to convey new
> > concepts if they are needed.
> 
> I disagree with this; from a usability standpoint, control codes are
> problematic. Either the user needs to memorize them, or software needs
> to inject them at the appropriate times. There's technical problems
> too; when it comes to playing back a stream of characters, control
> characters mean that it is impossible to just start listening. It is
> difficult to fast forward and rewind in a file, because the only way
> to determine the current state is to replay the file up to that point.

  [ and further down the message ... ]

> I'm going to lavish on the unicode for this example, so those of you
> properly unequipped may not see this example:
> 
> foo := 푡ℎ푖푠 푖푠 푎 푠푡푟푖푛푔 혁헵헶혀 헶혀 헮 헰헼헺헺헲헻혁
> printf(푡ℎ푒 푠푡푟푖푛푔 푖푠 ① 푖푠푛푡 푡ℎ푎푡 푒푥푐푖푡푖푛푔, foo)
> if 혁헵헶혀 헶혀 헮 헽헼헼헿헹혆 헽헹헮헰헲헱 헰헼헺헺헲헻혁 foo ==
> 푡ℎ푖푠 푖푠 푎푙푠표 푎 푠푡푟푖푛푔, 푏푢푡 푛표푡 푡ℎ푒 푠푎푚푒
> 표푛푒 { 혁헵헶혀 헶혀 헮헹혀헼 헮 헰헼헺헺헲헻혁
> ...
> 
> An atrocious example, but a good demonstration of my point. If I had a
> toggle switch on my keyboard to switch between code, comment and
> string, it would have been much simpler to construct too!

  Somehow, the compiler will have to know that "푡ℎ푖푠 푖푠 푎 푠푡푟푖푛푔" is a
string while "혁헵헶혀 헶혀 헮 헰헼헺헺헲헻혁" is a comment to be ignored.  You lamented
the lack of a toggle switch for the two, but existing languages, like C,
already have them, '"' is the "toggle" for strings, while '/*' and '*/' are
the toggles for comment (and now '//' if you are using C99).  It's still
something you have to "type" (or "toggle" or "switch" or somehow indicate
the mode).

  The other issue is how such information is stored, and there, I only see
two solutions---in-band and out-of-band.  In-band would be included with the
text.  Something along the lines of (where <ESC> stands for the ASCII ESC
character, code 27, and this is an example only):

foo := <ESC>_this is a string<ESC>\ <ESC>^this is a comment<ESC>\
printf(<ESC>_the string is <ESC>[1p isn't that exciting<ESC>\,foo)

  But this has a problem you noted above---it's a lot harder to seek through
the file to arbitrary positions.  Grant Taylor stated another way of doing
this:

> What if there were (functionally) additional bits that indicated various
> other (what I was calling) stylings?
> 
> I think that something along those lines could help avoid a concern I
> have.  Namely how do search for an A, what ever ""style it's in.  I
> think I could hypothetically search for bytes ~> words (characters)
> containing ( ) () 01x1 (assuming that the
> proceeding don't cares are set appropriately) and find any format of A,
> upper case, lower case, bold, italic, underline, strike through, etc.

  There are several problems with this.  One, how many bits do you set aside
per character?  8?  16?  There are potentially an open ended set of stylings
that one might use.  Second problem---where do you store such bits?  Not to
imply this is a bad idea, just that there are issues that need to be
resolved with how things are done today (how does this interact with UTF-8
for instance?  Or UCS-4?).

Then there's out-of-band storage, which stores such information outside the
text (an example---I'm not saying this is the only way to store such
information out-of-band):

foo := this is a string this is a comment
printf(the string is 1 isn't that exciting,foo)

---

string 8-23
string 50-63
string 65-84
replacement 64
comment 25-41

  This has its own problems---namely, how do you keep the two together?  It
will either be a separate file, which could get separated, or part of the
text file, but then you run into the problem of reading Microsoft Word files
circa 1986 with today's tools.

  -spc (I like the ideas, but the implementations are harder than it first
appears ... )


Re: Text encoding Babel. Was Re: George Keremedjiev

2018-11-27 Thread Sean Conner via cctalk
It was thus said that the Great Fred Cisin via cctalk once stated:
>
> >>I like the C comment example; Why do I need to call out a comment with
> >>a special sequence of letters? Why can't a comment exist as a comment?
> 
> Why not a language even more self-documenting than COBOL, wherein the main 
> body is text, and special markers to identify the CODE that corresponds?

  In the book _Programmers at Work_ there's a picture of a program Jef
Raskin [1] wrote that basically embeds BASIC into a word processor document.

  -spc

[1] He started the Macintosh project at Apple.  It was later taken over
by Steve Jobs and taken in a different direction.


Re: Text encoding Babel. Was Re: George Keremedjiev

2018-11-27 Thread Sean Conner via cctalk
It was thus said that the Great Keelan Lightfoot via cctalk once stated:
> I'm a bit dense for weighing in on this as my first post, but what the heck.
> 
> Our problem isn't ASCII or Unicode, our problem is how we use computers.
> 
> Going back in time a bit, the first keyboards only recorded letters
> and spaces, even line breaks required manual intervention. As things
> developed, we upgraded our input capabilities a little bit (return
> keys! delete keys! arrow keys!), but then, some time before graphical
> displays came along, we stopped upgrading. We stopped increasing the
> capabilities of our input, and instead focused on kludges to make them
> do more. We created markup languages, modifier keys, and page
> description languages, all because our input devices and display
> devices lacked the ability to comprehend anything more than letters.
> Now we're in a position where we have computers with rich displays
> bolted to a keyboard that has remained unchanged for 150 years.

  Do you have anything in particular in mind?

> Unpopular opinion time: Markup languages are a kludge, relying on
> plain text to describe higher level concepts. TeX has held us back.
> It's a crutch so religiously embraced by the people that make our
> software that the concept of markup has come to be accepted "the way".
> I worked with some university students recently, who wasted a
> ridiculous amount of time learning to use LaTeX to document their
> projects. Many of them didn't even know that page layout software
> existed, they thought there was this broad valley in capabilities with
> TeX on one side, and Microsoft Word on the other. They didn't realize
> that there is a whole world of purpose built tools in between. Rather
> than working on developing and furthering our input capabilities,
> we've been focused on keeping them the same. Markup languages aren't
> the solution. They are a clumsy bridge between 150 year old input
> technology and modern display capabilities.
> 
> Bold or italic or underlined text shouldn't be a second class concept,
> they have meaning that can be lost when text is conveyed in
> circa-1868-plain-text. 

  But I can still load and read circa-1968-plain-text files without issue,
on a computer that didn't even exist at the time, using tools that didn't
exist at the time.  The same can't be said for a circa-1988-Microsoft-word
file.  It requires either the software of the time, or specialized software
that understands the format.

> I've read many letters that predate the
> invention of the typewriter, emphasis is often conveyed using
> underlines or darkened letters. We've drawn this arbitrary line in the
> sand, where only letters that can be typed on a typewriter are "text",
> Everything else is fluff that has been arbitrarily decided to convey
> no meaning. I think it's a safe argument to make that the primary
> reason we've painted ourselves into this unexpressive corner is
> because of a dogged insistence that we cling to the keyboard.

  There were conventions developed for typewriters to get around this. 
Underlining text indicated italicized text (if the typewriter didn't have
the capability---some did).

  In fact, typewriters have more flexibility than computers do even today. 
Within the restriction of a typewriter (only characters and spaces) you
could use the back-space key (which did not erase the previous
character) and re-type the same character to get a bold effect.  You could
back-space and hit the underscore to get underlined text.  You could
back-space and hit the ` key to get a grave accent, and the ' to get an
acute accent.  With a bit more fiddling with the back-space and adjusting
the paper via the platen, you could get umlauts (either via the . or '
keys).

  I think the original intent of the BS control character in ASCII was to
facilitate this behavior, but alas, nothing ever did.  Shame, it's a neat
concept.

> I like the C comment example; Why do I need to call out a comment with
> a special sequence of letters? Why can't a comment exist as a comment?

  The smart-ass answer is "because the compiler only looks at a stream of
text and needs a special marker" but I get the deeper question---is a plain
text file the only way to program?

  No.  There are other ways.  There are many attempts at so-called "visual
languages" but none of them have been used to any real extent.  Yes, there
are languages like Visual Basic or Smalltalk, but even with those, you still
type text for the computer to run.

  The only truly alternative programming language I know of is Excel.
Seriously.  That's about the closest thing you get to a comment existing as
a comment without special markers, because you don't include those as part
of the program (specifically, you will exclude those cells from the
computation lest you get an error).

> Why is a comment a second class concept? When I take notes in the
> margin, I don't explicitly need to call them out as notes. This
> extends to 

Re: Text encoding Babel. Was Re: George Keremedjiev

2018-11-27 Thread Sean Conner via cctalk
It was thus said that the Great Grant Taylor via cctalk once stated:
> On 11/27/2018 04:43 PM, Keelan Lightfoot via cctalk wrote:
> >
> >Unpopular opinion time: Markup languages are a kludge, relying on plain 
> >text to describe higher level concepts.
> 
> I agree that markup languages are a kludge.  But I don't know that they 
> require plain text to describe higher level concepts.
> 
> I see no reason that we can't have new control codes to convey new 
> concepts if they are needed.
> 
> Aside:  ASCII did what it needed to do at the time.  Times are different 
> now.  We may need more / new / different control codes.
> 
> By control codes, I'm meaning a specific binary sequence that means a 
> specific thing.  I think it needs to be standardized to be compatible 
> with other things -or- it needs to be considered local and proprietary 
> to an application.

  [ snip ]

> I don't think of bold or italic or underline as second class concepts. 
> I tend to think of the following attributes that can be applied to text:
> 
>  · bold
>  · italic
>  · overline
>  · strike through
>  · underline
>  · superscript exclusive or subscript
>  · uppercase exclusive or lowercase
>  · opposing case
>  · normal (none of the above)

  But there are defined control codes for that (or most of that list
anyway).  It's not ANSI, but an ISO standard (ISO 6429, aka ECMA-48).
Let's see ...

^[[1m bold
^[[3m italic
^[[53m overline
^[[9m strike through
^[[4m underline
^[[0m normal

  The superscript/subscript could be done via another font

^[[11m ... ^[[19m

  Maybe even the opposing case case ... um ... yeah.

  By the way, ^[ is a single character representing the ASCII ESC character
(27).  

> I see no reason that the keyboard can't have keys / glyphs added to it.
> 
> I'm personally contemplating adding additional keys (via an add on 
> keyboard) that are programmed to produce additional symbols.  I 
> frequently use the following symbols and wish I had keys for easier 
> access to them:  ≈, ·, ¢, ©, °, …, —, ≥, ∞, ‽, ≤, µ, 
> ≠, Ω, ½, ¼, ⅓, ¶, ±, ®, §, ¾, ™, ⅔, ¿, ⊕.

  Years ago I came across an IBM Model M keyboard that had the APL character
set on the keyboard, along with the normal characters one finds.  I would
have bought it on the spot if it weren't for a friend of mine who saw it 10
seconds before I did.

  I did recently get another IBM Model M keyboard (an SSK model) that had
additional labels on the keys:

http://boston.conman.org/2018/10/31.2

The nice thing about the IBM Model M is the keycaps are easy to replace.

> I will concede that many computers and / or programming languages do 
> behave based on text.  But I am fairly confident that there are some 
> programming languages (I don't know about computers) that work 
> differently.  Specifically, simple objects are included as part of the 
> language and then more complex objects are built using the simpler 
> objects.  Dia and (what I understand of) Minecraft come to mind.

  You might be thinking of Smalltalk.

  -spc



Re: Text encoding Babel. Was Re: George Keremedjiev

2018-11-25 Thread Sean Conner via cctalk
It was thus said that the Great Bill Gunshannon via cctalk once stated:
> 
> On 11/25/18 5:42 PM, Grant Taylor via cctalk wrote:
> > On 11/23/18 5:52 AM, Peter Corlett via cctalk wrote:
> >> Worse than that, it's *American* ignorance and cultural snobbery 
> >> which also affects various English-speaking countries.
> >
> > Please do not ascribe such ignorance with such a broad brush, at least 
> > not without qualifiers that account for people that do try to respect 
> > other people's cultures.
> >
> >
> Q.  What do you call someone who speaks three languages?
> 
> A. Trilingual.
> 
> Q.  What do you call someone who speaks two languages?
> 
> A. Bilingual.
> 
> Q.  What do you call someone who speaks one language?
> 
> A. American.

  As an American, a friend of mine from Sweden (who himself speaks at least
three languages) considered me multilingual.  Of course, my other languages
are BASIC, Assembly, C, Forth ...

  I even heard of a high school in Tennessee that said computer languages
fulfill the "foreign language requirements" ... who'da thunk?

> OK, it's a joke. (I'm American and speak 4 languages.)

  -spc (Who speaks English and perhaps a dozen words in German, but
plenty of computer languages ... )



IEFBR14 (was Re: IND$FILE)

2018-11-19 Thread Sean Conner via cctalk
It was thus said that the Great jim stephens via cctalk once stated:
> 
> IEFBR14, if you all are not familiar with MVS / MVT batch programming, is a 
> program which immediately terminates w/o any return codes by doing an 
> assembly language return to the caller of the job step via the contents 
> of R14 of the processor, which is also the return address.

  I've always been amused by IEFBR14 ever since I heard about it.  I first
came across it by this quote:

Every program has at least one bug and can be shortened by at least
one instruction---from which, by induction, one can deduce that
every program can be reduced to one instruction which doesn't work.

IEFBR14 was this program---one instruction long, and it contained a bug:

http://en.wikipedia.org/wiki/IEFBR14

  -spc (The fix doubled the size of the program---such bloat!)


Re: Object-oriented OS [was: Re: Microsoft-Paul Allen]

2018-10-29 Thread Sean Conner via cctalk
It was thus said that the Great Tomasz Rola via cctalk once stated:
> Ok guys, just to make things clearer, here are two pages from wiki:
> 
> https://en.wikipedia.org/wiki/Object-oriented_operating_system
> 
> https://en.wikipedia.org/wiki/Object-oriented_programming
> 
> What I was thinking back at the time of premiere: classes, objects
> derived from the classes, user able to make his own object from
> system-provided class or define class of his own, or define his own
> class and inherit from other class, including system-provided one.

  Yes, the typical method of OOP, which is not what Alan Kay had in mind
when he developed Smalltalk:

I thought of objects being like biological cells and/or individual
computers on a network, only able to communicate with messages (so
messaging came at the very beginning -- it took a while to see how
to do messaging in a programming language efficiently enough to be
useful).
-- Alan Kay

Basically, very small programs that talk to each other to do stuff.  Today
this is known as "microservices" [1].

> Examples:
> 
>   - an object pretends to be a disk object, but is double-disk
> partition or zip file

  You can do this now on Linux using the FUSE driver---this allows a user
process to service file system requests.  To bring this back to topic, QNX
(from the 80s) and Plan-9 (from the 90s) also allowed this, and were easier
to use than FUSE (to be honest).  But not one of Linux, QNX or Plan-9
could be considered an "object oriented operating system."  Fancy that.

>   - an object pretends to be file object but in fact it is a
> composition of few different files, mapped into virtual file-like
> object (so as to avoid costly copying)

  I think I would classify this as a variant on the first example.  

>   - an object says it is a printer but is a proxy, connected via
> serial-line object to another such serial-line object on remote
> computer where the real printer sits (connected via parallel, as
> usually)

  Under QNX, this was a trivial operation.  It was probably pretty trivial
under Plan-9 (depending upon how they handled printer queues).  Unix
(including Linux) has something like this, but the name escapes me since I
do almost no printing whatsoever.

  I do know that of the last N printers I've had [2] (actually, my
girlfriend has---she's the one who prints more than I do) were all "plug on,
turn on, oh look!  The computer found it on the network---print!"

>   - object with execution thread, aka active object (in 199x
> nomenclature -> aka process), can be serialized and migrated to
> another computer without big fuss either via system provided
> migration service or via (really easy to write in such setup)
> user's own
>   - same active object, serialized and stored to file because I gotta
> go home and have to turn computer off, so I can resurrect it next
> morning

  These two are related, and the latter one is actually a bit easier to
accomplish if you are doing it to all the processes on a computer.  The
major problem though is dealing with resources other than the CPU.  For
instance, a process has an open file it's working with when it's migrated. 
Problems I see right off the bat:

* Does the file exist on the destination?
* If it does, does the process have permissions?
* Now you have issues with syncing the file to the destination.

And if the file is served off a network, then you have issues with network
connections---i.e. they break!  (Which is an issue today with hibernation.)
You can't reliably provide this functionality invisibly---you have to pretty
much involve the process in question in the migration.

> Plus, some kind of system programming language - I had no idea what
> Smalltalk was and I still have no idea but I might have swallowed
> that.

  An object-oriented, image-based programming language.  Except for some
critical routines written in assembler, the entire system (operating system,
compiler, GUI, utilities, etc.) was written in Smalltalk, and every
"object" could be inspected and modified at runtime (including the operating
system, compiler, GUI, utilities, etc.).

> I think it was possible to have this. But, not from MS. And as time
> shows, not from anybody.

  Citation needed.  

> On Mon, Oct 22, 2018 at 07:34:32PM -0700, Chris Hanson wrote:
>  
> > A lot of Windows 95 is implemented using COM, which is probably
> > where the description of it as “object-oriented” comes from.
> 
> Well, I am not going to bet my money on this. What you wrote might be true
> but I would like something, say a blog or article, in which author shows
> how I can count those COM objects.

  How many threads are running?  There's your count.

> I tried to verify your statement and the earliest Windows which could be
> claimed to be built from many COMs was Windows 8. But the truth is, I have
> departed from 

Re: modern stuff

2018-10-24 Thread Sean Conner via cctalk
It was thus said that the Great ben via cctalk once stated:
> On 10/24/2018 12:31 PM, Paul Koning wrote:
> 
> >It's true that the original 8086 instruction set lives on with all its 
> >warts, and many more added over the years.  And yes, I guess that you 
> >*can* run them in 32 bit segmented mode if you're crazy.  But that's not 
> >how they are actually used.  The same applies to other successful 
> >architectures: MIPS, IBM 360.  Or programming languages -- consider C for 
> >a particularly horrid example, or worse yet C++.
> 
> All the computer science books push RISC now. EVEN KNUTH has gone to the 
> DARK SIDE.

  The first RISC chips appeared in the 80s, making them over 30 years old
now.  Even the MIPS and SPARC architectures (RISC-based) are nearly (if not
already) 30 years old (I used systems with both in the early 90s).

  If anything, the DARK SIDE won in that we seem to be perpetually stuck
with a glorified 8080 (that is so complex that it contains an additional,
embedded not-quite-so-glorified 8080 to help it boot up! [1]).

  -spc (It kind of reminds of the MCP from TRON ... )

[1] https://en.wikipedia.org/wiki/Intel_Management_Engine#Disabling_the_ME


Re: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]

2018-02-20 Thread Sean Conner via cctalk
It was thus said that the Great Eric Christopherson via cctalk once stated:
> On Tue, Feb 20, 2018 at 5:30 PM, dwight via cctalk 
> wrote:
> 
> > In order to connect to the outside world, you need a way to queue event
> > based on cycle counts, execution of particular address or particular
> > instructions. This allows you to connect to the outside world. Other than
> > that it is just looking up instructions in an instruction table.
> >
> > Dwight
> >
> 
> What I've always wondered about was how the heck cycle-accurate emulation
> is done. In the past I've always felt overwhelmed looking in the sources of
> emulators like that to see how they do it, but maybe it's time I tried
> again.

  It depends upon how cycle-accurate you want it.  My own MC6809 emulator [1]
keeps track of cycles on a per-instruction basis, so it's easy to figure out
how many cycles have passed.  Hardware emulation can be done between
instructions by updating per the number of cycles passed (if required).  I
don't have the code for the MC6840 (a timer chip) in my MC6809 emulator
repository (it's still somewhat under construction) but I do update the
timer based upon the elapsed cycle count of the previous instruction [2].

  -spc (Code available upon request if you want to look at it)

[1] https://github.com/spc476/mc6809

[2] It's not emulating any existing machine.  Rather, I'm emulating a
system with a few serial ports (MC6850) and a few timer chips
(MC6840).  I have plans on adding a few floppy controllers (MC6843)
and a DMA chip (6844).


Re: Intel 8085 - interview?

2018-02-09 Thread Sean Conner via cctalk
It was thus said that the Great allison via cctalk once stated:
> 
> The industry was loaded with that the 6502 series also had that going on
> as well as the 6809 and others.

  Do you have any information about undocumented opcodes for the 6809?  

  -spc


Re: Computing from 1976

2017-12-30 Thread Sean Conner via cctalk
It was thus said that the Great Fred Cisin via cctalk once stated:
> On Sat, 30 Dec 2017, Murray McCullough via cctalk wrote:
> >I was perusing my old computer magazine collection the other day and
> >came across an article entitled: “Fast-Growing new hobby, Real
> >Computers you assemble yourself”, Dec. 1976. It was about MITS,
> >Sphere, IMSAI and SWT. 4K memory was $500. Yikes! Even more here in
> >Canada. Now this is true Classic Computing. Have a Happy New Year
> >everyone. May the computing gods shine down on us all in 2018.
> >Happy computing.  Murray  :)
> 
> OK, a little arithmetic exercise for you.
> (a 16C is nice for this, but hardly necessary)

  Sounds like fun.

> "Moore's Law", which was a prediction, not a "LAW", has often been 
> mis-stated as predicting a doubling of speed/capacity every 18 months.
> 
> 1) Figure out how many 18 month invtervals since then, and what 4k 
> "should' have morphed into by now.

  1) 28 doublings since 1975.  

((2017 - 1975) * 12) / 18 = 28

  4K should (had we truly doubled everything every 18 months) now be 1T
  (terabyte):

2^12 = 4K
2^(12 + 28) = 2^40 ~ 1T

> 2) What did Gordon Moore actually say in 1965?

  That the number of components on an integrated circuit doubles every
  year (he revised this to every two years in 1975).

> 3) How much is $500 of 1976 money worth now?

  It depends upon how you calculate it.  I'm using this page [1] for the
  calculation, and I get:

Current data is only available till 2016. In 2016, the relative
price worth of $500.00 from 1976 is:

$2,110.00 using the Consumer Price Index
$1,680.00 using the GDP deflator
$2,400.00 using the value of consumer bundle
$2,000.00 using the unskilled wage
$2,450.00 using the Production Worker Compensation
$3,340.00 using the nominal GDP per capita
$4,960.00 using the relative share of GDP

> 4) Consider how long it took to use a text editor to make a grocery 
> shopping list in 1976.  How long does it take today?

  I would think the same amount of time.  Typing is typing.

> Does having the grocery list consist of pictures instead of words, with 
> audio commentary, and maybe Smell-O-Vision (coming soon), improve the 
> quality of life?   

  For me, not really.

> How much does it help to be able to contact your 
> refrigeratior and query its knowledge of its contents?

  It could be helpful, but with the current state of IoT, I would not want
  to have that ability.

> (Keep in mind, that although hardware expanded exponentially, according to 
> Moore's Law, Software follows a corollary of Boyle's Law, and expands to 
> fill the available space and use all of the available resources - how much 
> can "modern" software do in 4K?, and how much is needed to boot the 
> computer and run a "modern" text editor?)

  EMACS is lean and mean compared to some of the "text editors" coming out
  today, based upon Javascript frameworks.  It's scary.

> 5) What percentage of computer users still build from kits, or from 
> scratch?

  I would say significantly less than 1%.  Say, 5% of 1%?  That's probably
  in the right ballpark.

> 6) What has replaced magazines for keeping in touch with the current 
> state of computers?

  The world wide web, although I do miss the Byte magazine of the 70s and
  80s.  Not so much the 90s.

  -spc (Yeah, I realize these were probably rhetorical in nature ... )

[1] http://www.measuringworth.com/uscompare/


Re: Pine (was: Re: cctalk Digest, Vol 17, Issue 20)

2017-10-22 Thread Sean Conner via cctalk
It was thus said that the Great Fred Cisin via cctalk once stated:
> >>I'm considering doing something that actually
> >>downloads my Gmail content locally and keeps it
> >>in sync periodically, but I haven't really
> >>looked at what's necessary for that.
> 
> On Sun, 22 Oct 2017, Angel M Alganza via cctalk wrote:
> >Have a look at mbsync/isync if you still haven't
> >done anything about it on those two years.  LOL
> >It does exactly what you wanted.
> >Cheers,
> >�ngel
>   ^
> example
> 
> A minor problem - A lot of mail that I receive won't display properly on 
> PINE (such as the first letter of your name in your signature!)
> I end up forwarding some mail FROM PINE, TO GMail to be able to read it!

  I have:

LANG=en_US.UTF-8
LC_COLLATE=C

as part of my environment, and I'm using a font that supports UTF-8.  Then
again, I'm using mutt, which supports locales and so it's only the really
malformed emails that end up garbled on my end.

  Note---UTF-8 is now 25 years old, so it should be fine for this list 8-P

  -spc



Re: If C is so evil why is it so successful?

2017-04-12 Thread Sean Conner via cctalk
It was thus said that the Great Alfred M. Szmidt once stated:
>It was thus said that the Great Noel Chiappa via cctalk once stated:
>> > From: Alfred M. Szmidt
>> 
>> > No even the following program:
>> >   int main (void) { return 0; }
>> > is guaranteed to work
>> 
>> I'm missing something: why not?
> 
>  Yeah, I'm having a hard time with that too.  I mean, pedantically, it
>should be:
> 
> 
>  #include <stdlib.h>
>  int main(void) { return EXIT_SUCCESS; }
> 
> Pedantically, it does not matter -- a return from main is equivalent
> to an exit(), and exit(0) is sensibly defined, and EXIT_SUCCESS can
> also be different from 0 (even though I don't think such a platform
> exists).
> 
> Similiarly for EXIT_FAILURE ...

  There's this
(http://stackoverflow.com/questions/8867871/should-i-return-exit-success-or-0-from-main/8868139#8868139):

Somebody asked about OpenVMS. I haven't used it in a long time, but
as I recall odd status values generally denote success while even
values denote failure. The C implementation maps 0 to 1, so that
return 0; indicates successful termination. Other values are passed
unchanged, so return 1; also indicates successful termination.
EXIT_FAILURE would have a non-zero even value.

  And certainly VMS is on topic for this list.

  -spc (So ... pedantically speaking, who's correct?)



Re: If C is so evil why is it so successful?

2017-04-12 Thread Sean Conner via cctalk
It was thus said that the Great Alfred M. Szmidt via cctalk once stated:
> 
>> From: Alfred M. Szmidt
> 
>> No even the following program:
>>   int main (void) { return 0; }
>> is guaranteed to work
> 
>I'm missing something: why not?
> 
> It boils down to pedantism.  The encoding of the above is ASCII, and
> the encoding type of a C program is implementation defined.  

  Name *ONE* computer language where this *ISN'T* the case.  Until then,
I'll consider this a completely bogus claim.  Meanwhile, is *this* better?

??=include <stdlib.h>
int main(void)
??<
  return EXIT_SUCCESS;
??>

So that it might be possible to convert this obviously ASCII rendition of a
C program into EBCDIC?

> The other
> thing is that the abstract machine defined in C can be utterly bogus,
> i.e. not capable of executing anything due to various implementation
> specified environment limitations.

  Citation required.  Plus a real-world example.  Because otherwise I think
you're skirting very close to Troll Territory here ... 

> Of course, this is all academic ... and I don't know any such idiotic
> implementation.

  Or an annoying level of pedanticism here ...

  -spc (Seriously, citation required ... )


Re: If C is so evil why is it so successful?

2017-04-12 Thread Sean Conner via cctalk
It was thus said that the Great Noel Chiappa via cctalk once stated:
> > From: Alfred M. Szmidt
> 
> > No even the following program:
> >   int main (void) { return 0; }
> > is guaranteed to work
> 
> I'm missing something: why not?

  Yeah, I'm having a hard time with that too.  I mean, pedantically, it
should be:

#include <stdlib.h>
int main(void) { return EXIT_SUCCESS; }

where EXIT_SUCCESS is 0 on every platform except for some obscure system no
one has heard of but managed to influence the C committee back in the late
80s.

> PS: There probably is something to the sports car analogy, but I'm not going
> to take a position on that one! :-) Interesting side-question though: is
> assembler more or less like a sports car than C? :-)

  One thing for sure---assembly language (for a given architecture) is
probably better defined (less undefined/underspecified behavior) than C.

  -spc



Re: C (was: The iAPX 432 and block languages)

2017-04-11 Thread Sean Conner via cctalk
It was thus said that the Great Jecel Assumpcao Jr. via cctalk once stated:
> Sean Conner wrote two great posts on Mon, 10 Apr 2017 21:43:29 -0400
> 
> These are all very good points. I agree I was exaggerating by saying the
> iAPX432 and 8086 couldn't run C. After all, the language was born on the
> PDP-11 and that was limited to either 64KB or 128KB. So any C programs
> for that machine could be trivially recompiled to run on either Intel
> processor. But I certainly wouldn't want to port the C version of Spice
> to DOS, for example (I was given the job of porting the Fortran version
> of Spice from the PDP-11 to the Burroughs B6900 and can tell you that
> tales of Fortran's compatibility are greatly exaggerated, but that is
> another story).

  I can relate.  I have the code to Viola [1] and it no longer compiles
cleanly [2].  I have cleaned up the code enough to get it to produce an
executable, but man ... the code ... it *barely* runs on a 32-bit system and
immediately crashes on a 64-bit system, mainly due to the deeply baked-in
assumption that

sizeof(int) == sizeof(long) == sizeof(char *) == sizeof(void *)

which is not always the case (even C says as much).  But it was written in a
time of flux, just after C was standardized and not everyone had an ANSI-C
compiler.

> The reason I used [bp-2] instead of [bp] in my second example is that I
> supposed the latter was for the dynamic links (pointer to who called us)
> so I needed the static link (pointer to who defined us) to be somewhere
> else. I did not bother trying to remember how the ENTER and LEAVE
> instructions work so my examples probably are not compatible with them:
> 
> https://pdos.csail.mit.edu/6.828/2012/readings/i386/ENTER.htm

  Yeah, I recently wrote code that used ENTER (works on both 32-bit and
64-bit x86 CPUs) just to figure out how it works.  I never found the
description clear and NONE of the examples actually used it for nested stack
frames (sigh).

  -spc

[1] http://www.viola.org/

[2] Conflicting types for malloc() and fprintf(), and use of an
obsolete header.


Re: The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-10 Thread Sean Conner via cctalk
It was thus said that the Great Jecel Assumpcao Jr. via cctalk once stated:
> Sean Conner via cctalk wrote on Mon, 10 Apr 2017 17:39:57 -0400
> >   What about C made it difficult for the 432 to run?  
> > 
> >   -spc (Curious here, as some aspects of the 432 made their way to the 286
> > and we all know what happened to that architecture ... )
> 
> C expects memory addresses to look like integers and for it to be easy
> to convert between the two. If your architecture uses a pair of numbers
> or an even more complicated scheme then you won't be able to have a
> proper C but only one or more less than satisfactory approximations.

  Just because a ton of C code was written with that assumption doesn't make
it actually true.  A lot of C code assumes a byte-addressable, two's
complement architecture but C (technically Standard C) doesn't require
either and goes out of its way to warn programmers *not* to make such
assumptions.

  The C Standard is very careful to note what is and isn't allowed with
respect to memory and much of what is done is technically illegal and
anything can happen.  

> The iAPX432 and 286 used logical segments. So there is no sequence of
> increment or decrement operations that will get you from a byte in one
> segment to a byte in another segment. For the 8086 that is sometimes
> true but can be false if the "segments" (they should really be called
> relocation registers instead) overlap.

  Given:

p1 = malloc(10);
p2 = malloc(65536);

  There is no legal way to increment *or* decrement one to get to the other. 
It's not even guaranteed that p2 > p1.

> Another feature of C is that it doesn't take types too seriously when
> dealing with pointers. This means that a pointer to an integer array and
> a pointer to a function can be mixed up in some ways. 

  This is an issue, but mostly with K&R C (which had even less type checking
than ANSI C).  These days a compiler will warn if you try to pass a function
even with *no* cranking of the warning levels.

  Yes, C has issues, but please try not to make ones up for modern C.

  But if the point was, back in the day (1982) that this *was* an issue,
then yes, I would agree (to a point).  But I would bet that had the 432 been
successful, a C compiler would have been produced for it.

> If an application
> has been written like that then the best way to run it on an
> architectures like these Intel ones is to set all segments to the same
> memory region and never change them during execution. This is sometimes
> called the "tiny memory model".
> 
> https://en.wikipedia.org/wiki/Intel_Memory_Model
> 
> Most applications keep function pointers separate from other kinds of
> pointers and in this case you can set the code segment to a different
> area than the data and stack for a total of 128KB of memory (compared to
> just 64KB for the tiny memory model).
> 
> The table in the page I indicated shows options that can use even more
> memory, but that requires non standard C stuff like "far pointers" and I
> don't consider the result to be actually C since you can't move
> programer to and from machines like the VAX or 68000 without rewriting
> them.

  "Far" pointers exist in MS-DOS compilers to support mixed memory-model
programming, where library A wants objects larger than 64K while library B
doesn't care either way.  Yes, it's a mess, but that's pragmatism for you.

  But there's still code out there with such remnants, like zlib.  For
example:

ZEXTERN int ZEXPORT inflateBackInit OF((z_stream FAR *strm, int windowBits,
unsigned char FAR *window));

ZEXTERN, ZEXPORT, OF and FAR exist to support different C compilers over the
ages.  And of those, ZEXTERN and ZEXPORT are for Windows, FAR for MS-DOS (see
a pattern here?) and OF for pre-ANSI C compilers.

  -spc



The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-10 Thread Sean Conner via cctalk
It was thus said that the Great Eric Smith via cctalk once stated:
> 
> The Intel iAPX 432 was also designed to explicitly support block-structured
> languages. The main language Intel pushed was Ada, but there was no
> technical reason it couldn't have supported Algol, Pascal, Modula, Euclid,
> Mesa, etc. just as well. (Or just as poorly, depending on your point of
> view.)
> 
> The iAPX 432 could not have supported standard C, though, except in the
> sense that since the 432 GDP was Turing-complete, code running on it could
> provide an emulated environment suitable for standard C.

  What about C made it difficult for the 432 to run?  

  -spc (Curious here, as some aspects of the 432 made their way to the 286
and we all know what happened to that architecture ... )


Floating point routines for the 6809

2017-03-27 Thread Sean Conner via cctalk

  Some time ago I came across the MC6839 ROM which contains floating point
routines for the 6809.  The documentation that came with it stated:

Written for Motorola by Joel Boney, 1980
Released into the public domain by Motorola in 1988
Docs and apps for Tandy Color Computer by Rich Kottke, 1989

  What I haven't been able to find is the actual *source code* to the
module.  Is it available anywhere?  I've been playing around with the MC6839
on an emulator but having the source would clear up some issues I've been
having with the code.

  -spc



Re: I hate the new mail system

2017-03-07 Thread Sean Conner via cctalk
It was thus said that the Great Fred Cisin via cctalk once stated:
> On Tue, 7 Mar 2017, Eric Christopherson via cctalk wrote:
> >What makes it so that other mailing lists don't unsubscribe people when
> >bounces occur?
> 
> This list displays (not "full headers"):
> Date: Tue, 7 Mar 2017 20:23:42 -0600
> From: Eric Christopherson via cctalk 
> Reply-To: Eric Christopherson ,
> "General Discussion: On-Topic and Off-Topic Posts" 
> 
> To: "General Discussion: On-Topic and Off-Topic Posts" 
> 
> Subject: Re: I hate the new mail system
> 
> 
> Some yahoos would do it as:
> Date: Tue, 7 Mar 2017 20:23:42 -0600
> From: "Eric Christopherson echristopher...@gmail.com [cctalk]"
>
> To: "General Discussion: On-Topic and Off-Topic Posts" 
> 
> Subject: Re: I hate the new mail system
> 
> 
> Notice, that they also munge the From:, but they include the author's 
> email address buried within the munged From:.
> 
> It is not the "correct" From: and Reply-to:,
> but, apparently some "modern" systems will not tolerate it done 
> "correctly".
> Given that it is NOT going to be done "correctly", which among us are 
> capable of successfully working around it?
> 
> This discussion is a little like a pedantic grammar argument.

  Well, RFC-5322 allows a mailbox-list in the From: header, and an
address-list for the Reply-To:, so I don't think this is illegal.  And I just
noticed this RFC:

6854 Update to Internet Message Format to Allow Group Syntax in the
 "From:" and "Sender:" Header Fields. B. Leiba. March 2013. (Format:
 TXT=20190 bytes) (Updates RFC5322) (Status: PROPOSED STANDARD) (DOI:
 10.17487/RFC6854)

  Given that, I think adding to the Reply-To: header is kosher now.

  -spc (Hmm ... looks like I have to update my email parser ... )



Re: I hate the new mail system

2017-02-28 Thread Sean Conner via cctalk
It was thus said that the Great Chuck Guzis via cctalk once stated:
> On 02/28/2017 03:40 PM, Paul Berger wrote:
> > Well I am using Thunderbird 45.7.1 and I see this "Chuck Guzis via 
> > cctalk " as "From" in your message.
> > 
> 
> Hmmm, this is very puzzling.  Your message does indeed show up as being
> from "Paul Berger", by the message you replied to shows up as being from
> "CCtalk"
> 
> Is someone tweaking the system as we speak?

  Here are the critical headers from your (Chuck) email:

List-Post: 
From: Chuck Guzis via cctalk 
Reply-To: Chuck Guzis ,
 "General Discussion: On-Topic and Off-Topic Posts"
 
Errors-To: cctalk-boun...@classiccmp.org
Sender: "cctalk" 

  The from address is cct...@classiccomp.org.  The reply-to addresses
contains your (Chuck) email address (blanked out) in addition to the cctalk
address.  An email client will use the reply-to address, but I could see it
using the list-post address when replying (since that informs the email
client that this email is from a mailing list).  

  So here we have From: munging (that's a twist!) and Reply-To: munging but
by adding an address instead of outright replacement (interesting solution to
munging [1][2]).  So when you (not Chuck) hit reply, it goes to your (Chuck)
email address and cctalk.

  -spc (Who runs his own email server ... )

[1] "Reply-To" Munging Considered Harmful
http://www.unicom.com/pw/reply-to-harmful.html

[2] Reply-To Munging Considered Useful
http://marc.merlins.org/netrants/reply-to-useful.html





Re: Cassette Interface Assistance

2017-02-28 Thread Sean Conner via cctalk
It was thus said that the Great Tony Duell via cctalk once stated:
> On Tue, Feb 28, 2017 at 4:21 PM, allison via cctalk
>  wrote:
> 
> 
> >
> > The tape recorders used had no agc.  They were simple portables and used the
> > mic or line input and headphone output.
> 
> I have to disagree with you there. I have just looked at the documentation 
> for a
> couple of the Radio Shack tape recorders commonly used with TRS80s, and
> both have automatic gain control on recording.

  I recall using a normal cassette tape recorder with my Coco back in the
day.  I don't recall a gain control, but I do recall setting the volume
control way up.  Never had a problem reading back tapes.

  -spc



Re: [cctalk-requ...@classiccmp.org: confirm 38290c8a992491eda604beff5a06ff20cd7e85f5]

2017-01-31 Thread Sean Conner
It was thus said that the Great Kyle Owen once stated:
> On Tue, Jan 31, 2017 at 3:18 PM, Paul Koning  wrote:
> 
> >
> > I'm on comcast.net and I get these too.  Once a week or so on average.
> > The puzzle is that cctalk is the ONLY list that does this.  I subscribe to
> > a whole pile of them, and as far as I know they all complain about bounces,
> > but none of the others are actually getting bounces.
> >
> 
> I also get them, and this is also the only Mailman list that issues them
> that I'm subscribed to. Quite puzzling for sure.
> 
> In case there is a time correlation to this issue, these are the dates I've
> received them:
> 21 Oct. 2016
> 7 Nov. 2016
> 23 Nov. 2016
> 29 Nov. 2016
> 30 Dec. 2016
> 10 Jan. 2017
> 14 Jan. 2017

  This mailing list is unique in that it's actually two, cctalk and cctech. 
Messages sent to cctech are also copied to cctalk.  So some messages get
"crossposted" to both lists (with the same Message-ID).  Could that
have something to do with this?

  -spc





Re: Unknown 8085 opcodes

2017-01-12 Thread Sean Conner
It was thus said that the Great Fred Cisin once stated:
> >>jsr puts
> >>fcc 'Hello, world!',13,0
> >>clra
> or the classic:
>JMP START1
>DATA2: DB . . .
>   DB . . .
>START1: MOV DX, OFFSET DATA2 
> Which was heavily used because
>MOV DX, OFFSET DATA3
>. . .
>DATA3: DB . . .
> would pose "forward reference" or "undefined symbol" problems for some 
> assemblers.
> 
> Even for manual assembly, or 'A' mode of DEBUG.COM, it was handy to 
> already know the address of the data before you wrote the steps to access 
> it.
> 
> On Thu, 12 Jan 2017, Mouse wrote:
> >Mine can't do that automatically, but it can with a little human
> >assist; the human would need to tell it that the memory after the jsr
> >is a NUL-terminated string, but that's all it would need to be told.
> 
> Not all strings are null-terminated.  In CP/M, and MS-DOS INT21h Fn9, the 
> terminating character is '$' !
> "If you are ever choosing a termination marker, choose something that 
> could NEVER occur in normal data!"
> Also, strings may, instead of a terminating character, be specified with a 
> length, or with a start and end address.

  I've seen the high bit set on the last character, again mostly in the
8-bit world.

  -spc



Re: Unknown 8085 opcodes

2017-01-12 Thread Sean Conner
It was thus said that the Great Chuck Guzis once stated:
> On 01/12/2017 07:35 AM, Mouse wrote:
> 
> >> Does your disassembler do flow analysis?
> > 
> > I doubt it, because none of the meanings I know for the term are 
> > anything my disassembler does.
> 
> A disassembler that can do flow analysis is a breath of fresh air when
> working with larger binaries.  Essentially, it looks at the code and
> makes some decisions about its content.
> 
> Thus, a target of an already-disassembled jump must also be code, not
> data, for example, so it's possible to disassemble large sections of
> code automatically.  Sections not referenced as code or data are held as
> "unknown" code until some guidance from the user is provided.

  But are there disassemblers that can handle something like:

jsr puts
fcc 'Hello, world!',13,0
clra
...

puts    puls  x
puts1   lda   ,x+
        beq   puts9
        jsr   putchar
        bra   puts1
puts9   pshs  x
        rts

  I recall that being a somewhat common idiom in 8-bit code of the 80s.

  -spc



Re: 6502 code

2016-12-13 Thread Sean Conner
It was thus said that the Great drlegendre . once stated:
> @Sean
> 
> I was wondering the same, but perhaps he needs physical hardware for some
> specific purposes, like timing and so forth?

  The 6502 (as well as many of the other 8-bit CPUs of that era) are
deterministic to the point where published cycle counts for instructions
are an accurate measure of speed.  All you need is a cycle count to know how
fast a sequence of code is running.

  -spc



Re: 6502 code

2016-12-12 Thread Sean Conner
It was thus said that the Great dwight once stated:
> Hi
> 
>  There has been so much PDP and other stuff lately I kind of feel out of place
> asking about 6502 stuff.
> 
> Anyway, I've mentioned on the 6502.org that QuickSort is not always the 
> fastest
> sort. So I wrote a 6502 assembly sort but don't have a machine big enough to 
> test it
> on. I've only got my KIM-1 just working.
> 
> I was hoping someone would like to help me out, possible a Commodore64,
> maybe even a PET or Apple II.
> 
> It needs about 24 page zero bytes and about 5K of RAM.
> 
> It sorts 1K of 16bit integers.
> 
> Anyway, if someone would like to help, let me know. I've made several passes
> through the code and believe it to be close to bug free but know I'm bound
> to have a couple left.
> 
> See it as a challenge!

  If you have a modern system, you could always download a 6502 emulator and
test it on that.  Such a tactic wasn't unheard of back in the day---if I
recall correctly, that's how Gates & Co. tested their first BASIC: on an
emulated 8080.

  -spc



Re: Archived viruses, was Re: Reasonable price for a complete SOL-20 system?

2016-10-24 Thread Sean Conner
It was thus said that the Great et...@757.org once stated:
> 
> Early Macs definitely had viruses, a few that I got from thrift stores
> still have the viruses on them. I don't think there is any memory
> protection at all. Software selection for MacOS was pretty crappy, and it
> was hard to get under the hood. So protecting yourself from them would be
> very difficult on the Mac platform. All the file fork BS, dev tools hard
> to get. Also, just like the iPhone pretty much everything was
> shareware/commercial, less community stuff than the PC. I feel bad for the
> people that grew up on MAcOS versus MS-DOS.

  Memory protection does not protect you from a virus.  It can protect other
running processes from being modified (if they belong to other users they
can't be infected at all; other processes owned by the user it's possible,
depending upon the system [1]) but that's it.

  -spc

[1] I would say "yes" in general---you do have to be able to debug your
own programs and thus, intercept and modify a running process (at
the very least, to set a break point).


Re: Archived viruses, was Re: Reasonable price for a complete SOL-20 system?

2016-10-24 Thread Sean Conner
It was thus said that the Great allison once stated:
> On 10/23/2016 09:15 PM, Mouse wrote:
> >> My favorite formatter was my S100 crate with CP/M, [it's] impossible
> >> to give a single user OS without background processing a virus.
> > I disagree.  I see nothing about "a single-user OS without background
> > processing" that would prevent a virus from infecting other programs,
> > even including the OS, when it's run, and potentially doing something
> > else as well.
> >
> > Perhaps you are using some meaning of "virus" other than "piece of
> > software that infects other software to propagate itself"?  That's the
> > only meaning that makes any sense to me.
> >
> Its highly unlikely as first it would have to install itself and do so
> without corrupting the OS. CP/M-80 is a machine monitor with a file system
> and lacking most of the usual read the disk and "do something" automation. 
> The only automation in CP/M is logging a drive which is reading the
> directory and mapping used blocks. So the initial load would have to be
performed by the user.  Trojan maybe, social engineered for sure, virus
> no.  The key is you have to actually execute a file for action to happen. 
> In CP/M you can disk dump sectors and never execute them, formatting is
> even more benign, the disk is never read save for a post format verify.

  MS-DOS had CP/M at its heart, and it had its fair share of virii (viruses? 
What is the plural of a computer virus?).  The discussion Liam linked to
(https://groups.google.com/forum/#!topic/comp.os.cpm/V1-zYzA6Uzg) seems to
echo my own thoughts here---technically, it's possible, but not probable due
to the resource constraints (mainly memory) inherent in CP/M.  There is
nothing that also requires a virus to run a background process---it can
certainly modify the existing program to infect other programs, but again,
on CP/M because of the constrained resources (and lack of speed) such actions
might be noticed by the user.

  And in my experience [1] most viruses would infect executable programs and
it wasn't until Windows, when Microsoft went out of its way to find any form
of code in any file type and execute it, did viruses start infecting other
types of files (at first, I didn't believe reports of viruses spreading via
JPEGs, but yup, it was true.  Thanks, Microsoft!).

  -spc 

[1] Never got any in my day-to-day activities, but there was an outbreak
at the university I attended in the late 80s.  I managed to snag an
example and decompile it.  I have no idea what virus it was, but I
think I still have a copy in my floppy archives somewhere.


Re: Telnet was Re: G4 cube (was Re: 68K Macs with MacOS 7.5 still in production use...)

2016-09-13 Thread Sean Conner
It was thus said that the Great Christian Liendo once stated:
> Agree. It's quite easy to telnet to a port to see if you get a response.
> Do it a lot.

  The kids are using nc (netcat) these days.  It supports both TCP and UDP.  

  -spc



Re: Reproduction micros

2016-07-25 Thread Sean Conner
It was thus said that the Great Peter Corlett once stated:
> 
> Unsurprisingly, the x86 ISA is brain-damaged here, in that some instructions
> (e.g. inc") only affect some bits in EFLAGS, which causes a partial register
> stall. The recommended "fix" is to avoid such instructions.

  I'm not following this.  On the x86, the INC instruction modifies the
following flags: O, S, Z, A and P.  So okay, I need to avoid INC to prevent
a partial register stall, therefore, I need to use ADD.  Let me check ...
hmm ... ADD modifies the following: O, S, Z, A, P and C.  So now I need to
avoid ADD as well?  I suppose I could use LEA but then there goes my bignum
addition routine ... 

  -spc (Or am I missing something?)



Re: VMS stability back in the day (was Re: NuTek Mac comes)

2016-07-14 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Thu, 14 Jul 2016, Sean Conner wrote:
> > What I've read about VMS makes me think the networking was incredible. 
> 
> Big Fat Disclaimer: I know very little about VMS. I'm a UNIX zealot.
> 
> I work with a lot of VMS experts and being around them has taught me a lot 
> more about it than I ever thought to learn. I respect the OS a lot and I 
> agree with Mouse about parts of it still being object lessons to other 
> OSes. I don't see any point in "UNIX vs VMS" which I gather was a big 
> bruhaha back in the 1990s.
> 
> HOWEVER...
> 
> Personally, given the mess of MultiNet, TCP/IP Services, and TCPWare, I 
> wouldn't make that statement about networking *at all*. However, maybe you 
> are talking about DECnet. I don't know much about DECnet except that it's 
> very proprietary and it's got a bunch of "phases" (versions) that are 
> radically different. Some are super-simple and not even routable, and 
> others are almost as nasty as an OSI protocol stack.

  I never did much with the networking on VMS (being a student, all I really
did with the account was a few Pascal programs for Programming 101 and
printing really large text files since I didn't want to waste the my printer
paper).  All I really have to go on is what I've read about it, and it was
probably DECnet stuff (clustering, etc) that made the network invisible.

  Yes, there are other systems out there that may have similar functionality
(QNX is one I did work with, and loved it).

> > But having used VMS (as a student), the command line *sucked* (except 
> > for the help facility---that blows the Unix man command out of the 
> > water).
> 
> The DCL command line is very foreign to me. I've seen people rave about 
> how regular and predictable things are in DCL, and I've seen some evidence 
> of that. I've also seen some spot-on criticisms of DCL scripting vis-a-vis 
> shell scripting and that's also accurate.

  My complaint was that for simple things (like changing a directory) it was
very verbose compared to Unix (or even MS-DOS).

  But I absolutely *love* the assembly language of the VAX.  It's a
wonderful instruction set.

  -spc (Not that I did much VAX assembly ... )





Re: NuTek Mac comes

2016-07-14 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >> All the now-nostalgicized-over '80s OSes were pretty horribly
> >> unstable: [...]
> 
> Personally - I went through my larval phase under it - I'd cite VMS as
> a counterexample.  Even today I think a lot of OSes would do well to
> learn from it.  (Not that I think it's perfect, of course.  But I do
> think it did some things better than most of what I see today.)

  What I've read about VMS makes me think the networking was incredible. But
having used VMS (as a student), the command line *sucked* (except for the
help facility---that blows the Unix man command out of the water).

  -spc



Re: NuTek Mac comes

2016-07-13 Thread Sean Conner
It was thus said that the Great Eric Christopherson once stated:
> On Wed, Jul 13, 2016 at 12:39 AM, Chris Hanson 
> wrote:
> 
> > QuickDraw was almost literally the first code running on the Mac once it
> > switched to 68K.
> >
> 
> Was there a pre-68K period in Mac development?

  Yes.  The project was originally managed by Jef Raskin and he started it
with the 6809.  Once Jobs took the project over, it switched to the 68000.

  -spc



Re: word processor history -- interesting article (Evan Koblentz)

2016-07-09 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> > I've come to the conclusion [1] that terminfo and curses aren't
> > needed any more.  If you target VT100 (or Xterm or any other
> > derivative) and directly write ANSI sequences, it'll just work.
> 
> (a) That is not my experience.

  I did acknowledge that (but it was snipped in your reply---it's the
missing footnote).

> (b) To the extent that it's true, it works only if you stick to a very
> much least-common-denominator set of sequences.  VT-100s, VT-220s,
> VT-240s, xterms, kterms, etc, each support a slightly different set of
> sequences, with in some cases (eg, DCS) slightly different semantics
> for the same basic sequences.  Assume anything more than some very
> minimal set and you are likely to find it breaks somewhere.

  Again, it's been easily fifteen years since I last used a physical
terminal, and even then, back around 2000, I only knew one other person (in
person) that owned a physical terminal like I did.  

  Today?

  Any terms I use (and, I think, what most users *NOT ON THIS LIST* use) are
xterms or derivatives of xterm.  

  I've also checked the xterm use of DCS.  I *still* don't understand where
you would use those particular sequences.

  I've also come across plenty of libraries and modules (for various
langauges) that use raw ANSI sequences to color things when they
"technically" should be using the Termcap Sf and Sb capabilities---those
scofflaws!  Touting non-portable behavior like that!

> > It's a few lines of code to get the current TTY (on any modern Unix
> > system) into raw mode in order to read characters [2].
> 
> "Raw mode" has been ill-defined since sgtty.h gave way to termios.h.
> Raw mode usually means something like -icanon -isig -echo -opost, and
> for lots of purposes you don't need to go that far; -icanon with min=1
> time=0 is enough for anything that doesn't want to read
> usually-signal-generating characters as data.
> 
> > [2] It's six lines to get an open TTY into raw mode,
> 
> system("stty raw");
> 
> :-)
> 
> Let's see.
> 
> struct termios o, n; tcgetattr(fd,&o); n=o; cfmakeraw(&n); 
> tcsetattr(fd,TCSANOW,&n);

  If I found that in any code I had to maintain, I'd reject that line as the
unmaintainable mess that it is.  Personally, I use:

  struct termios old;
  struct termios raw;
  int            fh;

  fh = open("/dev/tty",O_RDWR);
  tcgetattr(fh,&old);
  raw = old;
  cfmakeraw(&raw);
  raw.c_cc[VMIN]  = 1;
  raw.c_cc[VTIME] = 1;
  tcsetattr(fh,TCSANOW,&raw);

(I didn't include variable declarations or obtaining the file handle to the
TTY device in my initial message).

  -spc (Fraktur?  Really?  Fraktur?  What company had enough blackmail
material to get Fraktur part of the ECMA-48 standard?)



Re: word processor history -- interesting article (Evan Koblentz)

2016-07-08 Thread Sean Conner
It was thus said that the Great Chuck Guzis once stated:
> 
> On occasion, I still use an editor that I wrote for CP/M and later
> ported to DOS.  11KB and it has lots of features that are peculiar to my
> preferences.  I'd thought about porting it to Linux, but currently, it's
> still in assembly and dealing with terminfo or curses is not something
> that I look forward to.  So I use Joe.

  I've come to the conclusion [1] that terminfo and curses aren't needed any
more.  If you target VT100 (or Xterm or any other derivative) and directly
write ANSI sequences, it'll just work.  It's a few lines of code to get the
current TTY (on any modern Unix system) into raw mode in order to read
characters [2].

  -spc (Of course, then you have to deal with escape sequences, which can
get messy ... )

[1] Bias most likely from my own usage.  Mileage may vary here on this
list where all sorts of odd-ball systems are still in use 8-P

[2] It's six lines to get an open TTY into raw mode, one line to restore
upon exit.  Add in a few more lines to handle SIGWINCH (window
resize).  *Much* easier than dealing with curses.
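Packaging footnote [2] up, one way it could look (assumptions: glibc-style cfmakeraw(), a /dev/tty to open; the VMIN/VTIME choices and the SIGWINCH flag are one approach, not the only one):

```c
#define _DEFAULT_SOURCE           /* for cfmakeraw() on glibc */
#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>

static struct termios        g_old;
static int                   g_fh = -1;
static volatile sig_atomic_t g_resized;

static void restore_tty(void)             /* the one-line restore */
{
  if (g_fh >= 0)
    tcsetattr(g_fh, TCSANOW, &g_old);
}

static void on_winch(int sig)             /* window was resized */
{
  (void)sig;
  g_resized = 1;                          /* poll this in the main loop */
}

/* Put /dev/tty into raw mode; restore automatically at exit.
   Returns the file descriptor, or -1 on error. */
int tty_raw(void)
{
  struct termios raw;

  g_fh = open("/dev/tty", O_RDWR);
  if (g_fh < 0)                    return -1;
  if (tcgetattr(g_fh, &g_old) < 0) return -1;

  raw = g_old;
  cfmakeraw(&raw);
  raw.c_cc[VMIN]  = 1;                    /* read() blocks for one byte */
  raw.c_cc[VTIME] = 1;

  if (tcsetattr(g_fh, TCSANOW, &raw) < 0) return -1;

  atexit(restore_tty);
  signal(SIGWINCH, on_winch);
  return g_fh;
}
```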



Re: word processor history -- interesting article (Evan Koblentz)

2016-07-08 Thread Sean Conner
It was thus said that the Great Paul Berger once stated:
> >
> The DOS editor I really like  was originally call PE and an enhanced 
> version "E" was shipped with later version of PC-DOS, there are also 
> some clones of the editor floating around as well.  I still use this 
> editor regularly because of its very flexible ways of selecting and 
> manipulating text.

  I used PE 1.0 for *years* as my editor, and only found two issues with it:

1) it only supported lines of 255 characters or less
2) it didn't handle files where lines didn't end with CRLF

That's it.  I was even able to edit files that exceeded the RAM of the
machine (I didn't do it often since it was sluggish but it could handle it).

  -spc (I just wish I could have found the source code to it, but alas ... )



Some questions about the MC6839

2016-06-27 Thread Sean Conner

  So Motorola apparently never produced the MC6839, a ROM containing
position independent 6809 code for implementing (as far as I can see) IEEE
754 Draft 8.  Motorola *did*, however, release the resulting binary into
(from what I understand) the Public Domain [1] but I've yet to find the
actual source code, which would solve my current problem.

  I'm playing around with the code in an MC6809 emulator [2] and trying to
use it (getting my retro-software fix in as it were).  It works---not as
accurate as today's stuff, but close enough and it supports single and
double precision.  The current issue I have is with the FMOV opcode
(register entry) described as:


-------------------------------------------------------------------------
|Function|Opcode|  Register entry conditions   | Stack entry conditions
-------------------------------------------------------------------------
| FMOV   | $1A  | U =  precision parameter word| push arg
|        |      | Y -> argument                | push precision param word
|        |      | D -> fpcb                    | push ptr to fpcb
|        |      | X -> result                  | call FPO9
|        |      |                              | pull result
-------------------------------------------------------------------------

For moves, U contains a parameter word describing the size of the
source and destination arguments.  The bits are as follows, where
the size is as defined in the fpcb control byte

Bits 0-2  : Destination size
Bits 3-7  : unused
Bits 8-10 : Source size
Bits 11-15: unused

  It's not clear if U should contain the actual parameter value, or a
pointer to the parameter value.  It just doesn't seem to work no matter how
I code it.  Anyone have any clue?

  -spc (I'm at a loss here ... )

[1] Available in the file fpo9.lzh here
https://ftplike.com/browser/os9archive.rtsi.com/OS9/OS9_6X09/PROG/

[2] I wrote one:  https://github.com/spc476/mc6809
Not much documentation I'm afraid.


Some questions about the MC6839

2016-06-27 Thread Sean Conner
(My original message to cctech has yet to appear.  I thought I might try the
cctalk list).

  While Motorola never shipped the MC6839 [1], the binary is available [2]
and I've been playing around with it [3].  While it's not producing the
exact same results as I get on a more modern machine, it appears to be
"close enough" for me to be happy with it.  But I am having one issue that I
can't figure out.

  The documentation for the FMOV operation says:

FMOVMove (or convert) arg1 -> arg2.  This function is useful for
changing precisions (e.g. single to double) with full
exception processing for possible overflow and underflow.

  Okay.  And to call it [4]:


-------------------------------------------------------------------------
|Function|Opcode|  Register entry conditions   | Stack entry conditions
-------------------------------------------------------------------------
| FMOV   | $1A  | U =  precision parameter word| push arg
|        |      | Y -> argument                | push precision param word
|        |      | D -> fpcb                    | push ptr to fpcb
|        |      | X -> result                  | call FPO9
|        |      |                              | pull result
-------------------------------------------------------------------------

For moves, U contains a parameter word describing the size of the
source and destination arguments.  The bits are as follows, where
the size is as defined in the fpcb control byte

Bits 0-2  : Destination size
Bits 3-7  : unused
Bits 8-10 : Source size
Bits 11-15: unused

  And the size bits are defined as:

111 = reserved
110 = reserved
101 = reserved
100 = extended - round result to double
011 = extended - round result to single
010 = extended - no forced rounding
001 = double
000 = single

  It appears that to convert from single to double, I would set U to $0001,
but the results are *so* far out of whack it's not even funny.  I've tried
setting U to point to the value $0001 and that doesn't work. I've tried
shifting the bits (because in the FPCB they're the upper three bits) and
that doesn't work.  I've tried reversing the registers and that doesn't
work.  Does anyone have the actual source code [5]?  Or know what I might be
doing wrong?

  -spc

[1] A ROM with position independent 6809 object code that conforms (to
what I can find) with IEEE 754 Draft 8.

[2] Available in the file fpo9.lzh here
https://ftplike.com/browser/os9archive.rtsi.com/OS9/OS9_6X09/PROG/

[3] Using a 6809 emulator library I wrote: https://github.com/spc476/mc6809
Not much documentation I'm afraid.

[4] Register entry: ROM base address + $003D
Stack entry:ROM base address + $003F

[5] I'm led to believe that Motorola released the code into the public
domain.



Re: thinking of the "ultimate" retro x86 PCs - what bits to seek/keep ?

2016-06-18 Thread Sean Conner
It was thus said that the Great Maciej W. Rozycki once stated:
> On Thu, 2 Jun 2016, Sean Caron wrote:
> 
> > Oh, man, that brings back memories. Trying to bang Linux onto a 386SX-16 
> > with
> > 4 Meg RAM and some puny little hard drive ... My first NAT box! It was 
> > pretty
> > excruciating to use, LOL. I bet the throughput could be figured in kpps... 
> > ;)
> 
>  I had an experimental early Linux Ethernet bridge installation on a 
> 386SX-16 PC with 2MB of RAM IIRC and 5 NE2000 clones (as many as there 
> were ISA slots left after filling in an HGC adapter for the console and a 
> multi-I/O adapter for the hard disk), driving a network of some 200 PCs.

  I managed to install Linux on a 486 based Laptop with 4M RAM and 120M
harddrive.  It was ... interesting.  Some of my notes at the time:

http://boston.conman.org/1999/12/13.4
http://boston.conman.org/1999/12/15.1
http://boston.conman.org/2000/01/06.2

  I don't recall if I had networking (of any kind) installed or not.  But I
suspect I still have that laptop in storage ...

  -spc



Re: Quadra 660AV what's with the "PowerPC" label?

2016-06-16 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> It's a modern init. Most of panic is just headless running around. No,
> it's not an old-fashioned simplistic Unix utility. Hey, newsflash,
> neither is GNOME, neither is KDE. Neither is much of modern Unix.

  I'm not a fan of systemd, but it's not because SysV init is better, but
because of the increasing scope of systemd.  

  Okay, fine, it's an init that tracks daemons and will restart them
automatically if they stop (or crash, or whatever).  That's nice.  I don't
have a problem with that.  It can parallelize the startup daemons.  Okay ...
I never had an issue with how long a system takes to come up as I don't
really shut any of the computers off (even my desktop boxes).  But hey,
okay.  

  But no more syslog (okay, I know that's not technically true, but syslog
becomes a 2nd class citizen here).  No, Lennart decided to use a binary-only
logging system that's mostly undocumented (or rather, it's documented in
code that is subject to change from version to version) and there's no need
to forward the logs to another system---use that 2nd class syslog for that
crap if you need it.

  But that's it.  systemd only requires journald to run.  Oh, let's use dbus
for IPC because ... well ... I have no idea what, exactly, dbus brings to
the game that any of the other IPC mechanisms that currently exist in Unix
fail to have, other than being a usermode program and yet another dependency
from what I understand was mostly used as an IPC mechanism for the desktop,
but now required for servers as well.

  Linus is *still* fighting the systemd guys because they want to force dbus
into the kernel.

  Oh, and because of the way systemd works, it takes over cgroup management. 
The Linux kernel provides mechanism, not policy, but now, we have systemd
forcing a cgroup policy on everything.  Okay, perhaps systemd is the first
program to actually *use* cgroups but if at some point in the future you
want to play around with it, well ... sorry.  systemd is in control there.

  Login management is now the domain of systemd.

  Oh, and don't forget the little dustup over the "debug" kernel command
line:

https://www.phoronix.com/scan.php?page=news_item&px=MTY1MzA

  Over time, systemd is taking over more and more of the system.  And that
would be fine if it were just RedHat (and RedHat derived) distributions. 
But no, Lennart is, by sheer force of will, forcing *all* Linux
distributions to use systemd.  Hell, it's now trying to force systemd-specific
behavior in applications:

https://github.com/tmux/tmux/issues/428

Never mind that said application can run on other Unix systems.

  Oh, and forget running GNome on any other system than Linux with systemd.

  THAT is my problem with systemd.  It's mandating a $#!?load of policy and
dependencies with largely undocumented APIs.

> If people wanted to keep to the simplicity of Unix and bring it into a
> more modern world, they had Plan 9. Plan 9 brought networking & the
> GUI into the Unix everything-is-a-file model.
> 
> But everyone ignored it, pretty much. Some wrinkles got copied later.
> 
> And Plan 9 went one better, and (mostly) eliminated that nasty old
> unsafe mess, C, and it eliminated native binaries and brought
> platform-neutral binaries to the game.

  Um ... what?  Plan 9 is written in C.  And they still use binaries, just
fat binaries (that is, the binary contains multiple code and data segments
for each supported architecture).  This isn't just limited to Plan 9---the
Mac did this as well.

> Andy Tanenbaum was right. Linux was obsolete in 1991. A new monolithic
> Unix back then? You're kidding. No. It's a rewrite of the same old
> 1960s design, with the same 3 decades' worth of crap on top.
> 
> Today, it's mainly an x86 OS for servers and an ARM OS for
> smartphones, with a few weirdos using it for workstations. So stop it
  ^^^

  Wow!  Nice insult there.  Care to add more?

  And for the record, I still use Linux as (one) of my desktops machines.

> with the crocodile tears about "it's all text" and so on. Move on.
> It's over. It was over before I left Uni and I'm an olde pharte now.

  And you want an older Mac ... why?  System 9 is dead.  Gone on.  Pining
for the fjords!  Move on, man!  Move on!

> > Yes. IRIX is dead as a doornail. Also, with the way it died, I'd give
> > about 1000:1 odds of any legal form of IRIX ever re-surfacing. However, I
> > noticed that the source is floating around several places. Maybe some
> > illegal/hobbyist/illicit stuff might eventually see the light, but I doubt
> > it. It seems to me even the forums on Nekochan are slowing down. I still
> > use it and love it, and I have no problems securing it for "real world"
> > stuff. However, it's nothing but a hobby, nowadays.
> 
> Was it really different enough? What did it do other Unices don't?

  It was SysV with kickass graphics hardware.  Suns were BSD (at the time,
prior to 

Re: Quadra 660AV what's with the "PowerPC" label?

2016-06-16 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 14 June 2016 at 01:56, Sean Conner <s...@conman.org> wrote:

> >   What do you feel is still missing from OS-X today?  About the only thing I
> > can think of is the unique file system, where each file had a data and a
> > resource fork.
> 
> * The clean, totally CLI-less nature of it. Atari ST GEM imitated this,
> but it had the DOS-like legacy baggage of drive letters etc.

  So did the Amiga and it didn't have the baggage of drive letters.

  Okay, so it had drive names.  Instead of

A:\path\to\file

it had:

DF0:path/to/file

But you also had logical drive names.  Give the drive the name "Fred" and
you could reference a file as:

Fred:path/to/file

A nice side effect is that if there was no disk with the name of "Fred"
installed, AmigaOS would pop up a dialog box asking for the user to insert
the disk named "Fred".  It wouldn't matter what physical drive you popped
the disk into, AmigaOS would find it.  And, if you copied the files off the
disk Fred to the harddrive, say:

DH1:applications/local/fred

you could do

assign Fred=DH1:applications/local/fred

and there you go.  I find that nicer than environment variables in that it's
invisible to applications---the OS handles it for you.  And while on Unix,
the shell will expand environment variables, individual applications (say,
an editor) vary on support for such expansions.

  Personally, I like CLIs, but I'm used to them from the start.  And for
some work flows, I find it's faster and easier than a graphical system.  

> >> I wish the Star Trek project had come to some kind of fruition.
> >>
> >> https://en.wikipedia.org/wiki/Star_Trek_project
> >
> >   Reading that, it sounds like it would have been much like early
> > Windows---an application that would run on top of MS-DOS (or in this case,
> > DR-DOS).
> 
> My impression is that DR-DOS would have been a bootloader, little more.

  Then why even bother with DR-DOS?  

  -spc 



Re: Quadra 660AV what's with the "PowerPC" label?

2016-06-13 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> >  System 9.x and before are
> > "something different" for me, a break from my mostly hardcore CLI
> > existence.
> 
> Yes, true. An OS I still miss, for all its instability and quirkiness.
> I'd love to see a modern FOSS recreation, at least of the concept and
> the style, even if it was binary-incompatible.

  What do you feel is still missing from OS-X today?  About the only thing I
can think of is the unique file system, where each file had a data and a
resource fork.  

> I wish the Star Trek project had come to some kind of fruition.
> 
> https://en.wikipedia.org/wiki/Star_Trek_project

  Reading that, it sounds like it would have been much like early
Windows---an application that would run on top of MS-DOS (or in this case,
DR-DOS).  

  -spc



Re: thinking of the "ultimate" retro x86 PCs - what bits to seek/keep ?

2016-06-03 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Thu, 2 Jun 2016, TeoZ wrote:
> > The ultimate gaming 486 would have an EISA+VLB motherboard.
> 
> Yes, I would agree on that. However, since I'm mostly interested in 
> running older Unix variants and DOS, games aren't at the top of my value 
> system. Don't get me wrong, I love games, and I'd surely have a few loaded 
> with DOS. However, I'm looking for something special. I have this foggy 
> memory of a small, white, NEC (or maybe it was NCR, or or or... crap. I 
> just can't remember) slimline desktop machine that was a 486 and had a 
> SCSI2 interface right on the mobo and had an external SCSI2 header, too. I 
> know it had two or three expansion slots, but I didn't get to pop open the 
> box to look at what kind of slots they were. I've been google image 
> searching for a while trying to find it again. I saw them while doing some 
> contract job back in the 90's. The green LED on a SCSI terminator caught 
> my eye (as well as the fact that I liked the case design).

  Sounds somewhat similar to the server (email and web, on the public
Internet co-located) I used up through 2004-5.  It was an NCR-3230 system
and was a 486.  I used a similar unit at home as a router (fun times getting
three network cards working on the thing).

http://www.flummux.org/images/tower/index.html

  -spc (I think I had that for five, six years.  And the few times I had to
reboot it was due to moving it, or someone tripping over the power
cord. It was a robust little box.)


Re: Keyboards and Mice (was Model M, NEC ProSpeed)

2016-06-01 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 1 June 2016 at 17:48, Chuck Guzis  wrote:
> > I'm keyboard-oriented--Ctrl-V, where I've copied using Ctrl-X or
> > Control-C .
> 
> The neat thing about middle-click-to-paste-selection is that it's *as
> well as* the clipboard C
> 
> So you can, for instance, highlight a URL, hit Ctrl-C to copy,
> highlight the page title, switch windows, middle-click to paste the
> title, then Ctrl-V to paste the URL too.
> 
> *All in a single operation.*
> 
> It's /very/ handy and worth learning if only for that reason alone.

  An interesting thing about Firefox on Linux [1] is that once you select
the text of a webpage, you have some interesting options:

TIMESTAMP
TARGETS
MULTIPLE
text/html
text/_moz_htmlcontext
text/_moz_htmlinfo
UTF8_STRING
COMPOUND_TEXT
TEXT
STRING
text/x-moz-url

  Those are the selection types available via X Windows [2].  "text/html"
returns the HTML of the selected portion of the page.  "text/x-moz-url" will
return the URL of the page.  "UTF8_STRING" will return the selected text as
UTF-8 encoded data.  

  Because of this, I wrote a command line program to query the current
selection (assumed to be a webpage) and obtain not only the selected
portion, but the URL and from there, request the page to extract the title
element [3].  All in a single command.

  -spc (The X Selection method is quite flexible)

[1] Or used to be; I haven't done this since I switched to using Mac
OS-X for web browsing.

[2] I won't go into depth of how this works right now---just be aware
that you can chose what type of data to obtain from the selection
owner.

[3] A typical operation when blogging about a web page.



Re: Keyboards and the Model M (was Re: NEC ProSpeed 386)

2016-05-31 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Tue, 31 May 2016, Peter Coghlan wrote:
> > > It might be interesting to poll the list to see who's still using an IBM
> > > Model M keyboard on their x86 box.  I am.
> > > Windows key?  What Windows key? ;)
> >
> > x86 box?  What x86 box? ;)
> 
> Hehe, I use my Model M mostly with SGI's that have PS/2 ports. So, I'm 
> right there with you. 

  I only use Model M keyboards.  I have one for my Linux box, one for my
Mac, and one for the office Mac.  I have about five more sitting in the
closet of the home office on standby, and I think I have a box of keyboards
in storage.

  The only Model M I'm upset over not getting [1] is the one with the APL
symbols on the keys.

  -spc

[1] I was at a Ham fest about a decade ago and my friend snapped up that
keyboard before I even saw it.  Gr ...


Re: strangest systems I've sent email from

2016-05-24 Thread Sean Conner
It was thus said that the Great Fred Cisin once stated:
> On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:
> >Because the 808x was a 16-bit processor with 1MB physical addressing.  I
> >would argue that for the time 808x was brilliant in that most other 16-bit
> >micros only allowed for 64KB physical.
> 
> Whether 8088 was an "8 bit" or "16 bit" processor depends heavily on how 
> you define those.
> Or, you could phrase it, that the 8 bit processors at the time handled 
> 64KiB of RAM.  The 808x still could see only 64KiB at a time, but let you 
> place that 64kiB almost anywhere that you wanted in a total RAM space of 
> 1MiB, and let you set 4 "preset" locations (CS, DS, SS, ES).  There were 
> some instructions, such as MOV, that could sometimes operate with 2 of 
> those presets.
> Thus, they expanded a 64KiB RAM processor to 1MiB, with minimal internal 
> changes.

  To further explain this: the 8086 (and 8088) was internally a 16-bit CPU. 
Since you could only address 64K, Intel used four "segment" registers to get
around this limit.  The four registers are CS, DS, ES and SS, and were the
upper 16 bits of the 20-bit address [1].  A physical address was calculated
as:

    +-----------------+
    | 16-bit segment  | 0 0 0 0    (shifted left 4 bits)
    +-----------------+

  +      +-----------------+
         |  16-bit offset  |
         +-----------------+
  =

    +----------------------+
    | 20-bit physical addr |
    +----------------------+

  Instructions were read from CS:IP (CS segment, IP register); most data
reads and writes went through DS:offset, with some exceptions.  Pushes and pops
to the stack went to SS:SP and reads/writes with the BP register also used
the SS segment (SS:BP).  The string instructions used DS:SI as the source
address and ES:DI as the destination address.  And you could override the
default segment for a lot of instructions.  So:

mov ax,[bx] -- SRC address is DS:BX
mov es:[bx],ax  -- DEST address is ES:BX

  Technically, you could address as much as 256K without changing the
segment registers.

  I got used to this, but I still preferred programming on the 6809 (8-bit
CPU) and 68000.  *Much* nicer architectures.

  -spc (And the 80386 introduced paging to this mess ... )

[1] Starting with the 80286, in protected mode, they are treated
differently and no longer point to a physical address.  But that's
beyond the scope of *this* message.


Re: strangest systems I've sent email from

2016-05-24 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:
> > Because the 808x was a 16-bit processor with 1MB physical addressing.  I
> > would argue that for the time 808x was brilliant in that most other 16-bit
> > micros only allowed for 64KB physical.
> 
> Er, hang on. I'm not sure if my knowledge isn't good enough or if that's a 
> typo.
> 
> AFAIK most *8* bits only supported 64 kB physical. Most *16* bits
> (e.g. 68000, 65816, 80286, 80386SX) supported 16MB physical RAM.
> 
> Am I missing something here?

  It really depends on how you view a CPU, from a hardware or software
perspective.  From a software perspective, a 68000 was a 32-bit
architecture.  From a hardware perspective, the 68000 had a 16-bit bus and
24 physical address lines and I'm sure at the time (1979) that those
hardware limits were more due to costs and manufacturing ability (a 68-pin
chip was *huge* at the time) (furthermore, the 68008 was still internally a
32-bit architecture but only had an 8-bit external data bus---does this mean
it's an 8-bit CPU?).  The 68020 (I'm not sure about the 68010) had a 32-bit
physical address bus.

  You are right in that most 8-bit CPUs supported only 16 bits for a
physical address but there were various methods to extend that [1] but
limited to 64k at a time.

> I always considered the 8088/8086 as a sort of hybrid 8/16-bit processor.

  Again, internally, the 8088 was a 16-bit architecture but with an 8-bit
external data bus (and a 20-bit physical address space).  The 8086 had a
full 16-bit external data bus (and still 20-bit address space) and thus, was
a bit more expensive (not in CPU cost but more in external support with the
motherboard and memory bus).  The 80286 still had an external 16-bit bus but
had 24-physical address lines (16MB).

  The 8088/8086 could address 1MB of memory.  The reason for the 640k limit
was due to IBM's implementation of the PC, chopping off the upper 384K for
ROM and video memory.  MS-DOS could easily use more, but it had to be a
consecutive block.  Some PCs in the early days of the clones did allow as
much as 700-800K of RAM for MS-DOS, but they weren't 100% IBM PC compatible
(BIOS yeah, but not hardware wise) and thus, were dead ends at the time.

  -spc

[1] Bank switching was one---a special hardware register (either I/O
mapped or memory mapped, depends upon the CPU) to enable/disable
banks of memory.



Re: Front panel switches - what did they do?

2016-05-24 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Tue, 24 May 2016, william degnan wrote:
> > Here's a power point pres I did at VCF-E4, this will get you started. 
> > Using Altair 680b front panel in basic terms is covered a few slides in. 
> > http://vintagecomputer.net/vcf4/How_to_Session/
> 
> There are some nice clean photos in that presentation. So, it was binary 
> with some hexadecimal addressing. I like the slide entitled "How to test 
> Machine Language Using a Program Listing Using Toggle Switches". That's 
> pretty hard core. I'm surprised they didn't at least use component 
> displays with LEDs to show the values rather than reading it straight off 
> some blinkenlights. Maybe those weren't around yet or were too expensive.

  Work with binary, octal and/or hexadecimal enough, and you'll learn how to
sight read binary patterns.  The conversion is pretty simple:

  OCTAL HEX
- - -   0   - - - - 0   
- - *   1   - - - * 1
- * -   2   - - * - 2
- * *   3   - - * * 3
* - -   4   - * - - 4
* - *   5   - * - * 5
* * -   6   - * * - 6
* * *   7   - * * * 7
* - - - 8
* - - * 9
* - * - A (or 10)
* - * * B (   11)
* * - - C (   12)
* * - * D (   13)
* * * - E (   14)
* * * * F (   15)

  So a byte value of

* - - * - * * -

is hexadecimal 96.  In octal, it would be 226.

  -spc (I'll leave the decimal value to the reader)


Re: strangest systems I've sent email from

2016-05-21 Thread Sean Conner
It was thus said that the Great ben once stated:
> On 5/20/2016 2:58
> 
> >
> >[4]  Say, a C compiler an 8088.  How big is a pointer?  How big of an
> > object can you point to?  How much code is involved with "p++"?
> 
> How come INTEL thought that 64 KB segments ample? I guess they only used
> FLOATING point in the large time shared machines.

  The industry at the time was wanting larger CPUs than 8 bit.  Intel had an
existing 8-bit design, the 8080 and to fill demand, Intel had a few choices. 
It could break with any form of compatibility (object or source) and start
over with a clean slate [1].  Or they could keep some form of compatibility
and Intel went with (more or less) source compatibility.  You could
mechanically translate 8080 code into 8086 code with a high assurance it
would work, and thus customers of Intel could leverage the existing 8080
(and Z80) source base.

  And that's how you end up with a bizarre segmented 16-bit architecture.

  -spc

[1] Motorola took this approach when making the 68000.  It's nothing at
all like the 6800.


Re: 1's comp

2016-05-21 Thread Sean Conner
It was thus said that the Great ben once stated:
> On 5/20/2016 7:19 PM, Sean Conner wrote:
> 
> >>
> >>Hehe, what is a long long? Yes, you are totally right. Still, I assert
> >>that C is still the defacto most portable language on Earth. What other
> >>language runs on as many OS's and CPUs ? None that I can think of.
> >
> >  A long long is at least 64-bits long.
> 
> Only if you get rid of char pointers you are portable.

  I don't understand this statement.

> I like 1's complement because it handles shifting properly.

  Again, I'm not sure how this follows.  Right shifting signed quantities is
undefined because different CPUs handle it differently---some copy the sign
bit, some don't.  I don't see how being 1's complement fixes this.

  -spc


Re: strangest systems I've sent email from

2016-05-21 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >>>   -spc (Wish the C standard committee had the balls to say "2's
> >>>   complement all the way, and a physical bit pattern of all 0s is a
> >>>   NULL pointer" ... )
> >> As far as I'm concerned, this is different only in degree from `Wish
> >> the C standard committee had the balls to say "Everything is x86".'.
> 
> > First off, can you supply a list of architectures that are NOT 2's
> > complement integer math that are still made and in active use today?
> 
> > Second, are there any architectures still commercially available and
> > used today where an all-zero bit pattern for an address *cannot* be
> > used as NULL?
> 
> What's the relevance?  You think the C spec should tie itself to the
> idiosyncracies of today's popular architectures?

  One more thing I forgot to mention:  Java integer ranges are 2's
complement, so it must assume 2's complement implementation.  I noticed that
Java is *also* available on the Unisys 2200, so either their implementation
of Java isn't quite kosher, or because the Unisys 2200 is emulated anyway,
they can "get by" with Java since the emulation of the Unisys is done on a
2's complement machine.

  -spc



Re: strangest systems I've sent email from

2016-05-21 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >>>   -spc (Wish the C standard committee had the balls to say "2's
> >>>   complement all the way, and a physical bit pattern of all 0s is a
> >>>   NULL pointer" ... )
> >> As far as I'm concerned, this is different only in degree from `Wish
> >> the C standard committee had the balls to say "Everything is x86".'.
> 
> > First off, can you supply a list of architectures that are NOT 2's
> > complement integer math that are still made and in active use today?
> 
> > Second, are there any architectures still commercially available and
> > used today where an all-zero bit pattern for an address *cannot* be
> > used as NULL?
> 
> What's the relevance?  You think the C spec should tie itself to the
> idiosyncracies of today's popular architectures?

  I'd wager that most C code is written *assuming* 2's complement and 0 NULL
pointers (and byte addressable, but I didn't ask for that 8-).  That the C
Standard is trying to cover sign-magnitude, 1's complement and 2's
complement leads to the darker, scarier, dangerous corners of C programming.

  I've been reading some interesting things on this.  

"Note that removing non-2's-complement from the standard would
completely ruin my stock response to all "what do you think of this
bit-twiddling extravaganza?" questions, which is to quickly confirm
that they don't work for 1s' complement negative numbers. As such
I'm either firmly against it or firmly in favour, but I'm not sure
which."

...

"[W]rite a VM with minimal bytecode and that uses 1s' complement
and/or sign-magnitude. Implement a GCC or LLVM backend for it if
either of them has nominal support for that, or a complete C
implementation if not. That both answers the question ("yes, I do
now know of a non-2's-complement implementation") and gives an
opportunity to file considered defect reports if the standard does
have oversights. If any of the defects is critical then it's
ammunition to mandate 2's complement in the next standard."


http://stackoverflow.com/questions/12276957/are-there-any-non-twos-complement-implementations-of-c?lq=1

  Personally, I would *love* to see such a compiler (and would actually use
it, just to see how biased existing code is).  From reading this
comp.lang.c++.moderated thread:


https://groups.google.com/forum/?hl=en=en#!topic/comp.lang.c++.moderated/gzwbsrZhix4

I'm not even sure 

size_t foo = (size_t)-1;

is legal, or even does what I expect it to do (namely, set foo to the
largest size_t value possible, pre-C99).

  Now, I realize this is the Classic Computers Mailing list, which include
support for all those wonderful odd-ball architectures of the past, but
really, in my research, I've found three sign-magnitude based computers:

IBM 7090
Burroughs series A
PB 250

(the IBM 1620 was signed-magnitude, but decimal based, which the C standard
doesn't support.  And from what I understand, most sign-magnitude based
machines were decimal in nature, not binary, so they need not apply)

and a slightly longer list of 1's complement machines:

Unisys 1100/2200
PDP-1  
LINC-8
PDP-12  (2's complement, but also included LINC-8 opcodes!)
CDC 160A
CDC 6000
Electrologica X1 and X8

I won't bother with listing 2's complement architectures because the list
would be too long and not at all inclusive of all systems (but please, feel
free to add to the list of binary sign-magnitude and 1's complement
systems).  Of the 1's complement listed, only the Unisys is still in active
use with a non-trivial number of systems but not primarily emulated.

  To me, I see 2's complement as having "won the war" so to speak.  It is
far from "idiosyncratic." And any exotic architecture of tomorrow won't be
covered in the C standard because the C standard only covers three integer
math implementations:

signed magnitude
1's complement
2's complement

If ternary or qubit computers become popular enough to support C, the C
standard would have to be changed anyway.

  The initial C standard, C89, was a codification of *existing* practice,
and I'm sure IBM pressed to have the C standard support non-2's complement
so they could check off the "Standard C box."  Yes, Unisys has a C compiler
for the Unisys 2200 system and one that is fairly recent (2013).  But I
could not find out if it supported C99, much less C89.  I couldn't tell.  

  And yes, you can get a C compiler for a 6502.  They exist.  But the ones
I've seen aren't ANSI C (personally, I think one would be hard pressed to
*get* an ANSI C compiler for a 6502; it's a poor match) and thus, again,
aren't affected by what I'd like.

  I'm not even alone in this.  Again, for your reading pleasure:

Proposal for a Friendly Dialect of C

Re: strangest systems I've sent email from

2016-05-20 Thread Sean Conner
It was thus said that the Great William Donzelli once stated:
> >   First off, can you supply a list of architectures that are NOT 2's
> > complement integer math that are still made and in active use today?  As far
> > as I can tell, there was only one signed-magnitude architecture ever
> > commercially available (and that in the early 60s) and only a few 1's
> > complement architectures from the 60s (possibly up to the early 70s) that
> > *might* still be in active use.
> 
> There are probably a couple hundred Unisys 2200 systems left in the
> world (no one really knows the true number). Of course, when the C
> standards were being drawn up, there were many more, with a small but
> significant share of the mainframe market.

  Oh my!  I'm reading the manual for the C compiler for the Unisys 2200 [1]
system and it's dated 2013!  And yes, it does appear to be a 36-bit non-byte
addressable system.  

  Wow!

  I am finding chapter 14 ("Strategies for Writing Efficient Code") amusing
("don't use pointers!" "don't use loops!")  I suppose this is 1's complement
but I see nothing about that in the manual, nor do I see any system limits
(like INT_MAX, CHAR_MAX, etc).  

  -spc (Color me surprised!)

[1] https://public.support.unisys.com/2200/docs/cp15.0/pdf/78310430-016.pdf


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> 
> > 3) It's slower.  Two reasons for this:
> 
> Even to the extent this is true, in most cases, "so what"?
> 
> Most executables are not performance-critical enough for dynamic-linker
> overhead to matter.  (For the few that are, or for the few cases where
> lots are, yes, static linking can help.)

  I keep telling myself that whenever I launch Firefox after a reboot ...

> > I use the uintXX_t types for interoperability---known file formats
> > and network protocols, and the plain (or known types like size_t)
> > otherwise.
> 
> uintXX_t does not help much with "known file formats and network
> protocols".  You have to either still serialize and deserialize
> manually - or blindly hope your compiler adds no padding bits (eg, that
> it lays out your structs exactly the way you hope it will).

  First off, the C standard mandates that the order of fields in a struct
cannot be reordered, so that just leaves padding and byte order to deal
with.  Now, it may sound cavalier of me, but of the three compilers I use at
work (gcc, clang, the Solaris Sun Works thingy) I know how to get them to lay
out the structs exactly as I need them (and it doesn't hurt that the files
and protocols we deal with are generally properly aligned anyway for those
systems that can't deal with misaligned reads (generally everything *BUT*
the x86)) and that we keep everything in network byte order. [1]

  -spc 

[1] Sorry Rob Pike [2], but compilers aren't quite smart enough [3]
yet.

[2] https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html

[3] https://news.ycombinator.com/item?id=3796432


Re: strangest systems I've sent email from

2016-05-20 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >   -spc (Wish the C standard committee had the balls to say "2's
> >   complement all the way, and a physical bit pattern of all 0s is a
> >   NULL pointer" ... )
> 
> As far as I'm concerned, this is different only in degree from `Wish
> the C standard committee had the balls to say "Everything is x86".'.

  First off, can you supply a list of architectures that are NOT 2's
complement integer math that are still made and in active use today?  As far
as I can tell, there was only one signed-magnitude architecture ever
commercially available (and that in the early 60s) and only a few 1's
complement architectures from the 60s (possibly up to the early 70s) that
*might* still be in active use.

  Second, are there any architectures still commercially available and used
today where an all-zero bit pattern for an address *cannot* be used as NULL? 
Because it comes across as strange where:

char *p = (char *)0;

  is legal (that is, p will be assigned the NULL address for that platform
which may not be all-zero bits) whereas:

char *p;
memset(&p,0,sizeof(p));

isn't (p is not guaranteed to be a proper NULL pointer) [1].

  My wish covers yes, the x86, but also covers the 68k, MIPS, SPARC,
PowerPC and ARM.  Are there any others I missed?

  -spc (Also, the only portable exit codes are EXIT_SUCCESS and EXIT_FAILURE
[2][3])

[1] I know *why* this happens, but try to explain this to someone who
hasn't had *any* exposure to a non-byte, non-2's complement,
non-8-32-64bit system.

[2] From my understanding, it's DEC that mandated these instead of 0 and
1, because VMS used a different interpretation of exit codes than
everybody else ...

[3] I only bring this up because you seem to be assuming my position is
"all the world's on x86" when it's not (the world is "2's complement
byte oriented machines").  And because of this, I checked some of
your C code and I noticed you used 0 and 1 as exit codes, which,
pedantically speaking, isn't portable.

Yes, I'll admit this might be a low blow here ...


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Fri, 20 May 2016, Sean Conner wrote:
> > By the late 80s, C was available on many different systems and was not 
> > yet standardized.
> 
> There were lots of standards, but folks typically gravitated toward K&R or 
> ANSI at the time. Though I was a pre-teen, I was a coder at the time.  
> Those are pretty raw and primitive compared to C99 or C11, but still quite 
> helpful, for me at least. Most of the other "standards" were pretty much a 
> velvet glove around vendor-based "standards", IMHO.

  In 1988, C had yet to be standardized.

  In 1989, ANSI released the first C standard, commonly called ANSI C or
C89.  

  I started C programming in 1990, so I started out with ANSI C pretty much
from the start.  I found I prefer ANSI C over K&R C (pre-ANSI C), because the
compiler can catch more errors.

> > The standards committee was convened in an attempt to make sense of all 
> > the various C implementations and bring some form of sanity to the 
> > market.
> 
> I'm pretty negative on committees, in general. However, ISO and ANSI 
> standards have worked pretty well, so I suppose they aren't totally 
> useless _all_ the time.
> 
> Remember OSI networking protocols? They had a big nasty committee for all 
> their efforts, and we can see how that worked out. We got the "OSI model" 
> (which basically just apes other models already well established at the 
> time). That's about it (oh sure, a few other things like X.500 inspired 
> protocols but I think X.500 is garbage *shrug* YMMV). Things like TPx 
> protocols never caught on. Some would say it was because the world is so 
> unenlightened it couldn't recognize the genius of the commisar^H^H^H 
> committee's collective creations. I have a somewhat different viewpoint.

  The difference between the two?  ANSI codified existing examples whereas
ISO created a standard in a vacuum and expected people to write
implementations to the standard.

> > All those "undefined" and "implementation" bits of C?  Yeah, competing 
> > implementations.
> 
> Hehe, what is a long long? Yes, you are totally right. Still, I assert 
> that C is still the defacto most portable language on Earth. What other 
> language runs on as many OS's and CPUs ? None that I can think of.

  A long long is at least 64-bits long.

  And Lua can run on as many OSs and CPUs as C.  

> > And because of the bizarre systems C can potentially run on, pointer 
> > arithmetic is ... odd as well [4].
> 
> Yeah, it's kind of an extension of the same issue, too many undefined grey 
> areas. In practice, I don't run into these types of issues much. However, 
> to be fair, I typically code on only about 3 different platforms, and they 
> are pretty similar and "modern" (BSD, Linux, IRIX).

  Just be thankful you never had to program C in the 80s and early 90s:

http://www.digitalmars.com/ctg/ctgMemoryModel.html

  Oh, wait a second ...


http://eli.thegreenplace.net/2012/01/03/understanding-the-x64-code-models

> > It also doesn't help that bounds checking arrays is a manual process, 
> > but then again, it would be a manual process on most CPUs [5] anyway ...
> 
> I'm in the "please don't do squat for me that I don't ask for" camp. 

  What's wrong with the following code?

p = malloc(sizeof(somestruct) * count_of_items);

  Spot the bug yet?  

  Here's the answer:  it can overflow.  But that's okay, because sizeof()
returns an unsigned quantity, and count_of_items *should* be an unsigned
quantity (both size_t) and overflow on unsigned quantities *is* defined to
wrap (it's signed quantities that are undefined).  But that's *still* a
problem because if "sizeof(somestruct) * count_of_items" exceeds the size of
a size_t, then the result is *smaller* than expected and you get a valid
pointer back, but to a smaller pool of memory than expected.

  This may not be an issue on 64-bit systems (yet), but it can be on a
32-bit system.  Correct system code (in C99) would be:

if (count_of_items > (SIZE_MAX / sizeof(somestruct)))
  error();
p = malloc(sizeof(somestruct) * count_of_items);

  Oooh ... that reminds me ... I have some code to check ... 
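That check can be wrapped up once and reused; a sketch (xmallocarray is my name for it, not a standard function, though calloc() is expected to perform an equivalent check internally):

```c
#include <stdint.h>
#include <stdlib.h>

/* Overflow-checked array allocation: refuse the request outright
   if nmemb * size would wrap around a size_t. */
void *xmallocarray(size_t nmemb, size_t size)
{
    if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;            /* product would overflow */
    return malloc(nmemb * size);
}
```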

> I know that horrifies and disgusts some folks who want GC and auto-bounds
> checking everywhere they can cram it in. Would SSA form aid with all kinds
> of fancy compiler optimizations, including some magic bounds checking? 
> Sure. However, perhaps because I'm typical of an ignorant C coder, I would
> expect the cost of any such feature would be unacceptable to some. 

  Don't discount GC though---it simplifies a lot of code.


> Also,
> there are plenty of C variants or special compilers that can do such

Re: strangest systems I've sent email from

2016-05-20 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 29 April 2016 at 19:49, Mouse  wrote:
> > 
> > It's true that C is easy to use unsafely.  However, (a) it arose as an
> > OS implementation language, for which some level of unsafeness is
> > necessary, and (b) to paraphrase a famous remark about Unix, I suspect
> > it is not possible to eliminate the ability to do stupid things in C
> > without also eliminating the ability to do some clever things in C.
> 
> I think that the key thing is not to offer people alternatives that
> make it safer at the cost of removal of the clever stuff. It's to
> offer other clever stuff instead. C is famously unreadable, and yet
> most modern languages ape its syntax.

  By the late 80s, C was available on many different systems and was not yet
standardized.  The standards committee was convened in an attempt to make
sense of all the various C implementations and bring some form of sanity to
the market.  All those "undefined" and "implementation" bits of C?  Yeah,
competing implementations.

  For instance, why is signed integer arithmetic so underspecified?  So that
it could run on everything from a signed-magnitude machine [1] to a
pi-complement machine (with e-states per bit [2]).  Also, to give C
implementors a way to optimize code [3][6].

  And because of the bizarre systems C can potentially run on, pointer
arithmetic is ... odd as well [4].

  It also doesn't help that bounds checking arrays is a manual process, but
then again, it would be a manual process on most CPUs [5] anyway ... 

  -spc (Wish the C standard committee had the balls to say "2's complement
all the way, and a physical bit pattern of all 0s is a NULL pointer"
... )

[1] I think there was only one signed-magnitude CPU commercially
available, ever!  From the early 60s!  Seriously doubt C was ever
ported to that machine, but hey!  It could be!

[2] 2.71828 ... 

[3] Often to disastrous results.  An aggressive C optimizer can optimize
the following right out:

if (x + 1 < x ) { ... }

Because "x+1" can *never* be less than "x" (signed overflow?  What's
that?)

[4] Say, a C compiler on an 8088.  How big is a pointer?  How big of an
object can you point to?  How much code is involved with "p++"?

[5] Except for, say, the Intel 432.  Automatic bounds checking on that
one.  

[6] Trapping on signed overflow is still contentious today.  While some
systems can trap immediately on overflow (VAX, MIPS), the popular CPUs
today can't.  It's not to say they can't test for it, but that's the
problem---they have to test after each possible operation.  And not
all instructions that manipulate values affect the overflow bit
(it's not uncommon on the x86, for instance, to use an LEA
instruction (which does not affect flags) to multiply a value by
small constants (like 17, say), which is faster than a full multiply
and probably uses fewer registers and prevents clogging the
pipelines).
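For completeness, the portable way is to test *before* the operation, using only defined arithmetic; a sketch:

```c
#include <limits.h>
#include <stdbool.h>

/* "if (x + 1 < x)" can legally be optimized away, because signed
   overflow is undefined; this test uses only well-defined
   comparisons against the type's limits. */
bool add_would_overflow(int x, int y)
{
    if (y > 0)
        return x > INT_MAX - y;
    return x < INT_MIN - y;
}
```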


Re: strangest systems I've sent email from

2016-05-18 Thread Sean Conner
It was thus said that the Great Fred Cisin once stated:
> On Wed, 18 May 2016, John Willis wrote:
> >Let's not forget that the bulk of the Apple Lisa operating system and
> >at least large parts of the original Macintosh system software were also
> >implemented in Pascal (though IIRC hand-translated into 68k assembly
> >language), which was a pretty big mainstream success for proving
> >Pascal as suitable for developing systems software.
> 
> At the time, it was sometimes interpreted differently:
> "Apple hired brilliant people for the project.  BUT, they had so little 
> real-world experience that they didn't even realize what a mistake it 
> would be to write an OS in a high level language.  Apple had to rewrite it 
> in assembly for the Mac, to make it fast enough to be usable.  Is Steve 
> Jobs color blind?  He keeps trying to make machines with very high 
> resolution, but balck and white, and keeps trying to seal them off from 
> the rest of the world."(- cHead)

  I thought the primary reason for the Pascal->hand assembly was more due to
memory constraints (64K ROM, 128K RAM for the original Mac) than actual
speed (although that too, was probably a concern).

  -spc



Re: strangest systems I've sent email from

2016-05-17 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> This has been waiting for a reply for too long...

  As has this ...

> On 4 May 2016 at 20:59, Sean Conner <s...@conman.org> wrote:
> >
> >   Part of that was the MMU-less 68000.  It certainly made message passing
> > cheap (since you could just send a pointer and avoid copying the message)
> 
> Well, yes. I know several Amiga fans who refer to classic AmigaOS as
> being a de-facto microkernel implementation, but ISTM that that is
> overly simplistic. The point of microkernels, ISTM, is that the
> different elements of an OS are in different processes, isolated by
> memory management, and communicate over defined interfaces to work
> together to provide the functionality of a conventional monolithic
> kernel.

  Nope, memory management is not a requirement for a microkernel.  It's a
"nice to have" but not "fundamental to implementation." Just as you can have
a preemptive kernel on a CPU without memory management (any 68000 based
system) or user/kernel level instruction split (any 8-bit CPU).

> If they're all in the same memory space, then even if they're
> functionally separate, they can communicate through shared memory --

  While the Amiga may have "cheated" by passing a reference to the message
instead of copying it, conceptually, it was passing a message (for all the
user knows, the message *could* be copied before being sent).  I still
consider AmigaOS as a message based operating system.  

  Also, QNX was first written for the 8088, a machine not known for having a
memory management unit, nor supervisor mode instructions.

> >  I think what made the Amiga so fast (even with a 7.1MHz CPU)
> > was the specialized hardware.  You pretty much used the MC68000 to script
> > the hardware.
> 
> That seems a bit harsh! :-)

  Not in light of this blog article: http://prog21.dadgum.com/173.html

  While I might not fully agree with his views, he does make some compelling
arguments and makes me think.

> But Curtis Yarvin is a strange person, and at least via his
> pseudonymous mouthpiece Mencius Moldbug, has some unpalatable views.
> 
> You are, I presume, aware of the controversy over his appearance at
> LambdaConf this year?

  Yes I am.  My view:  no one is forcing you to attend his talk.  And if no
one attends his talks, the likelihood of him appearing again (or at another
conference) goes down.  What is wrong with these people?

> >   Nice in theory.  Glacial performance in practice.
> 
> Everything was glacial once.
> 
> We've had 4 decades of very well-funded R&D aimed at producing faster
> C machines. Oddly, x86 has remained ahead of the pack and most of the
> RISC families ended up sidelined, except ARM. Funny how things turn
> out.

  The Wintel monopoly of the desktop flooded Intel with enough money to keep
the x86 line going.  Given enough money, even pigs can fly.

  Internally, the x86 line is RISC.  The legacy instructions are read in
and translated into an internal machine language that is more RISC-like than
CISC.  All sorts of crazy things going on inside that CPU architecture.

> >   The Lisp machines had tagged memory to help with the garbage collection
> > and avoid wasting tons of memory.  Yeah, it also had CPU instructions like
> > CAR and CDR (even the IBM 704 had those [4]).  Even the VAX had QUEUE
> > instructions to add and remove items from a linked list.  I think it's
> > really the tagged memory that made the Lisp machines special.
> 
> We have 64-bit machines now. GPUs are wider still. I think we could
> afford a few tag bits.

  I personally wouldn't mind a few user bits per byte myself.  I'm not sure
we'll ever see such a system.

> >  Of course we need
> > to burn the disc packs.
> 
> I don't understand this.

  It's in reference to Alan Kay saying "burn the disc packs" with respect to
Smalltalk (which I was told is a mistake on my part, but then everybody
failed to read Alan's mind about "object oriented" programming and he's
still pissed off about that, so misunderstanding him seems to be par for
course).

  It's also an oblique reference to Charles Moore, who has gone on record as
saying the ANSI Forth Standard is a mistake that no one should use---in
fact, he's gone as far as saying that *any* "standard" Forth misses the
point and that if you want Forth, write it your damn self!

> If you mean that, in order to get to saner, more productive, more
> powerful computer architectures, we need to throw away much of what's
> been built and go right back to building new foundations, then yes, I
> fear so.

  Careful.  Read up on the Intel 432, a state of the art CPU in 1981.

> Yes, tear down the foundations and rebuild, but top of the new
> replacement, much existing code could, in principle, be retained and
> re-used.

  And Real Soon Now, we'll all be running point-to-point connections on
IPv6 ... 

  -spc 


Re: strangest systems I've sent email from

2016-05-04 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 29 April 2016 at 21:06, Sean Conner <s...@conman.org> wrote:
> > It was thus said that the Great Liam Proven once stated:
> 
> >   I read that and it doesn't really seem that CAOS would have been much
> > better than what actually came out.  Okay, the potentially better resource
> > tracking would be nice, but that's about it really.
> 
> The story of ARX, the unfinished Acorn OS in Modula-2 for the
> then-prototype Archimedes, is similar.
> 
> No, it probably wouldn't have been all that radical.
> 
> I wonder how much of Amiga OS' famed performance, compactness, etc.
> was a direct result of its adaptation to the MMU-less 68000, and thus
> could never have been implemented in a way that could have been made
> more robust on later chips such as the 68030?

  Part of that was the MMU-less 68000.  It certainly made message passing
cheap (since you could just send a pointer and avoid copying the message)
but QNX shows that even with copying, you can still have a fast operating
system [1].  I think what made the Amiga so fast (even with a 7.1MHz CPU)
was the specialized hardware.  You pretty much used the MC68000 to script
the hardware.

> >   I spent some hours on the Urbit site.  Between the obscure writing,
> > entirely new jargon and the "we're going to change the world" attitude,
> > it very much feels like the Xanadu Project.
> 
> I am not sure I'm the person to try to summarise it.
> 
> I've nicked my own effort from my tech blog:
> 
> I've not tried Urbit. (Yet.)
> 
> But my impression is this:
> 
> It's not obfuscatory for the hell of it. It is, yes, but for a valid
> reason: that he doesn't want to waste time explaining or supporting
> it. It's hard because you need to be v v bright to fathom it;
> obscurity is a user filter.

  Red flag #1.

> He claims NOT to be a Lisp type, not to have known anything much about
> the language or LispMs, & to have re-invented some of the underlying
> ideas independently. I'm not sure I believe this.
> 
> My view of it from a technical perspective is this. (This may sound
> over-dramatic.)
> 
> We are so mired in the C world that modern CPUs are essentially C
> machines. The conceptual model of C, of essentially all compilers, OSes,
> imperative languages,  is a flawed one -- it is too simple an
> abstraction. Q.v. http://www.loper-os.org/?p=55

  Ah yes, Stanislav.  Yet another person who goes on and on about how bad
things are and makes oblique references to a better way without ever going
into detail and expecting everyone to read his mind (yes, I don't have a
high opinion of him either).  

  And you do realize that Stanislav does not think highly of Urbit (he
considers Yarvin as being deluded [2]).

> Instead of bytes & blocks of them, the basic unit is the list.
> Operations are defined in terms of lists, not bytes. You define a few
> very simple operations & that's all you need.

  Nice in theory.  Glacial performance in practice.

> The way LispMs worked, AIUI, is that the machine language wasn't Lisp,
> it was something far simpler, but designed to map onto Lisp concepts.
> 
> I have been told that modern CPU design & optimisations & so on map
> really poorly onto this set of primitives. That LispM CPUs were stack
> machines, but modern processors are register machines. I am not
> competent to judge the truth of this.

  The Lisp machines had tagged memory to help with the garbage collection
and avoid wasting tons of memory.  Yeah, it also had CPU instructions like
CAR and CDR (even the IBM 704 had those [4]).  Even the VAX had QUEUE
instructions to add and remove items from a linked list.  I think it's
really the tagged memory that made the Lisp machines special.

> If Yarvin's claims are to be believed, he has done 2 intertwined things:
> 
> [1] Experimentally or theoretically worked out something akin to these
> primitives.
> [2] Found or worked out a way to map them onto modern CPUs.

  List comprehension I believe.

> This is his "machine code". Something that is not directly connected
> or associated with modern CPUs' machine languages. He has built
> something OTHER but defined his own odd language to describe it &
> implement it. He has DELIBERATELY made it unlike anything else so you
> don't bring across preconceptions & mental impurities. You need to
> start over.

  Eh.  I see that, and raise you a purely functional (as in---pure
functions, no data) implementation of FizzBuzz:

https://codon.com/programming-with-nothing

> But, as far as I can judge, the design is sane, clean, & I am taking
> it that he has reasons for the weirdness. I don't think it's
> gratuitous.

  We'll have to agree to disagree 

Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Sean Conner
It was thus said that the Great Chuck Guzis once stated:
> On 04/30/2016 02:07 PM, Mouse wrote:
> 
> > Reading this really gives me the impression that it's time to fork
> > C. There seems to me to be a need for two different languages, which
> > I might slightly inaccurately call the one C used to be and the one
> > it has become (and is becoming).
> 
> 
> I vividly recall back in the 80s trying to take what we learned about
> aggressive optimization of Fortran (or FORTRAN, take your pick) and
> apply it to C.   One of the tougher nuts was the issue of pointers.
> While pointers are typed in C (including void), it's very difficult for
> an automatic optimizer to figure out exactly what's being pointed to,
> particularly when a pointer is passed as arguments or re-used.

  I believe that's what the C99 keyword "restrict" is meant to address.  
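For example (a sketch): with restrict, the compiler may assume the two pointers don't alias, which is exactly the information the Fortran optimizer got for free:

```c
/* Without restrict, the compiler must assume dst and src may
   overlap and reload src[i] after every store; with it, the loop
   is free to be vectorized. */
void scale(float *restrict dst, const float *restrict src,
           int n, float k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```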

  -spc



Re: smalltalk and lisp (was: strangest systems I've sent email from)

2016-04-29 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> 
> I don't want to bolt 
> on anything else, just let me define the same function twice with two 
> different parameter lists and I'll be one happy dude.

  The problem with that is the function name mangling (in object files)
needs to be standardized or you'll end up with the C++ mess we have now.

  And yes, that would be nice to have in C.
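Since C11 there is a limited, mangling-free approximation via _Generic, which dispatches on argument type at compile time; a sketch (the function and macro names here are mine):

```c
/* Two monomorphic implementations... */
double cube_d(double x) { return x * x * x; }
long   cube_l(long x)   { return x * x * x; }

/* ...selected at compile time by argument type.  No name mangling:
   the object file still contains plain cube_d and cube_l. */
#define cube(x) _Generic((x), double: cube_d, default: cube_l)(x)
```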

> Same here. I'll do a tutorial or read a book for anything even if I dont' 
> do much with it. Hehe, this might make you laugh, but my last two were 
> AREXX (using AROS) and FORTRAN. I'm pissed at myself for not learning 
> AREXX back when the Amiga was kickin'. FORTRAN was a mind trip. I felt it 
> had some kind of relation to Pascal (just certain things). It made me want 
> to go out and do some X-Ray crystallography just so I could write me some 
> applicable FORTRAN code, hehe.

  FORTRAN always reminded me of a slightly better BASIC.  Then again, that
was FORTRAN-77 I was using (first programming language taught in college).

  -spc



Re: strangest systems I've sent email from

2016-04-29 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 27 April 2016 at 22:13, Sean Conner <s...@conman.org> wrote:
> 
> Do you really think it's growing? I'd like very much to believe that.
> I see little sign of it. I do hope you're right.

  I read Hacker News and some of the more programmer related parts of
Reddit, and yes, there are some vocal people there that would like to see C
outlawed.  I, personally, don't agree with that.  I would however, like to
see C programmers know assembly language before using C (I think that would
help a lot, especially with pointer usage).

> >   As for CAOS, I haven't heard of it (and yes, I did the Amiga thing in the
> > early 90s).  What was unique about it?  And as much as I loved the Amiga,
> > the GUI API (at least 1.x version) was very tied to the hardware and the OS
> > was very much uniprocessor in design.
> 
> There's not a lot about it out there, but there's some.
> 
> http://amigaworld.net/modules/newbb/viewtopic.php?topic_id=35526=32&14

  I read that and it doesn't really seem that CAOS would have been much
better than what actually came out.  Okay, the potentially better resource
tracking would be nice, but that's about it really.  I was expecting
something like Synthesis OS:

http://valerieaurora.org/synthesis/SynthesisOS/

(which *is* mind blowing and I wish the code was available).

> >   CORBA was dead by the mid-90s and had nothing (that I know of) to do with
> > Linux.  And the lumbering GUI apps, RPC, etc that you are complaining about
> > is the userland stuff---nothing to do with the Linux kernel (okay, perhaps
> > I'm nitpicking here).
> 
> GNOME 1 was heavily based on CORBA. (I believe -- but am not sure --
> that later versions discarded much of it.) KDE reinvented that
> particular wheel.

  I blew that one---CORBA lived for about ten years longer than I expected.

> Compared to a decade before that? Better but more restrictive
> firmware. Slimmer cabling, faster buses. More cores.
> 
> Compared to a decade before that? Now the OSes are more solid and
> reliable. They can do video and 3D with less work now, even within a
> GUI. The ports are smaller, simpler, more robust. The internal
> interconnects have changed and the OSes now have proper 32-bit
> kernels.
> 
> Actual functionality hasn't vastly changed since the mid-90s, it's
> just got better.
> 
> The mid-90s PC merely managed to reproduce the GUIs, multitasking and
> sound/colour support of mid-80s proprietary systems, on the COTS PC
> platform.
> 
> I'd argue the last big change was the Mac and GUIs, just over 30 years ago.
> 
> And I reiterate:
> 
> >> That makes me despair.
> >>
> >> We have poor-quality tools, built on poorly-designed OSes, running on
> >> poorly-designed chips. Occasionally, fragments of older better ways,
> >> such as functional-programming tools, or Lisp-based development
> >> environments, are layered on top of them, but while they're useful in
> >> their way, they can't fix the real problems underneath.

  Wait ... what?  You first decried poorly-designed OSes, and then
went on to say they were better than before?  I'm confused.  Or are you
saying that we should have something *other* than what we do have?

> >> Occasionally someone comes along and points this out and shows a
> >> better way -- such as Curtis Yarvin's Urbit.
> >
> >   I'm still not convinced Curtis isn't trolling with Urbit.  Like Alan Kay,
> > he's not saying anything, expecting us to figure out what he means (and then
> > yell at us for failing to successfully read his mind).
> 
> Oh no, he has built something amazing, and better still, he has a plan
> and a justification for it. I fear it's just too /different/ for most
> people, just like functional programming is.

  I spent some hours on the Urbit site.  Between the obscure writing,
entirely new jargon and the "we're going to change the world" attitude, it
very much feels like the Xanadu Project.

  -spc 


Re: strangest systems I've sent email from

2016-04-27 Thread Sean Conner
It was thus said that the Great Noel Chiappa once stated:
> > From: Liam Proven
> 
> > There's the not-remotely-safe kinda-sorta C in a web browser,
> > Javascript.
> 
> Love the rant, which I mostly agree with (_especially_ that one). A couple of
> comments:
> 
> > So they still have C like holes and there are frequent patches and
> > updates to try to make them able to retain some water for a short time,
> > while the "cyber criminals" make hundreds of millions.
> 
> It's not clear to me that a 'better language' is going to get rid of that,
> because there will always be bugs (and the bigger the application, and the
> more it gets changed, the more there will be). 

"Every program has at least one bug and can be shortened by at least
one instruction---from which, by induction, one can deduce that
every program can be reduced to one instruction which doesn't work."

  Proof: https://en.wikipedia.org/wiki/IEFBR14

> The vibe I get from my
> knowledge of security is that it takes a secure OS, running on hardware that
> enforces security, to really fix the problem. (Google "Roger Schell".)

  We're getting there---on current Macs, not even root can delete all files
anymore.  You can still run unsigned programs, but you have to change a
setting for that.

  For now.

  -spc (Be careful what you wish for)


Re: strangest systems I've sent email from

2016-04-27 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 26 April 2016 at 16:41, Liam Proven  wrote:
> 
> When I was playing with home micros (mainly Sinclair and Amstrad; the
> American stuff was just too expensive for Brits in the early-to-mid
> 1980s), the culture was that Real Men programmed in assembler and the
> main battle was Z80 versus 6502, with a few weirdos saying that 6809
> was better than either. BASIC was the language for beginners, and a
> few weirdos maintained that Forth was better.

  The 6809 *is* better than either the Z80 or the 6502 (yes, I'm one of
*those* 8-)

> So now, it's Unix except for the single remaining mainstream
> proprietary system: Windows. Unix today means Linux, while the
> weirdoes use FreeBSD. Everything else seems to be more or less a
> rounding error.

  There are still VxWorks and QNX in embedded systems (I think both are now
flying through space on various probes) so it's not quite a monoculture. 
But yes, the desktop does have severe monocultures.

> C always was like carrying water in a sieve, so now, we have multiple
> C derivatives, trying to patch the holes. 

  Citation needed.  C derivatives?  The only one I'm aware of is C++ and
that's a far way from C nowadays (and no, using curly braces does not make
something a C derivative).

> C++ has grown up but it's
> like Ada now: so huge that nobody understands it all, but actually, a
> fairly usable tool.
> 
> There's the kinda-sorta FOSS "safe C++ in a VM", Java. The proprietary
> kinda-sorta "safe C++ in a VM", C#. There's the not-remotely-safe
> kinda-sorta C in a web browser, Javascript.

  They may be implemented in C, but they're all a far cry from C (unless
you mean they're imperative languages, then yes, they're "like" C in that
regard).

> And dozens of others, of course.

  Rust is now written in Rust.  Go is now written in Go.  Same with D. 
There are modern alternatives to C.  And if the community is anything to go
by, there is a slowly growing contingent of programmers that would outlaw the
use of C (punishable by death).

> So they still have C like holes and there are frequent patches and
> updates to try to make them able to retain some water for a short
> time, while the "cyber criminals" make hundreds of millions.

  I seriously think outlawing C will not fix the problems, but I think I'm
in the minority on that feeling.

> Anything else is "uncommercial" or "not viable for real world use".
> 
> Borland totally dropped the ball and lost a nice little earner in
> Delphi, but it continues as Free Pascal and so on.
> 
> Apple goes its own way, but has forgotten the truly innovative
> projects it had pre-NeXT, such as Dylan.
> 
> There were real projects that were actually used for real work, like
> Oberon the OS, written in Oberon the language. Real pioneering work in
> UIs, such as Jef Raskin's machines, the original Mac and Canon Cat --
> forgotten. People rhapsodise over the Amiga and forget that the
> planned OS, CAOS, to be as radical as the hardware, never made it out
> of the lab. Same, on a smaller scale, with the Acorn Archimedes.

  While the Canon Cat was innovative, perhaps it was too early.  We were
still in the era of general purpose computers and the idea of an
"information appliance" was still in its infancy and perhaps, not an idea
people were willing to deal with at the time.  Also, how easy was it to get
data *out* of the Canon Cat?  (now that I think about it---it came with a
disk drive, so in theory, possible)  You could word process, do some
calculations, simple programming ... but no Solitaire.

  As for CAOS, I haven't heard of it (and yes, I did the Amiga thing in the
early 90s).  What was unique about it?  And as much as I loved the Amiga,
the GUI API (at least 1.x version) was very tied to the hardware and the OS
was very much uniprocessor in design.  

> Despite that, of course, Lisp never went away. People still use it,
> but they keep their heads down and get on with it.
> 
> Much the same applies to Smalltalk. Still there, still in use, still
> making real money and doing real work, but forgotten all the same.
> 
> The Lisp Machines and Smalltalk boxes lost the workstation war. Unix
> won, and as history is written by the victors, now the alternatives
> are forgotten or dismissed as weird kooky toys of no serious merit.
> 
> The senior Apple people didn't understand the essence of what they saw
> at PARC: they only saw the chrome. 

  To be fair, *everybody* missed the essence of what they did at PARC; even
Alan Kay wasn't of much help ("I meant message passing, *not* objects!"
"Then why didn't you say so earlier?" "And Smalltalk was *never* meant to be
standardized!  It was meant to be replaced with something else every six
months!" "Uh, Alan, that's *not* how industry works.").

  The problem with the Lisp machines (from what I can see) is not that they
were bad, but they were too expensive and the cheap Unix workstations
overtook them in 

Re: Mac "Workgroup Server" (or "network server") hardware & AIX

2016-04-27 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> > Oh, and while I?m at it, both vi and emacs suck.  Give me TPU! :-)
> 
> Heh, I'm one of the few Unix guys that might be inclined to agree. I can 
> use both. However, I got used to Wordstar, then the IDE in Borland 
> products (which is similar to Wordstar). So, my favorite editor is "joe" 
> not Vi or Emacs, but TEHO. For me, using an editor is about familiarity, 
> even if something takes more keystrokes etc... I find it odd that people 
> actually argue about editors, but that's just a difference in value 
> system.

  They argue because each wants to spread "The Good News".

  And for the record, I too use joe, but only because I can't use PE any
more.

  -spc (See "The Life of Brian" to see where this leads ... )


Re: History [was Re: strangest systems I've sent email from]

2016-04-27 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> 
> > What I've often wondered is why there are so many IT people with the
> > same sort of laments and we haven't all collectively built our own
> > networks over wireless ?
> 
> The crazy patchwork quilt of regulations applying to amateur use of
> radio spectrum, is my guess.  Now, I would add the depressingly high
> chance of being invaded by the masses if/when we build something big
> enough to be useful.  We've already built one network only to have it
> invaded and overrun; why would we expect anything different to happen
> to the new one?

  Just look into the political machinations of what was known as FidoNet to
see how this could end up.

  -spc ("The fighting was intense because the stakes were so low ... ")



Re: History [was Re: strangest systems I've sent email from]

2016-04-27 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> 
> And I spend over $80/month for DSL to a provider that gives me a /29
> and a /60 from globally routed space.  (That everything is now CIDR
> blocks is another loss; I am not fond of the desupporting of
> noncontiguous subnet masks, even though I can understand it - I'm the
> only person I've ever heard of running that way other than for
> testing.)

  One benefit of contiguous subnet masks is that it makes routing faster. 
It's still a linear search, but it's based on the average length of the
netmask instead of the total number of entries.  Granted, I'm looking at
this from the perspective of a router and not a computer (which might have a
few routing entries vs. hundreds or thousands).  I've implemented such an
algorithm to deal with spam (quick address lookup).

  -spc


Re: The Ivory Tower saga was Re: strangest systems I've sent email

2016-04-27 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> 
> Take also, for example, Lisp.  I've used Lisp.  I even wrote a Lisp
> engine.  I love the language, even though I almost never use it.  But
> some of the mental patterns it has given me inform much of the code I
> write regardless of language.  I have seen it said that a language that
> does not change the way you think about programming is not worth
> knowing.  

  In my case, it wasn't knowing a language that changed how I think about
programming, but a book, _Thinking Forth_ by Leo Brodie.  Think Agile
programming but in the mid-80s instead of the early 2000s.  That it
described programming in Forth is immaterial---I've been able to apply the
lessons from that book to every language I've used since reading it.

  The other book that changed how I program was _Writing Solid Code_ by
Steve Maguire (from Microsoft Press of all places).  Think programming by
contract and single purpose functions [1].

  The only languages that have had any influence on how I program are the
functional languages (summary: no globals, pass in state and avoid mutations
as much as possible)---it's not one single language.

> People have remarked on the knowing of multiple languages.  I would say
> it matters terribly what the languages in question are.

  I think knowing the different families and how they work is more important
than any individual language.

> Knowing C,
> JavaScript, Pascal, and awk is, for example, very very different from
> knowing Lisp, Prolog, Objective-C, and TeX, even though each one hits
> the same putative "knows four languages" tickbox on a hiring form.  

  True, and if I were hiring, I might worry that the programmer who knew
Lisp, Prolog, Objective-C and TeX might not want to work here where we deal
with C, C++ and Javascript as they're "not as nice to work with."  But I
wouldn't discount them either.

> I
> draw a sharp distinction between "programming" and "programming in
> $LANGUAGE", for any value of $LANGUAGE.  Someone who knows the latter
> may be able to get a job writing code in $LANGUAGE...but someone who
> groks the former can pick up any language in relatively short order.
> The bracketed note in the second paragraph of content on
> http://www.catb.org/jargon/html/personality.html is exactly the sort of
> thing I'm talking about here; ESR taught himself TeX by the simple
> expedient of reading the TeXBook.

  You mean not everybody does this?

> One particular message seems to me to call for individual response:
> 
> > There are two major language families: declarative and imperative.
> > [...]  Declarative langauges are [...].  A few langauges under this
> > family:
> 
> > Prolog
> 
> Agreed, though AIUI the presence of cuts weakens this somewhat.
> (Caveat, I don't really grok Prolog; I may be misunderstanding.)

  Cuts are a form of optimization (if I understand what they do).  They just
cut down on the problem space.

> > make (and yes, make is a declarative language)
> 
> Only, I would say, in its simplest forms.  Every make implementation I
> know of (including both at least one BSD variant and at least one
> version of GNU make) stops looking very declarative when you have to do
> anything at all complex.

  I think those are poorly written Makefiles then.  I've dived into GNU Make
rather deeply over the past year, and it's amazing how concise you can make
a Makefile, even for a large project.  And the hardest part is really
getting the dependencies correct (but when you do, "make -j" really
flies on modern hardware).

> > SQL
> 
> I don't see SQL as declarative.  I see it as imperative, with the
> relational table as its primary data type.  That the programmer doesn't
> have to hand-hold the implementation in figuring out how to perform a
> SELECT doesn't make SQL declarative any more than the code not
> describing how to implement mapcar makes Lisp declarative.

  I don't see it as imperative.  

SELECT what FROM there WHERE ... ORDER BY ... 

  You just describe what you want and how you want it ordered, unless I'm
missing something else.  Granted, I don't work a lot with SQL (or with
stored procedures).

  -spc 

[1] The book used the realloc() function as the example---this function
can allocate and free memory, depending upon the parameters.  In
fact, every possible combination of parameters is valid but will do
radically different things.  It doesn't make testing harder (in this
case) but a logical misstep is harder to track down.

Better would be to have:

void *memory_alloc(size_t);
void *memory_grow(void *,size_t);
void *memory_shrink(void *,size_t);
void  memory_free(void *);

Where you are explicit about what you are doing (and for
convenience, I can see memory_grow(NULL,x) being valid (allocating
memory) to mimic the most common use of realloc()).



Re: Mac "Workgroup Server" (or "network server") hardware & AIX

2016-04-26 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Tue, 26 Apr 2016, Jerry Kemp wrote:
> 
> > Unix is a very religious subject.  My last intent is to get on someone's 
> > bad side here.
> 
> Tell me about it. I'm just lucky/glad to have avoided the VMS vs Unix wars 
> in the 90's. Too much drama. :-)

  What VMS vs Unix wars?  I don't recall any in the 90s.  Perhaps in the 80s
...

  -spc (Who used VMS briefly in college ... )



Re: The Ivory Tower saga was Re: strangest systems I've sent email from

2016-04-26 Thread Sean Conner
It was thus said that the Great Raymond Wiker once stated:
> 
> > On 26 Apr 2016, at 05:39 , Swift Griggs  wrote:
> > 
> > It's probably a bad idea to dismiss anyone's experience when you haven't 
> > "walked a mile in his moccasins.", including mine.  Though my attempt may 
> > have been inarticulate, I was talking about my own experience in academia 
> > and not trying to pick a fight with every LISP coder on the planet. If I 
> > was more clever, I'd have probably had the foresight to say simply say 
> > $academic_only_language instead of using the pit-bull attack trigger word: 
> > LISP.
> 
> If you think that Lisp is an "academic only language", you probably need
> to spend a little time with actually using it.

  AutoCAD uses LISP as a scripting language.  EMACS also has a LISP.  Paul
Graham (of Y-Combinator) also made his money on LISP (Viaweb, which later
became Yahoo Stores).  So yes, it's not an "academic only" language, but it
is different enough to make it difficult to find programmers.

  Because of the nature of LISP (LISP code is itself stored as a LISP
object), it becomes so easy to algorithmically manipulate LISP code that the
default method of implementation is to write a domain-specific language
(DSL) that solves the problem you want trivially; therefore, if you use LISP,
you end up with something that may look like LISP but isn't (if that makes
sense).

  -spc (Who has a love/hate relationship with LISP---I love to hate it (no,
no, I kid!  I love the concept, but hate the language [1]))

[1] I have the same issues with Forth [2]

[2] Which is a backwards LISP.


Re: strangest systems I've sent email from

2016-04-26 Thread Sean Conner
It was thus said that the Great Tapley, Mark once stated:
> On Apr 25, 2016, at 4:46 PM, Brian L. Stuart  wrote:
> 
> > ...To tell you the truth, I'm not very likely to hire anyone who isn't
> > conversant with at least half a dozen different languages.  ...
> 
> Although I agree with almost everything Brian said in his post, I’ll posit
> at least one exception here. There exist languages (the Mathematica
> programming language is the one I’m familiar with) which permit
> programming in multiple different styles - procedural, list-processing,
> object-oriented, etc.. I would be pretty willing to consider a candidate
> who understood the differences, and could select the appropriate
> programming style for the task at hand, even if they were familiar with
> only the one “language”. But, it would not be trivial to demonstrate that
> the candidate actually had that breadth of understanding; production of
> sample code in a half-dozen languages would be an easier metric to apply,
> so maybe my exception is not useful.

  There are two major language families: declarative and imperative.  I feel
like a programmer should be familiar with the two families.  Declarative
languages are where you describe *what* you want and leave it up to the
computer (technically, the implementation) to figure out how to obtain what
you want.  A few languages under this family:

Prolog
make (and yes, make is a declarative language)
SQL

Imperative is where you describe *how* to do something to the computer and
hope it gives you what you want.  Under this family there are three
sub-families:

  Procedural---your typical programming languages, C, Pascal, BASIC, COBOL,
Fortran, are all examples of procedural languages and we pretty much know
and understand these languages.

  Functional---still a type of imperative, but more centered around code
(functions, actually); side effects are very controlled (and globals right
out!---global variables are difficult to instantiate, if possible at all).
Examples are Haskell, F#, ML, Hope.

  Object oriented---again, another form of imperative, but centered around
data instead of functions (it's the flip-side of functional).  Examples of
this are Smalltalk, Java, C#.

  There are languages that can have multiple features, like C++ (procedural
and object-oriented), Lisp (declarative and imperative), Forth (declarative
and imperative), Python (procedural, functional, object-oriented).

  -spc (Who likes classical software ... )
  


Re: Accelerator boards - no future? Bad business?

2016-04-22 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> 
> Remember all the accelerator boards for the Mac, Amiga, and even PCs in the
> 90's ?  I've often wished that I could get something similar on my older SGI
> systems.  For example, fitting an R16k into an O2 or doing dynamic
> translation on a 4.0Ghz i7. 

  One major problem with adding a faster CPU to an SGI is the MIPS chip
itself---code compiled for one MIPS CPU (say, the R3000) won't run on
another MIPS CPU (say, the R4400) due to the differences in the pipeline. 
MIPS compilers were specific for a chip because such details were not hidden
in the CPU itself, but left to the compiler to deal with.

  -spc (Who had near exclusive use of an SGI in college in the early 90s)



Re: PDP-10 programming [was RE: Dumb Terminal games (was Re: Looking for a small fast VAX development machine)]

2016-03-01 Thread Sean Conner
It was thus said that the Great Rich Alderson once stated:
> 
> For most hobbyists, even $100 is too much.  I was simply astounded at the
> chutzpah of the seller--right there on the Amazon list--who was asking
> nearly $1500 for a copy.

  I think that comes from an unchecked computer algorithm, not simple greed.
I think what's happening here is someone (some Amazon third party) offered
the book for, say, $5.  Another third party scans Amazon for such books, and
offers it for say, $6, with the hope that you (the potential buyer) will
only see their offer for $6 and buy from them, at which point they
will buy it for $5 from the original seller, sell it to you for $6 and
pocket the $1 profit.  The problem comes when a third third-party seller
sees the offer for $6 and does the same thing as the second one, only now
they're offering it for $7, will pay $6 for it and pocket $1 profit.

  Keep repeating that process and you end up with books selling for $1500.

  -spc (Who knows?  If you keep searching, you might find the original
seller selling it for $5 ... )

