[email protected] (John McKown) writes:
> It adds some really nice features to legacy z/OS. But UNIX files can be
> confusing to z/OS programmers because they are more like "memory" than
> "disk" in that they are simply an ordered sequence of _bytes_, not
> _records_. The file system itself does not have _any_ interpretation of
> how those bytes are grouped into logical records. z/OS programmers, in
> general, are used to reading individual records when reading a data set.
> When you read a UNIX file, the program must tell the UNIX kernel ("access
> method") how many bytes you want to read. UNIX will return __NO MORE__ than
> that number of bytes. It could return fewer, if there are not that many
> left before "end of file" (and on some other rare occasions). Your code must
> then somehow know where the data you want (aka "this record") ends. Which
> means either the file is composed of fixed length records, hard coded in
> the program, or there is meta information encoded in the file data itself
> which indicates a length (similar to the LLBB field in a z/OS variable
> length data set) for each record. You can't rely on the system "handing"
> you a "logical record" when you do a UNIX read simply because there is no
> such thing, in a general sense.
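The two conventions described above -- fixed-length records hard coded in the program, or an explicit length encoded in the file data itself -- can be sketched roughly like this (a Python sketch for brevity; the function names and the big-endian two-byte prefix are my own illustration, loosely patterned on the LL of a z/OS variable-length record descriptor):

```python
import io
import struct

def read_fixed_records(stream, reclen):
    """Yield fixed-length records; the length is hard coded by the caller."""
    while True:
        rec = stream.read(reclen)
        if not rec:                      # zero bytes back means end of file
            return
        yield rec                        # a real read() may also return short

def read_prefixed_records(stream):
    """Yield records carrying a two-byte big-endian length prefix,
    loosely analogous to the LL of a z/OS variable-length record."""
    while True:
        prefix = stream.read(2)
        if not prefix:                   # end of file
            return
        (ll,) = struct.unpack(">H", prefix)
        yield stream.read(ll)

print(list(read_prefixed_records(io.BytesIO(b"\x00\x05hello\x00\x05world"))))
# prints [b'hello', b'world']
```

Either way the record boundary is the program's problem, not the kernel's -- exactly the point being made above.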

Traditional unix records are variable length, delimited by a trailing
null/zero byte. Lots of traditional unix API programming would read w/o
a length restriction, and a common attack is to provide an extremely
long record that overwrites the end of the buffer being used ...
resulting in failure and/or compromise. There has been lots of pressure
to get UNIX (C language) programmers to use APIs that specify a maximum
read length.
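The bounded-read discipline being pushed on C programmers amounts to this: enforce a maximum record length up front and fail safely on anything longer, instead of trusting the input to terminate. A sketch (names are my own invention):

```python
import io

def read_null_delimited(stream, max_len):
    """Read one null-terminated record, refusing records longer than
    max_len -- the bound that unbounded C-style reads lack."""
    buf = bytearray()
    while True:
        b = stream.read(1)
        if not b or b == b"\x00":        # end of file or record terminator
            return bytes(buf)
        buf.append(b[0])
        if len(buf) > max_len:           # over-long record: fail safely
            raise ValueError("record exceeds maximum length")

s = io.BytesIO(b"short\x00" + b"x" * 100 + b"\x00")
print(read_null_delimited(s, 16))        # prints b'short'
```

An over-long record raises an error here rather than silently running past the end of the buffer, which is the whole point of the length-specifying APIs.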

I had this discussion with Dennis Ritchie: traditional IBM variable
length records used two-byte explicit length prefixing ... while the
unix method saved a byte per record, with just a single (null/zero) byte
postfixing the record ... back in the days of "small memory" and the
pdp-7 ... birth of unix
http://www.linfo.org/pdp-7.html
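The byte count is easy to check: for the same set of records, two-byte prefixing costs exactly one more byte per record than single-byte postfixing. A toy illustration:

```python
records = [b"hello", b"to", b"unix"]

# two-byte big-endian length prefix per record (IBM-style)
prefixed = b"".join(len(r).to_bytes(2, "big") + r for r in records)

# single null/zero byte appended per record (unix-style)
postfixed = b"".join(r + b"\x00" for r in records)

print(len(prefixed) - len(postfixed))    # prints 3: one byte per record
```

A real saving back when every byte of memory mattered, at the cost of never knowing a record's length until you've scanned for its terminator.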

trivia: some of the CTSS people went to the 5th flr and did Multics and
others went to the ibm science center on the 4th floor and did virtual
machines, a bunch of online applications, the internal network (also
used for the university bitnet/earn) and invented GML. folklore is that
some of the AT&T people worked on multics and then returned home and did
unix
https://en.wikipedia.org/wiki/Multics#UNIX
dennis
https://en.wikipedia.org/wiki/Dennis_Ritchie

past posts mentioning ibm science center, 4th flr, 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech
CTSS
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System

I joke that when release 1 cp67 was delivered to the university in
Jan1968 it had some code that I completely rewrote ... but then I saw
very similar code in unix 20yrs later (possibly all having traced back
to CTSS?). recent posts mentioning early cp67
http://www.garlic.com/~lynn/2017b.html#26 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#27 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#28 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#29 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#30 Virtualization's Past Helps Explain Its Current Importance
http://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance

I've also frequently commented that the original IBM mainframe TCP/IP
product was implemented in vs/pascal and *never* had any of the length
related vulnerabilities and exploits that have been epidemic in C
language implementations. some past posts
http://www.garlic.com/~lynn/subintegrity.html#buffer

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
