Re: [DNG] Nasty Linux systemd security bug revealed
g4sra via Dng said on Sun, 25 Jul 2021 10:26:46 +

>And this is why ever since I entered the profession I have maintained
>that programmers should be vetted and certified in a similar manner to
>other professions such as doctors and lawyers, carrying a similar
>social status. Only those with the appropriate qualification and
>experience should be permitted to work in certain sectors.

I'm glad you said "certain sectors". I'm glad there are other sectors (office automation comes to mind) where a guy who gets proficient with the computer on his kitchen table can get paid work, and learn there. Otherwise, programming would be restricted to folks rich enough for their parents to send them to college to learn programming, and then a tertiary education to learn all the security, defense and engineering stuff, and, like doctors and lawyers, they wouldn't start making any real money until their late 20s. Programmers would be selected for family wealth, not for desire and aptitude.

As long as most sectors let anybody who can write code write code, programming remains a great source of upward mobility, and if a well-paid office automation programmer wants to become a medical equipment programmer, he or she can then take courses and get a cert while still earning a good living.

When I busted into programming, the most common traits of my fellow programmers were that they played musical instruments, rode bicycles, and had a real talent and desire for programming. Back then, when I interviewed new programmers with four-year degrees, they couldn't code their way out of a paper bag.

SteveT

Steve Litt
Spring 2021 featured book: Troubleshooting Techniques of the Successful Technologist
http://www.troubleshooters.com/techniques

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng
Re: [DNG] Nasty Linux systemd security bug revealed
On Sunday, July 25th, 2021 at 6:53 PM, Simon Hobson wrote:

> Andreas Messer a...@bastelmap.de wrote:
> > Once we had a crash in a simple limit switch device. As a result the
> > high-rack robot pushed a pallet at 15m height out of the rack.
> > Fortunately, it was just another robot which was destroyed (it stood
> > just below) - not a human being. Still a very expensive case for the
> > company. So I'm used to implementing a lot of checks :-). (Actually
> > we don't even use heap allocation after booting the firmware.)
>
> Back in the 90s I had an acquaintance who did a lot of consulting for
> sites with "management issues" running "big iron". He got a jolly to
> see a site that was run by systems from that vendor - the very early
> days of warehouse automation. High-bay warehousing, automated
> forklifts, with operators riding along to move boxes between the
> pallet on the forks and the pallet on the racks. It was a highly
> seasonal business, and in the run-up to Christmas they would be
> getting orders in all sorts of quantities; putting a small box on a
> pallet is highly inefficient, hence the need for manual handling to
> combine multiple shipments onto one pallet on the racks.
>
> Apparently the average stay before the operators quit from the stress
> was only 3 months!
>
> Then one day a forklift went wrong - fortunately with no operator on
> board. It accelerated in an uncontrolled manner until it crashed
> through the side of the building and fell over in the field next door
> - at which point, all the operators walked out!
>
> g4sra via Dng dng@lists.dyne.org wrote:
> > There is nothing stopping me from applying for systems programming
> > work in Nuclear Power Stations, Air Traffic Control, Industrial
> > Robotics, etc...
>
> Yes, but if you look a little deeper, in that sort of industry the
> programmers don't get to "just get on with it".

It doesn't read like you have been exposed to the same industry working practices I have, because that is exactly what happens once deadlines are not being met.

> The higher the risk, the higher the degree of risk management.

And the personnel performing the risk management are of no greater standing than the personnel writing the software.

> By the time the programmer gets to write code, there's been a lot of
> safety-based design - and when they've written the code, there's a lot
> of testing and assurance before it can go live.

No. There is 'testing and assurance' performed to the level agreed during the planning stage, planned by personnel of no greater standing...

> Of course, if you are Boeing and designing systems for aircraft - then
> it seems it's a different matter!
>
> Simon

Maybe things have changed in the last ten years without my knowledge - since I fulfilled the role of Security Auditor without any formal certification, reporting to the Board of an international telecommunications company - but I doubt it.

Put more simply: it does not matter how many spelling checks are put in place if the spelling checkers cannot spell. Or, as I prefer: monkeys checking the work of monkeys, designed by monkeys, is not going to guarantee quality; it is only going to guarantee the slinging of faeces.
Re: [DNG] Nasty Linux systemd security bug revealed
Andreas Messer wrote:

> Once we had a crash in a simple limit switch device. As a result the
> high-rack robot pushed a pallet at 15m height out of the rack.
> Fortunately, it was just another robot which was destroyed (it stood
> just below) - not a human being. Still a very expensive case for the
> company. So I'm used to implementing a lot of checks :-). (Actually we
> don't even use heap allocation after booting the firmware.)

Back in the 90s I had an acquaintance who did a lot of consulting for sites with "management issues" running "big iron". He got a jolly to see a site that was run by systems from that vendor - the very early days of warehouse automation. High-bay warehousing, automated forklifts, with operators riding along to move boxes between the pallet on the forks and the pallet on the racks. It was a highly seasonal business, and in the run-up to Christmas they would be getting orders in all sorts of quantities; putting a small box on a pallet is highly inefficient, hence the need for manual handling to combine multiple shipments onto one pallet on the racks.

Apparently the average stay before the operators quit from the stress was only 3 months!

Then one day a forklift went wrong - fortunately with no operator on board. It accelerated in an uncontrolled manner until it crashed through the side of the building and fell over in the field next door - at which point, all the operators walked out!

g4sra via Dng wrote:

> There is nothing stopping *me* from applying for systems programming
> work in Nuclear Power Stations, Air Traffic Control, Industrial
> Robotics, etc...

Yes, but if you look a little deeper, in that sort of industry the programmers don't get to "just get on with it". The higher the risk, the higher the degree of risk management. By the time the programmer gets to write code, there's been a lot of safety-based design - and when they've written the code, there's a lot of testing and assurance before it can go live.

Of course, if you are Boeing and designing systems for aircraft - then it seems it's a different matter!

Simon
Re: [DNG] Nasty Linux systemd security bug revealed
<--snip-->
> Why I'm so critical about letting it crash: I typically deal with
> stack sizes of no more than around 2-8kB in automation devices and
> have to be careful with that. You can't simply let a newspaper
> printing machine's motor control crash; 1000's of newspaper pages
> would be trashed. Once we had a crash in a simple limit switch device.
> As a result the high-rack robot pushed a pallet at 15m height out of
> the rack. Fortunately, it was just another robot which was destroyed
> (it stood just below) - not a human being. Still a very expensive case
> for the company.
<--snip-->

And this is why ever since I entered the profession I have maintained that programmers should be vetted and certified in a similar manner to other professions such as doctors and lawyers, carrying a similar social status. Only those with the appropriate qualification and experience should be permitted to work in certain sectors.

There is nothing stopping *me* from applying for systems programming work in Nuclear Power Stations, Air Traffic Control, Industrial Robotics, etc...

I have personal knowledge of a College classmate who went on to write Air Traffic Control software; personally I would not trust him to write an App for my phone (but he would be the first person I would call if organising a party).

People are going to continue to die until this change happens.
Re: [DNG] Nasty Linux systemd security bug revealed
On Sat, Jul 24, 2021 at 05:35:10PM +0200, Didier Kryn wrote:

> However the manual of alloca() states that "There is no error
> indication if the stack frame cannot be extended." If the same would
> happen with automatic variables, I would expect a crash; otherwise it
> would be a serious flaw in the compiler. According to you there is
> such a flaw?

I have just made a short experiment. On Linux, the typical stack size is 8MB (ulimit -s). So using the following test program:

stack_overflow.c:

    #include <stdio.h>
    #include <stdlib.h>
    #include <alloca.h>

    void test(int size, int use_it)
    {
    #if 1
        volatile int var[size/sizeof(int)];
    #else
        volatile int* var = alloca(size);
    #endif
        if (use_it)
            var[0] = 0;
    }

    int main(int argc, char* argv[])
    {
        long size   = argc > 1 ? atoi(argv[1]) : 1024;
        long use_it = argc > 2 ? atoi(argv[2]) : 0;

        printf("Will be allocating %ldkb stackframe %s access\n",
               size, use_it ? "with" : "without");
        test(size*1024, use_it);
    }

I get the following results:

    ...:/tmp$ gcc -o stack_overflow stack_overflow.c
    ...:/tmp$ ./stack_overflow 16000 0
    Will be allocating 16000kb stackframe without access
    ...:/tmp$ ./stack_overflow 16000 1
    Will be allocating 16000kb stackframe with access
    Speicherzugriffsfehler
    ...:/tmp$ gcc -o stack_overflow stack_overflow.c -fstack-check
    ...:/tmp$ ./stack_overflow 16000 0
    Will be allocating 16000kb stackframe without access
    Speicherzugriffsfehler
    ...:/tmp$ ./stack_overflow 8000 0
    Will be allocating 8000kb stackframe without access

("Speicherzugriffsfehler" is German for "segmentation fault".)

So if -fstack-check is not used, the program will crash only if memory is actually accessed out of bounds of the stack memory. Indeed, accessing the last instead of the first array element does not crash at all. With -fstack-check it will already crash on allocation of the array (as expected). When using the alloca() variant, I get identical results.

Why I'm so critical about letting it crash: I typically deal with stack sizes of no more than around 2-8kB in automation devices and have to be careful with that. You can't simply let a newspaper printing machine's motor control crash; 1000's of newspaper pages would be trashed. Once we had a crash in a simple limit switch device. As a result the high-rack robot pushed a pallet at 15m height out of the rack. Fortunately, it was just another robot which was destroyed (it stood just below) - not a human being. Still a very expensive case for the company. So I'm used to implementing a lot of checks :-). (Actually we don't even use heap allocation after booting the firmware.)

cheers,
Andreas

--
gnuPG keyid: 8C2BAF51
fingerprint: 28EE 8438 E688 D992 3661 C753 90B3 BAAA 8C2B AF51