Re: rage against the machine

2019-04-08 Thread Morlock Elloi
The text below is a grim read, and shows what happens when imponderable 
complexity commits very ponderable mass murder. Unlike the red-herringing 
here on nettime, it was a very physical fight between humans and 
machines, which the humans lost due to the limits of muscle power.


Next time someone tries to abstract the murder into some bullshit 
complexity, re-read the passage below:


'''Manual trimming means using sheer muscle power; insiders even call 
this work "acrobatic". That is probably why the affected airline, 
Ethiopian Airlines, noted in its communication this week that it is very 
unfortunate that the pilots of the crashed aircraft, "despite their hard 
work", could not prevent it from continuing on its deadly course.'''


They probably died swearing. They knew that it was the machine killing 
them. I wonder if they died screaming at the machine or at its designers?



Machine translated from 
https://www.heise.de/tp/features/Absturz-ET-302-Minuten-des-Schreckens-4365546.html


The preliminary investigation report from Addis Ababa exonerates the 
pilots after the second crash of a Boeing 737 Max and provides dramatic 
insights; at the same time, the question of the relationship between human 
and computer comes to a head


On a dry field a few miles outside the Ethiopian town of Bischoftu, 
flight ET 302 ended on Sunday morning, March 10, at 8:45 am in a 
fireball. For 149 passengers and eight crew members from 33 nations, it 
meant death. The soil was torn open several meters deep, the earth 
burned black. A short flight of terrifying moments: the Boeing 737 MAX 8 
had been in the air barely seven minutes after leaving Bole Airport in 
the Ethiopian capital, Addis Ababa.


The pilots flew according to standard procedures

Now, just over a month later, a preliminary report is available. The 
report was eagerly awaited, as the circumstances surrounding Flight ET 
302 continue to raise pressing questions. Suspicion soon fell on the 
aircraft's control software, as in a crash a few months earlier, in 
which a Lion Air aircraft of the same type (also a Boeing 737 Max) 
went down in Indonesia, killing 189 people.


Whether the model family's controversial control system was ultimately 
the sole decisive factor in the calamities (or in what combination of 
factors) remains to be clarified in detail. However, the preliminary 
investigation report from Addis Ababa, which the Ethiopian Minister of 
Transport Dagmawit Moges presented to the public at the end of the week, 
provides some information that could help clarify matters.


According to the report, the Ethiopian Airlines crew acted correctly in 
the minutes before the crash and complied with all the requirements set 
by the manufacturer Boeing for the critical flight phase. The crew's 
qualifications had occasionally been called into question. Initially, 
the pilots acted professionally, following the "Stabilizer Trim Runaway" 
checklist and switching off the electric trim. Nevertheless, they could 
not bring the aircraft under control. The flight path remained unstable. 
According to the investigators in Addis Ababa, there is no doubt that 
the aircraft's nose was pushed down automatically several times without 
any corresponding pilot input.


Deadly fiasco

In vain the crew of the 737 fought to stabilize the situation. Three 
times the captain called to his co-pilot "Pull up!", but it did not 
help. The data from ET 302's flight recorder clearly show that the 
pilots repeatedly switched the automatic control on and off. They 
followed the instructions. The on-board computer stubbornly took over 
and held its course, pulling the aircraft's nose down again and 
again. Enormous forces must have built up, possibly in connection 
with unusual acceleration - forces that had a dramatic effect on the 
flight path and worsened the situation.



Is that why the evident attempts to trim with the handwheel failed? Such 
manual interventions are part of the pilots' repertoire, and they 
usually involve considerable effort. Manual trimming means using 
sheer muscle power; insiders even call this work "acrobatic". That is 
probably why the affected airline, Ethiopian Airlines, noted in its 
communication this week that it is very unfortunate that the pilots of 
the crashed aircraft, "despite their hard work", could not prevent it 
from continuing on its deadly course.


Even after these considerations, however, one question remains 
unanswered: why did the crew not consistently continue trimming manually? 
If the electric trim motors are disconnected from the power supply, the 
autopilot can in fact no longer provide any inputs. Did the pilots of ET 
302 hit on the right approach but - under enormous stress - change their 
course of action too hastily, thereby enabling further trim inputs from 
a faulty system?


More software problems - rival Airbus rethinks its safety architecture

While the crew is relieved to a certain extent by the 

Re: rage against the machine

2019-04-04 Thread Morlock Elloi
In case you missed it, all the narratives about pilots not being 
trained/informed were red herrings. They look like an attempt to 
deflect blame onto humans (either those who were supposed to inform 
pilots or those who were supposed to establish proper training 
procedures - at Boeing and at individual airlines), in order to save the 
sanctity of the AI deity. It all turned out to be bs.


Pilots did everything Boeing deemed required to regain control:


As the jet began nose diving, the pilots "repeatedly" performed all emergency procedures 
provided by Boeing, the manufacturer, but they "were not able to control the aircraft,"

...

According to the sources, the pilots did not try to electronically pull the 
nose of the plane up before following Boeing's emergency procedures of 
disengaging power to the horizontal stabilizer on the rear of the aircraft. One 
source told ABC News they manually attempted to bring the nose of the plane 
back up by using the trim wheel. Soon after, the pilots restored power to the 
horizontal stabilizer.

With power restored, the MCAS was re-engaged, the sources said, and the pilots 
were unable to regain control before the crash.
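The fatal sequence the sources describe - cutout, slow manual trim, power restored, automatic trim re-engaging - can be sketched as a toy state machine. This is purely an illustration of that interaction under assumed behavior (a stuck sensor, power gating the automatic trim); none of the names or numbers below come from Boeing:

```python
# Illustrative sketch (NOT Boeing's code): a toy state machine showing why
# restoring stabilizer power re-arms a faulty MCAS-like control loop.
# All names and numbers are hypothetical.

class ToyTrimSystem:
    def __init__(self):
        self.stab_power = True   # electric trim motors powered
        self.faulty_aoa = True   # stuck angle-of-attack reading
        self.trim = 0.0          # negative = nose down

    def mcas_step(self):
        # The automatic system can only act while the trim motors have power.
        if self.stab_power and self.faulty_aoa:
            self.trim -= 1.0     # automatic nose-down trim input

    def cutout(self):
        self.stab_power = False  # "Stab Trim Cutout" checklist step

    def restore_power(self):
        self.stab_power = True   # re-energizing also re-arms the automation

    def manual_trim(self, amount):
        self.trim += amount      # handwheel: works regardless of power


jet = ToyTrimSystem()
jet.mcas_step()                  # faulty sensor drives the nose down
jet.cutout()
jet.mcas_step()                  # no effect: motors unpowered
jet.manual_trim(0.5)             # slow, high-effort handwheel input
jet.restore_power()
jet.mcas_step()                  # automation immediately trims nose down again
print(jet.trim)                  # -1.5
```

The point of the sketch is only the gating: cutting power disables the faulty loop, but restoring it hands control straight back to the same loop.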


So it's much worse than it looked. Boeing designed automated 
machine controls which they (Boeing) did not understand. The system had 
modes of operation unknown to its own designers. This is inevitable - 
I'll repeat: INEVITABLE - when you have more than a few thousand lines 
of code. There are no testing procedures that can save you from this. 
There are only testing procedures to cover your ass with legal 
compliance requirements.


This placement of complex automated control loops everywhere is starting 
to look like putting small nuclear reactors in homes, cars, schools, 
offices, etc., because it's cheaper than distributing gas and 
electricity, and hoping that sh*t won't happen. No, I'm wrong: they know 
that the sh*t will happen, but the calculation is that, even after the 
insurance pay-out and the ephemeral PR damage, it is still cheaper. The 
two recent disasters were allowed as calculated risks.




#  distributed via : no commercial use without permission
#is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:


Re: rage against the machine

2019-03-31 Thread Morlock Elloi

Didn't have to wait for long:

"Fake lane attack: ... Misleading the autopilot vehicle to the wrong 
direction ... we pasted some small stickers as interference patches on 
the ground in an intersection ... This kind of attack is simple to 
deploy, and the materials are easy to obtain. "


https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf

Note that this intervention was not on the vehicle, but on the environment.


On 3/17/19, 12:48, Morlock Elloi wrote:

Note that autonomous vehicles are becoming affordable assassination
instruments. It would cost a fortune a decade ago to create robotic




Re: rage against the machine

2019-03-30 Thread Heiko Recktenwald
Dear all, the days of "direct democracy" v. "the few" are over. The days
of a "movement" without "momentum". Why should the "vote" of "the
people" matter? All talk as the Don would say.


Am 30/03/19 um 22:20 schrieb John Young:
> Was it not long known all communication is pornographic? Otherwise
> nobody would be aroused to communicate while awaiting to fuck, or be
> fucked by, a warm body, bidding time just masturbating alone from tyke
> to tyrant.


Godard's "She does not talk" comes to mind.


>
> As seen here, the lonely habitual digitalization, quickies or
> laborious. Googling oneself, maillisting, SMing, browsing, preaching,
> teaching, groveling, adoring and citing the momentarily greatest aloners.
>

It can come in many ways.


> Some senile frustrators argue its now, always had been, all kiddie
> porn practiced like Trump and Pope Francis, et al. Machines aid, abet
> and entice, hands on.


The church is a part of life. Reality is stronger than fiction.


H.






Re: rage against the machine

2019-03-30 Thread Morlock Elloi
Everything is already in place to properly regulate this space, except 
naming things for what they are.


All industrial/commercial activities that impact humans below cognitive 
levels (ie. directly biologically or by exploiting basic innate drives) 
are in general heavily regulated:


- sex (rent, lease or purchase)
- food supply
- air
- religious/cult indoctrination
- health/medicine

It is a simple and recognized fact that prevalent machine interfaces 
provide artificial socializing stimuli and exploit the ability to create 
biological addiction in order to make money (either by advertising or by 
selling their hapless subjects to influence by the highest bidder). 
Exploiting the socializing drive, which in humans is rather prominent, is 
no different from exploiting the sex drive, and it needs to be regulated 
as such.


Current discourse on this - basically a porn industry - is ridiculous: it's 
as if regulating classic porn (can children view it or not, can you 
put it on billboards) consisted of selecting and vetting the actresses and 
actors who may perform in a sanctioned way, while banning the others 
(dick too big/small, too fat/too skinny, minority status, etc.). It 
doesn't matter: as long as a dick entering a pussy is shown, it's porn. The 
same goes for social media: as long as the presence of strangers and 
interaction with them ('friends' in social parlance) is simulated, it's porn.


The main issue here is that, while powers that be cannot easily exploit 
classical porn (I'm sure they tried - references, anyone?), they can 
exploit 'strangers care about you' reflex titillation, so they *like* 
this type of porn and won't do anything about it.






On 3/30/19, 09:05, tbyfield wrote:

'innovation' is enabling around the world. The US has ironclad
regulations and norms about experimenting on human subjects, which are
enforced with brutal mania in academia. But, somehow, we haven't been
able to apply them to pretty much everything Silicon Valley does.
Instead, we get ridiculous kerfuffles about Facebook experimenting with
making people 'sad' or the tangle around Cambridge Analytica, which is
both real and borderline-paranoiac. The blurriness of that boundary is a
by-product of, if you like, the micro-epistemological divide that
separates general journalism and investigative journalism. We're
terrible at 'scaling' this kind of analysis up or down: either from
abstract to concrete, by saying 'WTF is going on?!' and channeling it
into broad, effective limitations on what infotech companies can do, or
from concrete to abstract, by catching companies


#  distributed via : no commercial use without permission
#is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:


Re: rage against the machine

2019-03-30 Thread tbyfield

On 29 Mar 2019, at 6:32, William Waites wrote:

It seems to me it is a question of where you draw the system boundary. 
If the system is an aeroplane that is flying, then the recording device 
is not part of the control loop and it is not a cybernetic tool in that 
context. If the system is the one that adjusts and optimises designs 
according to successes and failures, then the recording device 
definitely is part of the control loop and it is a cybernetic tool.


This is where 'classical' cybernetics drew the line. Second-order 
cybernetics, which came later (late '60s through the mid/late '70s) and 
focused on the 'observing systems' rather than the 'observed systems,' 
drew that line differently. I don't have a solid enough grasp of the 
work of people like Heinz von Foerster and Gordon Pask to say with any 
certainty how and where they'd draw it, but in general their approach 
was more discursive and less, in a word, macho. So they'd be less 
interested in the isolated 'technical' performance of a single plane or 
a single flight and more interested in how people made sense of those 
technical systems — for example, through the larger regulatory 
framework that Scot spoke of: regular reviews of the data generated and 
recorded during every flight. Scot's note was a helpful reminder that 
the purpose of a black box is just to duplicate and store a subset of 
flight data in case every other source of info is destroyed. In that 
view, it doesn't matter so much that the black box itself is input-only, 
because it's just one component in a tangle of dynamic systems — 
involving humans and machines — that 'optimize' the flight at every 
level, from immediate micro-decisions by the flight staff to 
after-the-fact macro-analyses by the corporation, its vendors, 
regulatory agencies, etc. The only reason we hear about (or even know 
of) black boxes is that they fit neatly into larger cultural narratives 
that rely on 'events' — i.e., crashes. But we don't hear about these 
countless other devices and procedures when things go right. Instead, 
they just 'work' and disappear into the mysterious 'system.'


(As a side note, this brings us back to why Felix's overview of how 
different regimes contend with complexity is so stunning — 
'complexity' is a product of specific forms of human activity, not some 
mysterious natural force:


https://nettime.org/Lists-Archives/nettime-l-1903/msg00127.html

His message reminds me very much of what I love about Marshall Sahlins's 
work and, in a different way, of Moishe Postone's _Time, Labor, and 
Social Domination_: basically, 'complexity' is immanent.)


But back to my point: Morlock's original take on the Boeing 737 
crashes and how this thread unfolded, or at least one of the areas 
where Brian and I seemed to part ways. It's easy to lose sight of the 
larger dimensions and implications of these human–machine assemblages. 
For example, media coverage very quickly focuses on detailed specialist 
subjects, like the design of the MCAS system that failed on the 737s; 
then, a few days later, it suddenly leaps to a totally different order 
and focuses on regulatory issues, like the US FAA's growing reliance on 
self-regulation by vendors. We've grown accustomed to this kind of
non-narrative trajectory from countless fiascos; and we know what 
sometimes comes next, 'investigative journalism,' that is, journalism 
that delves into the gruesome technical details and argues, in essence, 
that these technical details are metonyms for larger problems, and that 
we can use them as opportunities for social action and reform of 'the 
system.'


This journalistic template has a history. I know the US case; other 
nettimers will know how it played out in other regions and countries. A 
good, if slightly arbitrary, place to start is Rachel Carson's 1962 book 
_Silent Spring_ and Ralph Nader's 1965 book _Unsafe at Any Speed_. (It isn't an 
accident that Carson's work opened up onto environmental concerns, 
whereas Nader's was more geeky in its focus on technology and policy: 
there's an intense gender bias in how journalism identifies 'issues.') 
From there, the bulk of ~investigative journalism shifted to militarism 
(i.e., Vietnam: defoliants like Agent Orange, illegal bombing 
campaigns), political corruption (Watergate), intelligence (mid-'70s: 
the Pike and Church committees looking into CIA abuses etc), nuclear 
power (Three Mile Island), military procurement, policy and finance 
(HUD, the S&L crisis, etc), etc, etc. I've left out lots of stuff, but that's 
the basic drift, although these decades also saw an immense rise of 
investigative focus on environmental issues. Whether the results of all 
that environmental work have been satisfying I'll leave as an exercise 
for the reader.


That template goes a long way toward explaining how and why journalistic 
coverage of 'tech' is so ineffectual now. It can't get its arms around 
*the* two big issues: the extent to which the US has 

Re: rage against the machine

2019-03-29 Thread Balazs Bodo
Indeed,

And it is super interesting to see how impossible it is to understand the
complex economic, political, ideological processes that led to the breakdown
of the narrowly defined cybernetic system without first opening the black
box, which, on its face, does nothing but record data during the
flight. In that sense, the traces of a breakdown of a narrow technical
control process open a window onto the breakdown of the social, economic,
political, cultural factors that shaped the development and conditions of
those technical processes, until that point invisibly, or at least
indecipherably.
Or, as is the case with this discussion, we don't even need the actual data;
the simple presence of that black box is enough for us to do some of the
forensic work. :)

Cheers,
b.-


> -Original Message-
> From: nettime-l-boun...@mail.kein.org [mailto:nettime-l-
> boun...@mail.kein.org] On Behalf Of William Waites
> Sent: Friday, March 29, 2019 11:32 AM
> To: Felix Stalder 
> Cc: nettime-l@mail.kein.org
> Subject: Re:  rage against the machine
> 
> > To my limited understanding, the black box in the airplane is not a
> > device to limit the complexity of the pilots' interaction with, or
> > understanding of, the plane by reducing a complex process to a simple
> > in/out relationship.
> >
> > No, it's a flight recorder. During the flight, it has no output at
> > all, and in no way influences the processes of flying. It simply
> > records certain signals, including voice signals.
> >
> > The plane would fly in exactly the same way if it wasn't there.
> >
> > In this sense, it's a forensic, not a cybernetic tool. And as that,
> > its function is actually exactly the opposite. It's a tool designed
> > not to hide but to reveal complexity, to make transparent what happens
> > inside the cockpit.
> 
> It seems to me it is a question of where you draw the system boundary. If
> the system is an aeroplane that is flying, then the recording device is
> not part of the control loop and it is not a cybernetic tool in that
> context. If the system is the one that adjusts and optimises designs
> according to successes and failures, then the recording device definitely
> is part of the control loop and it is a cybernetic tool.
> 
> Best wishes,
> -w



Re: rage against the machine

2019-03-29 Thread William Waites
> To my limited understanding, the black box in the airplane is not a
> device to limit the complexity of the pilots' interaction with, or
> understanding of, the plane by reducing a complex process to a simple
> in/out relationship.
> 
> No, it's a flight recorder. During the flight, it has no output at all,
> and in no way influences the processes of flying. It simply records
> certain signals, including voice signals.
> 
> The plane would fly in exactly the same way if it wasn't there.
> 
> In this sense, it's a forensic, not a cybernetic tool. And as that, its
> function is actually exactly the opposite. It's a tool designed not to
> hide but to reveal complexity, to make transparent what happens inside
> the cockpit.

It seems to me it is a question of where you draw the system boundary. If the
system is an aeroplane that is flying, then the recording device is not part of
the control loop and it is not a cybernetic tool in that context. If the system
is the one that adjusts and optimises designs according to successes and
failures, then the recording device definitely is part of the control loop and
it is a cybernetic tool.
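
The boundary-drawing point above can be made concrete with a toy sketch (entirely hypothetical, not any real avionics code): within one flight the recorder is write-only and influences nothing; across many flights the same recordings close a feedback loop on the design.

```python
# Toy illustration of the system-boundary argument: the recorder has no
# output during a flight, but its data closes a loop across flights.
# All dynamics and numbers here are made up for illustration.
import random

def fly(gain, recorder):
    """Simulate one 'flight'. The recorder only stores data (no output)."""
    error = 0.0
    for _ in range(100):
        disturbance = random.uniform(-1, 1)
        error += disturbance - gain * error  # crude control dynamics
        recorder.append(error)               # write-only: no influence on flight
    return abs(error)

def design_loop(flights=20):
    """Across flights the recorded data DOES feed back: a cybernetic loop."""
    gain = 0.1
    for _ in range(flights):
        log = []
        final_error = fly(gain, log)
        if final_error > 1.0:                # adjust the design from the logs
            gain = min(gain + 0.05, 0.9)
    return gain
```

Draw the boundary around `fly` and the recorder is forensic; draw it around `design_loop` and it is part of the control loop.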

Best wishes,
-w



Re: rage against the machine

2019-03-29 Thread Ana Viseu
Hello all,

I have been reading this thread with much interest even if, I am afraid, I 
may have missed many of the nuances. 

I would agree with Felix when he says the airplane's black box is a cybernetic 
device only to the extent that it translates all actions into information. 
Felix calls it a forensic device; that seems right, at least until a plane 
malfunctions or crashes. 

I would like to suggest that the "real" cybernetic device here is the software 
that Boeing designed to keep the plane in the air in the face of its poor 
aerodynamics. That software is a black box in the sense that it both takes all 
sorts of inputs and controls/manipulates outputs, and also a black box in the 
sense that its workings (and existence) were kept hidden from the pilots. 

This may have been said already, but what I find fascinating about this is that 
it posits the triumph of bits over atoms (to use MIT's 90s information-age 
lexicon). We have been walking in this direction for a long time - bodies and 
objects being upgraded with information-processing abilities - but now software 
is brought along to counter the laws of physics, which dictate that shifting 
the location of an airplane's engines changes its aerodynamics. 

It may well be that this is old news and I have simply not been paying enough 
attention but to me this seems both fascinating and scary. 

I would love to hear your thoughts. 

Ana



---/-/\-\---
Ana Viseu 
Associate Professor | Universidade Europeia 
Centro Interuniversitário de História das Ciências e Tecnologia | Univ. de 
Lisboa 
www.anaviseu.org

> On Mar 29, 2019, at 9:19 AM, Felix Stalder  wrote:
> 
> Thanks Ted, Scott and Morlock, this history is obviously more complex
> and nuanced than the point I was trying to make, which was not
> historical at all, but rather logical.
> 
> To my limited understanding, the black box in the airplane is not a
> device to limit the complexity of the pilots' interaction with, or
> understanding of, the plane by reducing a complex process to a simple
> in/out relationship.
> 
> No, it's a flight recorder. During the flight, it has no output at all,
> and in no way influences the processes of flying. It simply records
> certain signals, including voice signals.
> 
> The plane would fly in exactly the same way if it wasn't there.
> 
> In this sense, it's a forensic, not a cybernetic tool. And as that, its
> function is actually exactly the opposite. It's a tool designed not to
> hide but to reveal complexity, to make transparent what happens inside
> the cockpit.
> 
> Just because there are procedural limits as to who is allowed to open
> the box, and therefore it's "black" to some people (the pilots, the
> airline technicians like Scott) doesn't make it a black box in the
> cybernetic sense. Otherwise, every safe would be a cybernetic black box.
> 
> And because it's not a cybernetic object, it's not a good object to talk
> about the problems of complexity and if/how we run an ever larger number
> of processes at or beyond the outer limits of complexity that we can
> manage. That was the only point I was trying to make.
> 
> But because Scott, who has detailed, first-hand knowledge of these
> things, agrees with the cybernetic reading of the plane's black box, I might
> be mistaken here.
> 
> Felix
> 
> 
>> On 29.03.19 02:46, tbyfield wrote:
>> Not so fast, Felix, and not so clear.
>> 
>> The origins of the phrase black box are "obscure," but the cybernetics
>> crowd started using it from the mid-'50s. Their usage almost certainly
>> drew on electronics research, where it had been used on a few occasions
>> by a handful of people. However, that usage paled in comparison to the
>> phrase's use among military aviators from early/mid in WW2 — *but not
>> for flight recorders*. Instead, it described miscellaneous
>> electro-mechanical devices (navigation, radar, etc) whose inner workings
>> ranged from complicated to secret. Like many military-industrial objects
>> of the time, they were often painted in wrinkle-finish black paint.
>> Hence the name.
>> 
>> Designing advanced aviation devices in ways that would require minimal
>> maintenance and calibration in the field was a huge priority — because
>> it often made more sense to ship entire units than exotic spare parts,
>> because the devices' tolerances were too fine to repair in field
>> settings, because training and fielding specialized personnel was
>> difficult, because the military didn't want to circulate print
>> documentation, etc, etc. So those physically black boxes became, in some
>> ways, "philosophical" or even practical black boxes.
>> 
>> Several of the key early cyberneticians contributed to the development
>> of those devices at institutions like Bell Labs and the Institute for
>> Advanced Studies, and there's no doubt they would have heard the phrase.
>> In that context, the emphasis would have been on *a system that behaves
>> reliably even though ~users don't 

Re: rage against the machine

2019-03-29 Thread Felix Stalder
Thanks Ted, Scott and Morlock, this history is obviously more complex
and nuanced than the point I was trying to make, which was not
historical at all, but rather logical.

To my limited understanding, the black box in the airplane is not a
device to limit the complexity of the pilots' interaction with, or
understanding of, the plane by reducing a complex process to a simple
in/out relationship.

No, it's a flight recorder. During the flight, it has no output at all,
and in no way influences the processes of flying. It simply records
certain signals, including voice signals.

The plane would fly in exactly the same way if it wasn't there.

In this sense, it's a forensic, not a cybernetic tool. And as that, its
function is actually exactly the opposite. It's a tool designed not to
hide but to reveal complexity, to make transparent what happens inside
the cockpit.

Just because there are procedural limits as to who is allowed to open
the box, and therefore it's "black" to some people (the pilots, the
airline technicians like Scott), that doesn't make it a black box in the
cybernetic sense. Otherwise, every safe would be a cybernetic black box.

And because it's not a cybernetic object, it's not a good object for talking
about the problems of complexity and if/how we run an ever larger number
of processes at or beyond the outer limits of the complexity that we can
manage. That was the only point I was trying to make.

But because Scott, who has detailed, first-hand knowledge of these
things, agrees with the cybernetic reading of the plane's black box, I might
be mistaken here.

Felix


On 29.03.19 02:46, tbyfield wrote:
> Not so fast, Felix, and not so clear.
> 
> The origins of the phrase black box are "obscure," but the cybernetics
> crowd started using it from the mid-'50s. Their usage almost certainly
> drew on electronics research, where it had been used on a few occasions
> by a handful of people. However, that usage paled in comparison to the
> phrase's use among military aviators from early/mid in WW2 — *but not
> for flight recorders*. Instead, it described miscellaneous
> electro-mechanical devices (navigation, radar, etc) whose inner workings
> ranged from complicated to secret. Like many military-industrial objects
> of the time, they were often painted in wrinkle-finish black paint.
> Hence the name.
> 
> Designing advanced aviation devices in ways that would require minimal
> maintenance and calibration in the field was a huge priority — because
> it often made more sense to ship entire units than exotic spare parts,
> because the devices' tolerances were too fine to repair in field
> settings, because training and fielding specialized personnel was
> difficult, because the military didn't want to circulate print
> documentation, etc, etc. So those physically black boxes became, in some
> ways, "philosophical" or even practical black boxes.
> 
> Several of the key early cyberneticians contributed to the development
> of those devices at institutions like Bell Labs and the Institute for
> Advanced Studies, and there's no doubt they would have heard the phrase.
> In that context, the emphasis would have been on *a system that behaves
> reliably even though ~users don't understand it*, more than on *an
> object that's painted black*. Wartime US–UK cooperation in aviation was
> intense (the US used something like 80 air bases in the UK under the
> Lend–Lease program), so there was no shortage of avenues for slang to
> spread back and forth across the ocean. It's on that basis, a decade
> later, that Ross Ashby devoted a chapter of his 1956 book _Cybernetics_
> to "The Black Box." Given who he'd been working with, it's hard to
> imagine — impossible, I think — that he was unaware of this wider usage.
> (An exaggerated analogy: try calling someone looking at shop shelves a
> "browser.")
> 
> Some early aviators had come up with ad-hoc ways to record a few flight
> variables, but the first flight recorders as we now understand them
> started to appear around the mid-'50s. There's lots of folksy
> speculation about how these things — which weren't black and weren't
> box-shaped — came to be called "black boxes." I think the simplest
> explanation is best, even if it's the messiest: a combination of
> aviation slang and the fact that they were the state of the art when it
> came to sealed units. In the same way that the word "dark" clearly
> exerts some wide appeal (dark fiber, dark pools, dark web, dark money,
> etc), I think the idea of a "black box" held mystique — of a kind that
> would tend to blur sharp distinctions like the one you drew.
> 
> Anyway. Planes are interesting, but what led me down the path of
> studying these histories is what you point out — that the fusion of the
> pilot with the plane is an ur-moment in human–machine hybridization.
> 
> Cheers,
> Ted
> 
> 
> On 28 Mar 2019, at 14:48, Felix Stalder wrote:
> 
>> Let me just pick up on one point, because it kind of annoyed me since
>> the start the 

Re: rage against the machine

2019-03-28 Thread Scot Mcphee
On 29 March 2019 at 09:07:31, Morlock Elloi (morlockel...@gmail.com) wrote:

Seemingly totally unrelated:

1. Flight recorders are brightly colored these days. The term "black
box" originates in WW2, mostly because the first flight recorders, like
all other "secret" electronics, were housed in metal boxes painted matte
black.

See
https://web.archive.org/web/20171019110346/http://siiri.tampere.fi/displayObject.do?uri=http%3A%2F%2Fwww.profium.com%2Farchive%2FArchivedObject-8077CE76-2B43-6FAA-D11C-77AAFD6C72E8


2. Schematic "black box", meaning circuitry/algorithm that is opaque and
not supposed to be seen or understood, and only I/O is available also
originates in WW2.

It's hilarious that #1 and #2 overlap again these days, as most airlines
have no capability of examining their own flight recorders, so we are
back to black boxes: Ethiopian Airlines refused to hand over their black
boxes to the Americans, as they don't trust them.


Hello, I want to finally weigh in on this thread, after following it closely
these past weeks. I haven’t had the time to respond, as I’ve been busy with
my PhD dissertation, which I submit in just a few days. There have been so
many good posts on this thread, particularly one from Felix timestamped
2019-03-28 0900 UTC where he talks about the rise of complexity in systems.

First, I want to qualify the position from which I speak: I am a technical
expert who works for an airline. I’m not an aircraft engineer, but I design
and build what are called ‘operational systems,’ which provide the inputs
and outputs for the airline to safely fly its equipment. I’m not,
therefore, a strictly disinterested party in such discussions, but I do
possess a certain amount of inside knowledge of airlines, of the often
cryptic systems we use, and of the language we use to describe them.

First, I want to talk about the ‘black’ box. Yes, it's bright orange. No,
that’s not why it’s spoken of as a ‘black’ box. In fact, the cybernetic
explanation is the correct one.

The airline is not supposed to be able to read the contents of these boxes.
That’s why they are ‘black’. The cockpit voice recordings and control data
flow into them and are used for _safety_ investigations, by a _safety_
authority, like CASA, or the FAA.

Why isn’t the airline supposed to read the content of the boxes? Because
the voice recorder is recording the pilots’ use of procedures, which may be
designed by the airlines. Yes, every airline has slightly different
procedures; as long as these procedures are within the parameters set by
the aircraft designers and the responsible aviation regulator, all is
_supposed_ to be OK. But just as _technical_ failures, say the pitot tube
icing on AF447, can cause technical systems to malfunction (disconnecting
the autopilot … although that’s not a ‘malfunction’ as such), ‘human
factors’ (such as the cockpit design of Airbuses) and company procedures
and culture can also cause or compound accidents. In the case of AF447,
there was a toxic culture among the pilots. They did not co-operate
smoothly. The senior pilot barged into the cockpit and basically bombarded
the two pilots at the controls with theories and questions. __Nobody__
thought to ask the most junior pilot onboard, who was sitting in the
right-hand seat at the time, if he had made any control inputs. He had … in
fact he had performed the __worst__ possible input when faced with a stall
warning: he pulled the nose up. Anyway, the investigation found a cultural
and training issue, which AF had to fix.

However, just because we can’t read the boxes doesn’t mean we don’t
monitor our aircraft. There are plenty of signals which the aircraft
transmits during flight, and a ton more which are downloaded from it when
it gets into the hands of the engineers. We have an entire department that
analyses this information, offline. If they find issues, the pilots are
asked to explain to the chief pilot (for the type) what happened: “why did
you exceed the type’s maximum recommended descent rate for over 30 seconds
last week flying VHxxx into YBBN on SMOKA 8 ALPHA to RWY 01LR between DAYBO
and GORRI?”.

Anyway, I don’t have any great theoretical insights at the moment, but a lot
of this discussion is interesting, and if someone has airline-related
questions I’m happy to answer them.

Scot
#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nett...@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:

Re: rage against the machine

2019-03-28 Thread tbyfield

Not so fast, Felix, and not so clear.

The origins of the phrase black box are "obscure," but the cybernetics 
crowd started using it from the mid-'50s. Their usage almost certainly 
drew on electronics research, where it had been used on a few occasions 
by a handful of people. However, that usage paled in comparison to the 
phrase's use among military aviators from early/mid WW2 — *but not 
for flight recorders*. Instead, it described miscellaneous 
electro-mechanical devices (navigation, radar, etc) whose inner workings 
ranged from complicated to secret. Like many military-industrial objects 
of the time, they were often painted in wrinkle-finish black paint. 
Hence the name.


Designing advanced aviation devices in ways that would require minimal 
maintenance and calibration in the field was a huge priority — because 
it often made more sense to ship entire units than exotic spare parts, 
because the devices' tolerances were too fine to repair in field 
settings, because training and fielding specialized personnel was 
difficult, because the military didn't want to circulate print 
documentation, etc, etc. So those physically black boxes became, in some 
ways, "philosophical" or even practical black boxes.


Several of the key early cyberneticians contributed to the development 
of those devices at institutions like Bell Labs and the Institute for 
Advanced Study, and there's no doubt they would have heard the phrase. 
In that context, the emphasis would have been on *a system that behaves 
reliably even though ~users don't understand it*, more than on *an 
object that's painted black*. Wartime US–UK cooperation in aviation 
was intense (the US used something like 80 air bases in the UK under the 
Lend–Lease program), so there was no shortage of avenues for slang to 
spread back and forth across the ocean. It's on that basis, a decade 
later, that Ross Ashby devoted a chapter of his 1956 book _An 
Introduction to Cybernetics_ to "The Black Box." Given who he'd been 
working with, it's hard to imagine — impossible, I think — that he 
was unaware of this wider usage. (An exaggerated analogy: try calling 
someone looking at shop shelves a "browser.")


Some early aviators had come up with ad-hoc ways to record a few flight 
variables, but the first flight recorders as we now understand them 
started to appear around the mid-'50s. There's lots of folksy 
speculation about how these things — which weren't black and weren't 
box-shaped — came to be called "black boxes." I think the simplest 
explanation is best, even if it's the messiest: a combination of 
aviation slang and the fact that they were the state of the art when it 
came to sealed units. In the same way that the word "dark" clearly 
exerts some wide appeal (dark fiber, dark pools, dark web, dark money, 
etc), I think the idea of a "black box" held mystique — of a kind that 
would tend to blur sharp distinctions like the one you drew.


Anyway. Planes are interesting, but what led me down the path of 
studying these histories is what you point out — that the fusion of 
the pilot with the plane is an ur-moment in human–machine 
hybridization.


Cheers,
Ted


On 28 Mar 2019, at 14:48, Felix Stalder wrote:


Let me just pick up on one point, because it kind of annoyed me since
the start of the thread: the significance of the existence of a "black
box" in the airplane and in cybernetic diagrams. To the best of my
understanding, these two "black boxes" stand in no relation to each
other.

In the case of the black box in cybernetics, it stands for a
(complicated) process of which we only (need to) know the relationship
between input and output, not its inner workings. In the case of the
airplane, it's just a very stable case protecting various recorders
of human and machine signals generated in the cockpit. There is no
output at all, at least not during the flight.

There is, of course, a deep connection between aviation and cybernetics;
after all, the fusion of the pilot with the plane was the earliest
example of a system that could only be understood as consisting of humans
and machines reacting to each other in a symbiotic way. So, while the main
thrust of the thread, and the rest of your post, are interesting, this
little detail irks me.



Re: rage against the machine

2019-03-28 Thread Morlock Elloi

Seemingly totally unrelated:

1. Flight recorders are brightly colored these days. The term "black 
box" originates in WW2, mostly because the first flight recorders, like 
all other "secret" electronics, were housed in metal boxes painted matte 
black.


See 
https://web.archive.org/web/20171019110346/http://siiri.tampere.fi/displayObject.do?uri=http%3A%2F%2Fwww.profium.com%2Farchive%2FArchivedObject-8077CE76-2B43-6FAA-D11C-77AAFD6C72E8


2. Schematic "black box", meaning circuitry/algorithm that is opaque and 
not supposed to be seen or understood, and only I/O is available also 
originates in WW2.


It's hilarious that #1 and #2 overlap again these days, as most airlines 
have no capability of examining their own flight recorders, so we are 
back to black boxes: Ethiopian Airlines refused to hand over their black 
boxes to the Americans, as they don't trust them. Instead they gave them 
to the French (this really existing trust hierarchy is getting 
interesting). For Ethiopian Airlines, the brightly colored flight 
recorders are true black boxes: the input was something their aircraft 
generated, and the output is something that the French will generate. 
Ethiopian Airlines doesn't get to understand the rest.


So term "black box" is fully justified and interchangeable with "flight 
recorder", in true schematic sense.




On 3/28/19, 11:48, Felix Stalder wrote:

Let me just pick up on one point, because it kind of annoyed me since
the start the thread, the significance of the the existence of a "black
box" in the airplane and in cybernetic diagrams. To the best of my
understanding, these two "black boxes" stand in no relation to each other.




Re: rage against the machine

2019-03-28 Thread Felix Stalder


On 28.03.19 16:38, tbyfield wrote:

> Yes and no. In theory, plane crashes happen out in the open compared to
> other algorithmic catastrophes. In practice, the subsequent
> investigations have a very 'public secret' quality: vast expanses are
> cordoned off to be combed for every fragment, however minuscule; the
> wreckage is meticulously reconstructed in immense closed spaces;
> forensic regimes — which tests are applied to what objects and why — are
> very opaque. And, last but not least, is the holy grail of every plane
> crash, the flight recorder. Its pop name is itself a testament to the
> point I made earlier in this thread about how deeply cybernetics
> and aviation are intertwingled: the proverbial 'black box' of
> cybernetics became the actual *black box* of aviation. But, if anything,
> its logic was inverted: in cybernetics the phrase meant a system that
> can be understood only through its externally observable behavior, but
> in aviation it's the system that observes and records the plane's behavior.
> 
> Black boxes are needed because, unlike car crashes, when planes crash
> it's best to assume that the operators won't survive. That's where the
> 'complexity' of your sweeping history comes in.

Let me just pick up on one point, because it kind of annoyed me since
the start of the thread: the significance of the existence of a "black
box" in the airplane and in cybernetic diagrams. To the best of my
understanding, these two "black boxes" stand in no relation to each other.

In the case of the black box in cybernetics, it stands for a
(complicated) process of which we only (need to) know the relationship
between input and output, not its inner workings. In the case of the
airplane, it's just a very stable case protecting various recorders
of human and machine signals generated in the cockpit. There is no
output at all, at least not during the flight.

There is, of course, a deep connection between aviation and cybernetics;
after all, the fusion of the pilot with the plane was the earliest
example of a system that could only be understood as consisting of humans
and machines reacting to each other in a symbiotic way. So, while the main
thrust of the thread, and the rest of your post, are interesting, this
little detail irks me.

Felix



-- 
  http://felix.openflows.com
 |Open PGP   http://pgp.mit.edu/pks/lookup?search=0x0bbb5b950c9ff2ac




Re: rage against the machine

2019-03-28 Thread Morlock Elloi

The basic issue is complexity crossing a threshold that humans cannot cross.

So far, at least in the last few thousand years or so, mental abilities 
have been one of the key factors for individual 'success' (the other, 
likely more important one, being class and heritage). We appreciate smart 
people as much as the rich ones. In the last few decades there has been an 
acceleration of cognitive stratification, as a class of more-than-average 
smart technicians was needed to tend to more and more complex computing 
machines.


Today it's obvious that the system of controlling capital/power, the 
Praetorian Guard of technicians, and the computing machinery itself are 
practically ruling the world. Yet we can still see smart people behind it, 
so at least we can map the new order into familiar space, where smart 
people, evil, good, or just sociopathic, are at the helm.


What happens when machine complexity surpasses human cognition? 
Skynet aside, the most dire effect is that the smart Praetorian Guard 
becomes redundant. Instead, it will be the inbred, semi-retarded ruling 
oligarchy, some 30-40,000 families on the planet, that will have this 
miracle machinery in its lap. Like a chimp that got hold of an unlimited 
supply of AK-47s. It's not going to be sophisticated, it's going to be 
ugly. The final disintermediation. Heritage becomes the sole factor. 
Smartness is out.


These things, societies optimizing themselves out of existence, have 
happened before in different forms. The Easter Island rulers liked those 
statues so much that they depleted all natural resources in building them, 
destroying the whole society in the process.


The chimp logic is dead simple. It's a total waste of time theorizing 
and philosophizing about it. All that just buys them more time.





On 3/28/19, 08:38, tbyfield wrote:

That's why criticisms of the 'complexity' of increasingly automated and
autonomized vehicles are a dead end, or at least limited to two
dimensions. I liked it very much when you wrote that "the rise in
complexity in itself is not a bad thing"; and, similarly, giving up
autonomy is not in itself a *bad* thing. The question is where and how
we draw the lines around autonomy. The fact that some cars will fly
doesn't mean that every 'personal mobility device' — say, bicycles —
needs to be micromanaged by a faceless computational state run amok. Yet
that kind of massive,




Re: rage against the machine

2019-03-28 Thread tbyfield
Felix, this is really interesting. Normally, I'm allergic to sweeping 
models of history that involve anything like 'technology,' because they 
mostly serve as playgrounds for wannabe TED talkers. Yours is different 
— maybe, in part, because you don't assume that capitalism and 
computation play well together.


You wrote:


In the case of the plane crash, it's just out in the open, like in the
case of a massive stock market crash. The difference is only that in the
case of the plane crash, the investigation is also out in the open,
while in virtually all other cases, the investigation remains closed to
outsiders, to the degree that there is even one.


Yes and no. In theory, plane crashes happen out in the open compared to 
other algorithmic catastrophes. In practice, the subsequent 
investigations have a very 'public secret' quality: vast expanses are 
cordoned off to be combed for every fragment, however minuscule; the 
wreckage is meticulously reconstructed in immense closed spaces; 
forensic regimes — which tests are applied to what objects and why — 
are very opaque. And, last but not least, is the holy grail of every 
plane crash, the flight recorder. Its pop name is itself a testament to 
the point I made earlier in this thread about how deeply 
cybernetics and aviation are intertwingled: the proverbial 'black box' 
of cybernetics became the actual *black box* of aviation. But, if 
anything, its logic was inverted: in cybernetics the phrase meant a 
system that can be understood only through its externally observable 
behavior, but in aviation it's the system that observes and records the 
plane's behavior.


Black boxes are needed because, unlike car crashes, when planes crash 
it's best to assume that the operators won't survive. That's where the 
'complexity' of your sweeping history comes in.


Goofy dreams of flying cars have been a staple of pop futurism since the 
1950s at least, but until very recently those dreams were built on the 
basis of automobiles — and carried a lot of cultural freight 
associated with them, as if it were merely a matter of adding a third 
dimension to their mobility. But that dimension coincides with the axis 
of gravity: what goes up must come down. The idea that flying cars would 
be sold, owned, and maintained on an individual basis, like cars, 
implies that we'd soon start seeing the aerial equivalent of beat-up 
pickups flying around — another staple of sci-fi since the mid-'70s. 
It won't happen quite like that.


When cars crash the risks are mainly limited to the operators; when 
planes crash the risks are much more widespread — tons of debris 
scattered randomly and *literally* out of the blue. That kind of danger 
to the public would justify banning them, but of course that won't 
happen. Instead, the risks will be managed in ways you describe well: 
"massive computation to cope, not just to handle 'hardware flaws', but 
to make the world inhabitable, or to keep it inhabitable, for 
civilization to continue."


The various forms of 'autonomization' of driving we're seeing now are 
the beginnings of that transformation. It'll require fundamentally 
different relations between operators and vehicles in order to achieve 
what really matters: new relations between *vehicles*. So, for example, 
we're seeing semi-cooperative procedures and standards (like Zipcar), 
mass choreographic coordination (like Waymo), the precaritizing 
dissolution of 'ownership' (like Uber); GPS-based wayfinding and 
remora-sensors (everywhere) and the growing specter of remote control by 
police forces. None of these things is entirely new, but their 
computational integration is. And as these threads converge, we can 
begin to see a more likely future in which few if any own, maintain, or 
even 'drive' a car — we just summon one, tell it our destination, and 
'the cloud' does the rest. Not because this realizes some naive dream of 
a 'frictionless' future, but because the risks of *real* friction 
anywhere above 50 meters off the ground are too serious. And, in 
exchange for giving up the autonomy associated with automobiles, we'll 
get to live.


That's why criticisms of the 'complexity' of increasingly automated and 
autonomized vehicles are a dead end, or at least limited to two 
dimensions. I liked it very much when you wrote that "the rise in 
complexity in itself is not a bad thing"; and, similarly, giving up 
autonomy is not in itself a *bad* thing. The question is where and how 
we draw the lines around autonomy. The fact that some cars will fly 
doesn't mean that every 'personal mobility device' — say, bicycles — 
needs to be micromanaged by a faceless computational state run amok. Yet 
that kind of massive, hysterical, categorical drive to control has been 
a central feature of the rising computational state for decades.


The system that has worked for the last 40 years is reaching the limits
of the complexity it can handle. The externalities 

Re: rage against the machine

2019-03-28 Thread Felix Stalder
On 24.03.19 14:28, Florian Cramer wrote:

> Travis suggests that the 737 MAX fiasco resulted from a combination of
> market economics/cost-optimization management and software 
> being used to correct hardware design flaws.

Yes. I think there are several factors involved that are in fact
indicative of a wider techno-political condition, it's just that in the
case of a plane crash, the effects and the investigation are
particularly public. I'm pretty sure, the set of problems involved here
is very common, it includes:

a) Lax oversight. A massive shift from government (aka public interest,
at least in theory) regulation to industry self-regulation. This is an
effect, as well as a cause of, the power shift between the two poles.

b) Consequently, the narrow interests of corporate actors (cost-cutting,
profitability, short-termism, etc.) dominate the equation of incentives.

c) Massive rise in complexity that increases the importance of
computation as a way of managing the resulting dynamics.

These three elements are, basically, the ingredients of the system of
neo-liberal globalization. And the most important aspect of this story
was that it has worked, not least by being able to marginalize all
other systems over the last 40 years.

It's important to remember where this system came from, and here I keep
thinking of Castells' brilliant analysis of the crisis of
"industrialism", aka Fordism, in the late 1960s and early 1970s, which
occurred both in capitalist and socialist countries. The reason,
according to Castells, was that Fordism as a mode of organization had
reached the internal limit of complexity it could handle. It was no
longer able to cope with the increasingly diverse and more rapidly
changing demands and pressures that characterized the socio-technical
(and ecological) environment which it was supposed to organize. The
Soviets went into 20 years of stagnation (basically, the era of Brezhnev)
while the capitalist countries went through a contentious process of
organizational change, which reached its tipping point when Thatcher and
Reagan came to power.

By returning basically to a Hayekian notion of the market as a superior
information processor, and by reducing the complexity of lived experience
to the price signal, a system emerged that was able to cope with, and
rapidly expand, the rising complexity of society.

The Soviet Union fell apart when belatedly trying to embark on similar
reforms (similar not in the sense of neoliberal, but in the sense of
acknowledging the rising complexity of society, by, for example,
recognizing the existence of civil society).

So, fast forward to today. I think we are witnessing a similar moment of
stasis, this time in the West. The term gridlock describes both the UK
and the US experience, and the attempts to break through it are
seriously damaging the system. In a way, it's like punishing workers in
a dysfunctional system for not meeting their targets. It deepens the
dysfunction. The EU is probably not far behind.

The system that has worked for the last 40 years is reaching the limits
of the complexity it can handle. The externalities produced by the
radical reduction of lived experience to price signals are coming
back to haunt the system, which has no way of coping with them. The
attempts to put a price on "bads", say in the case of cap-and-trade, have
failed. And similarly, the attempts to save the climate are failing.

The rise in complexity in itself is not a bad thing. Historically, as
far as I can tell, a reduction in complexity has always meant a
breakdown of civilization. That may well be in the offing, but that's
not a good thing.

But that also means that we need massive computation to cope, not
just to handle "hardware flaws", but to make the world inhabitable, or
to keep it inhabitable, for civilization to continue.

The problem, I think, is the combination of two massively reductionist
systems, that of price signals and that of digital simulation, that
cannot account for the complexity of the effects they produce and hence
generate all kinds of "black swans".

In the case of the plane crash, it's just out in the open, like in the
case of a massive stock market crash. The difference is only that in the
case of the plane crash, the investigation is also out in the open,
while in virtually all other cases, the investigation remains closed to
outsiders, to the degree that there is even one.


Felix



-- 
  http://felix.openflows.com
 |Open PGP   http://pgp.mit.edu/pks/lookup?search=0x0bbb5b950c9ff2ac






Re: rage against the machine

2019-03-26 Thread Brian Holmes
On Tue, Mar 26, 2019 at 12:19 PM tbyfield  wrote:

> I have some vague idea that over the
> last several decades a few people spent some time thinking about the
> history and philosophy of punishment. In nettimish contexts (as opposed
> to ground-level activism in judicial and penal fields), most of that
> thought was applied to critiques of punishment — certainly more than
> to imagining new and maybe even constructive ways to address the scale
> and complexity of corporate criminality.
>

To me this is totally interesting. In Chicago I am surrounded with
abolitionists whose work I cannot but respect: they have closed down a
supermax prison, attained reparations for people imprisoned on the basis of
confessions extracted under torture, they're creating an official monument
on the torture issue and a module of public curriculum to be used in the
city schools, plus many other things. Real achievements with national
influence, far more important than anything I have ever been directly
involved in. Yet I am convinced that abolitionism can only achieve sectoral
victories, not structural ones, because a mass urbanized capitalist society
with deep alienation needs the rule of law and the corresponding
instruments of behavioral control. It does not need the prisons of poverty
and the enforcement of "the new Jim Crow" that we have now; but these
things cannot be gotten rid of without proposing new structural devices.
"Community" cannot simply replace "society," to quote a dead European
theorist (Tonnies). Redesigning the prisons for the people who actually
commit the significant crimes is an idea with a future.

It took me a while to understand what's at stake in this thread, because of
what I continue to think of as the exceptionally poor language involved
(I'm with Andreas on that one). When the point moves from an unfocused
critique of computation to a demand to change specific aspects of
government, then I am all ears. I do not have any interest in being the
philosopher of an abstractly righteous anger - it's a common enough
position, but there you are speaking to someone else. No problem. There is
plenty of real anger to go around. The point - your point, as I understand
it - is to learn, pragmatically not just theoretically, how that anger can
be focused into politics with consequences. That's begun by continuing the
dialogue and dialing down the insults, which is the trend I am detecting
and trying to participate in.

onward, Brian

Re: rage against the machine

2019-03-26 Thread tbyfield

On 26 Mar 2019, at 1:15, Brian Holmes wrote:

Despite Ted's excursions into aviation history, which at least he finds 
brilliant, plus the general manly readiness to cut the throat of, one 
doesn't know exactly whom, we have gotten no further in terms of 
understanding the situation than what you have transcribed. It's still 
about a badly designed plane "fixed" by a cybernetic patch, in a quest 
for profit that knows no bounds.


"excursions into __ history, which at least he finds brilliant" 
seems like a pretty fair description of your own often-lengthy 
contributions to the list, Brian. Many of them are interesting, and I 
admire your commitment to untangling and reweaving disparate postwar 
intellectual and institutional threads. We need much more of that, in 
the US especially. But like your work with Bureau d'études, the value 
of those broad sweeps breaks down where the rubber meets the road, or in 
the case of aviation somewhere between aerodynamics and instrumentation. 
Which is why, I guess, after "looking for something 
analogous in discursive spaces like this one," you've abruptly 
rediscovered the importance of the specific problem. But, as I described 
in some detail, cybernetic thought has been baked into aviation for 
decades. If anything, it's the other way around: aviation-related 
research was baked into cybernetics even as that new 'science' was being 
invented: some of the key players were working on applied problems 
brought into focus by aviation, ranging from fire control, to various 
applications of radio, to mission planning. So it's not a patch, it's 
the entire premise of how that industry works on almost every level. 
Fixing this one problem in a more sane, humane way would do nothing to 
resolve the countless areas where dilemmas with similar origins or 
structures *will* arise. And much as aviation served as one of the main 
vectors for distributing that style of thought globally, reforming some 
of the field's dominant design philosophies could do so as well.


As for slitting the throats of "one doesn't know exactly whom," no. I 
wrote:


And that begs an important question that leftoids aren't prepared to 
answer because, in a nutshell, they're allergic to power: what *would* 
be appropriate punishments for people who, under color of corporate 
activity, engage in indiscriminate abuses of public trust.


Andreas argues that long prison terms are good enough. That answer is 
easy, because it has the patina of history. But it ignores the disparate 
real conditions in prisons, which — in many contexts leftists would 
agree — are far from good enough. I have some vague idea that over the 
last several decades a few people spent some time thinking about the 
history and philosophy of punishment. In nettimish contexts (as opposed 
to ground-level activism in judicial and penal fields), most of that 
thought was applied to critiques of punishment — certainly more than 
to imagining new and maybe even constructive ways to address the scale 
and complexity of corporate criminality. Caricaturing people who'd say 
we should think about that as "manly" throat-slitters is dishonest and 
dumb. But my larger point was that systematic reform will require 
dismantling corporate mechanisms for obfuscating and escaping individual 
culpability. So, when you say...



How to express a necessary anger in a way that increases both people's
willingness and actual capacity to act politically? It's the 
unanswered

question I take away from the thread.


...I'd suggest that you start with the anger that's in front of you 
rather than invoking some romantic notion of diffuse righteous anger so 
you can position yourself as its philosopher. I offered at least one 
concrete answer: the labor activism of flight attendant unions, which I 
think has forced the Trump administration to do an about-face twice. 
There are other avenues, but finding them may require some excursions 
into 'aviation history.' If you aren't willing to do that, or at least 
to respect it, you won't get anywhere beyond unanswered questions.


Cheers,
Ted


Re: rage against the machine

2019-03-25 Thread Brian Holmes
On Sun, Mar 24, 2019, 8:29 AM Florian Cramer  wrote:

>
> Travis suggests that the 737 MAX fiasco resulted from a combination of
> market economics/cost-optimization management and software
> being used to correct hardware design flaws.
>

Thank you, Florian, for explaining with the Travis text exactly what I
described in my first post on this thread, which also included the article
on MACS from Air Currents.

Despite Ted's excursions into aviation history, which at least he finds
brilliant,  plus the general manly readiness to cut the throat of, one
doesn't know exactly whom, we have gotten no further in terms of
understanding the situation than what you have transcribed. It's still
about a badly designed plane "fixed" by a cybernetic patch, in a quest for
profit that knows no bounds.

Everyone would like to see a change in the disdain which corporate
organizations show for the lives of the public, and this is a typical case,
comparable to Facebook selling your personal data or BP polluting entire
oceans. I agree any change will require mass anger and a threat of
violence. However, Trump's people have that in spades, they're angry about
many of the same things, and I don't think their rage is going to create
any change whatsoever. Instead the sputtering verbal violence covers a
concerted march forward in exactly the same direction.

How to express a necessary anger in a way that increases both people's
willingness and actual capacity to act politically? It's the unanswered
question I take away from the thread.

Brian


Re: rage against the machine

2019-03-24 Thread mp


On 24/03/2019 06:29, Joseph Rabie wrote:
> I find it noteworthy that the call to burn at the stake, opposed by
> Andreas, has been endorsed by male members of this list. Let us remember
> that it was invented as retribution against women accused of witchcraft,
> that is to say practises considered subversive by the male theocratic
> power structure of the time.
> 
> Burning at the stake was crude technology. The 20th century variant,
> crematoria and gas chambers, were far more efficient.
> 
> Language is never just language.

Indeed, and this is also rhetorical. Burning people wasn't 'invented' to
deal with witchy women in recent times; it dates back at least to the
earliest law code - the Code of Hammurabi - that focused on punishing
the perpetrator rather than sorting out the victim. Treason and
property-related issues were chief among the first written laws.

It seems also as if it was/is a pretty universal method of castigation
that is still widespread. In the Amazon, for instance, many communities
have a sign at the entrance reading something to the effect of: "Ladrón
cogido, Ladrón quemado!" ("Thief caught, thief burned!")

Also, Bruno might feel left out if stakes were attributed to women-only
punishment.

At any rate, humanity is at a crossroads that this debate signifies
well: For how long shall we play the docile, civilised subjects that
speak nicely and accept things, then go vote every now and then to feel
involved and part of 'change'?

It is curious how 'civilised' - which means deprived of your autonomous
subsistence and condemned to life as a wage slave bound in debt and
taxes - is used as something to aspire to, like it was a value system
worth preserving.

Stockholm Syndrome.

Appeasement.

.. and other forms of freedom lost.

Re: rage against the machine

2019-03-24 Thread Patrick Lichty
Agreed.
Peace and Hope from Abu Dhabi.

-Patrick Lichty
> On Mar 23, 2019, at 2:54 PM, Andreas Broeckmann  
> wrote:
> 
> friends, call me over-sensitive, but i think that nobody should be burned at 
> the stake for anything in any country; i say this also because this flippant 
> kind of rhetoric poisons the reasonable debate that is so urgently needed on 
> the matters at issue here. (to the contrary, i am glad that some civilised 
> countries find forms of punishment other than that for actual wrongdoing.) - 
> unfortunately, in a world where people get imprisoned and killed for all 
> sorts of things, there is little room for such dark humour... when all the 
> stakes have been taken down everywhere, we'll be able to laugh about this 
> joke again, perhaps.
> -a
> 
> 
> On 23.03.19 at 05:46, Keith Hart wrote:
>> "There is no excuse for such criminal product packaging. Anyone doing it or 
>> defending it should be burned at stake in any civilized country. The fact 
>> that it will not happen is the best statement about the times we live in."
>>  I agree. Thank you for the clarity of your writing in this thread, much the 
>> best that I have seen on this subject.


Re: rage against the machine

2019-03-24 Thread Florian Cramer
Having zero knowledge of airplane technology, I do not know whether the
following writeup/opinion piece on the 737 Max is a trustworthy source or
not.
It was written by a software developer (that I could verify) named Gregory
Travis who claims to have been a "pilot and aircraft owner for over thirty
years"
and who blogged on airplane engineering in the past:
https://drive.google.com/file/d/1249KS8xtIDKb5SxgpeFI6AD-PSC6nFA5/view

Travis suggests that the 737 MAX fiasco resulted from a combination of
market economics/cost-optimization management and software
being used to correct hardware design flaws.

Here's an extensive, selective quote from this document:

> "Over the years, market and technological forces pushed the 737 into
> larger versions with more electronic and mechanical complexity. This is
> not, by any means, unique to the 737. All airliners, enormous capital
> investments both for the industries that make them as well as the
> customers who buy them, go through a similar growth process. The majority
> of those market and technical forces allied on the side of economics, not
> safety. They were allied to relentlessly drive down what the industry
> calls 'seat-mile costs' – the cost of flying a seat from one point to
> another."
>
> To improve capacity and efficiency (I'm still paraphrasing the document),
> engines had to become physically larger: "problem: the original 737 had
> (by today’s standards) tiny little engines that easily cleared the ground
> beneath the wings. As the 737 grew and was fitted with bigger engines, the
> clearance between the engines and the ground started to get a little,
> umm, 'tight.' [...]
>
> With the 737 MAX the situation became critical. [...] The solution was to
> extend the engine up and well in front of the wing. However, doing so
> also meant that the centerline of the engine’s thrust changed. Now, when
> the pilots applied power to the engine, the aircraft would have a
> significant propensity to 'pitch up' – raise its nose. [...]
>
> Apparently the 737 MAX pitched up a bit too much for comfort on power
> application as well as at already-high-angles-of-attack. It violated that
> most ancient of aviation canons and probably violated the FAA’s
> certification criteria. But, instead of going back to the drawing board
> and getting the airframe hardware right (more on that below), Boeing’s
> solution was something called the 'Maneuvering Characteristics
> Augmentation System,' or MCAS.
> Boeing’s solution to their hardware problem was software."

Software that didn't work as expected.

- By itself, this story doesn't sound new, but (particularly to European
readers) like a flashback from more than twenty years ago
when Mercedes botched the aerodynamic design of its "A series" car (its
first entry into the compact car segment) and corrected it with
computerized Electronic Stability Control (ESC/ESP), a textbook example of
a cybernetic feedback-and-control system based on sensors and software.

Here is an article that explains the basics of Boeing's MCAS system, which
sounds similar to ESC/ESP indeed:
https://theaircurrent.com/aviation-safety/what-is-the-boeing-737-max-maneuvering-characteristics-augmentation-system-mcas-jt610/
(For lay people like me, the surprising bit was that MCAS "activates
automatically when [...] autopilot is off".)

-F

-- 
blog: https://pod.thing.org/people/13a6057015b90136f896525400cd8561
bio:  http://floriancramer.nl

Re: rage against the machine

2019-03-24 Thread Joseph Rabie
Dear all,

I find it noteworthy that the call to burn at the stake, opposed by Andreas, 
has been endorsed by male members of this list. Let us remember that it was 
invented as retribution against women accused of witchcraft, that is to say 
practises considered subversive by the male theocratic power structure of the 
time.

Burning at the stake was crude technology. The 20th century variant, crematoria 
and gas chambers, were far more efficient.

Language is never just language.

Joe.



> On 24 March 2019 at 11:49, Menno Grootveld wrote:
> 
> Friends, I would say that to be caught in a plane that is nosediving (and 
> apparently burning before it hit the ground) because of some outright 
> criminal behaviour by some idiots that think they can get away with not 
> fixing the bugs in their software and thereby putting the lives of their 
> passengers at risk is about the closest thing nowadays to being burned at the 
> stake, so no, I would not call it 'flippant rhetoric'. Actually I would say 
> we need more of this 'flippant rhetoric' to counter the threat of the extreme 
> right AND their puppet masters in the neo-liberal establishment. I'm afraid 
> it's about time we become a little bit less 'reasonable'...


Re: two 'meta' notes (was Re: rage against the machine)

2019-03-24 Thread Andreas Broeckmann

ted,

i'm ready to call this a disagreement and to leave it at that: you say 
that it is my remark that "misdirects [the discussion] away from what 
matters most"; and i, to the contrary, think that it is morlock's 
"figure of speech" that misdirects the attention from what a civilised 
and moral response might be, and that's what i tried to say: mind your 
language.


i guess that using "burning at the stake" as a figure of speech is 
easier done if you don't imagine someone actually getting hanged, or 
shot, or killed in the gas, or the like; if, like me, you think that 
these are not such unlikely options to happen literally, not 
figuratively, the air smells different; and that may be due, as you and 
i infer, to being "over-sensitive".


i have no idea why you impute that, because of this sensitivity, i might 
be allergic to power; we can discuss the Nuremberg or Srebrenica or NSU 
trials over a drink some day, but generally, i believe that long prison 
sentences (or the prospect thereof) can already do a lot, and, for me, 
we must not respond "proportionally" to crimes.


i completely agree with you that people working in any industry should 
be held accountable for their decisions and deeds (even if the damage 
done is sometimes immeasurable); even the serious threat of being taken 
to court over the marketing tricks that morlock described in his msg, 
would perhaps (as you also suggest) help to alleviate things.


finally, even more than i fear people who make flippant remarks about 
burning others at the stake, i am afraid of those who think that 
"broad-base popular support" is a confirmation for anything. but then 
that's just me.


i guess the message of this missive is: don't let your language be 
carried away by your anger, even if that anger is due.


be safe,
-a



On 23.03.19 at 18:00, tbyfield wrote:

(2)

On 23 Mar 2019, at 6:54, Andreas Broeckmann wrote:

friends, call me over-sensitive, but i think that nobody should be 
burned at the stake for anything in any country; i say this also 
because this flippant kind of rhetoric poisons the reasonable debate 
that is so urgently needed on the matters at issue here. (to the 
contrary, i am glad that some civilised countries find forms of 
punishment other than that for actual wrongdoing.) - unfortunately, in 
a world where people get imprisoned and killed for all sorts of 
things, there is little room for such dark humour... when all the 
stakes have been taken down everywhere, we'll be able to laugh about 
this joke again, perhaps.


Andreas, you're over-sensitive. Much as Brian's flight into abstraction 
misdirected discussion away from concrete facts and struggles, your 
focus on the brutality of Morlock's remark — which I'm pretty sure was a 
figure of speech, not a specific advocacy for burning at the stake over 
drawing and quartering or crucifixion — misdirects it away from what 
matters most: penetrating the corporate veils that limit liability. If 
multinational corporate sovereignty is to be a key part of the new 
global regime, we need concrete strategies for isolating and punishing 
corporate criminality. Boeing's reputation has suffered: another 
airline, Garuda, canceled a $6B order for ~50 737s, and more are likely 
to follow. But minimizing shareholder value isn't enough. We need 
regulatory systems with teeth as sharp as those used in war-crimes 
tribunals. Polite anti-corporate rhetoric won't change anything, but 
identifying specific culprits within corporations and making them pay 
dearly for their crimes will change everything. Best of all, it can be 
applied to other imponderables like massive-scale fraud, environmental 
degradation, arms manufacture, abuses of privacy, and all the rest. For 
that reason, it *will* have broad-base popular support, sooner or later. 
The first question is what will finally trigger it, and the second 
question is whether we've laid solid groundwork for effective 
progressive responses.


And that begs an important question that leftoids aren't prepared to 
answer because, in a nutshell, they're allergic to power: what *would* 
be appropriate punishments for people who, under color of corporate 
activity, engage in indiscriminate abuses of public trust.


Cheers,
Ted


two 'meta' notes (was Re: rage against the machine)

2019-03-23 Thread tbyfield

(1)

On 18 Mar 2019, at 22:24, Brian Holmes wrote:

Ted, I like how you look at disputes from all sides, both for the 
intrinsic
interest of the meta-discussion, and because you put a finger on the 
very
existence of the dispute. For me it boils down to the old question 
about
critique, what it is, how it works, why anyone would engage in such a 
thing.

 <...>

thanks for the meta, Brian


Brian, to put it more bluntly, when it comes to critical discussions of 
aviation forensics, you are — by your own standards — out of your 
depth and in the same boat as the legions of just-add-water experts who 
opine on every subject that's trending on social media. I'm hardly an 
expert, but I have spent years reading widely about how aviation has 
reconstructed humanity at every level, from the cognition of 
instrumentation design, to the history of crash-test dummies, to 
divergent philosophies for building failsafe systems, to debates about 
how aviation is transforming geopolitics and even history. Hence the 
mini annotated bibliography at the end of my mail. So it's funny to read 
that you 'look for something analogous in discursive spaces like this' 
and 'stand for a critique of the relations between capitalism and 
complex systems' — then thank *me* for being 'meta'?! Morlock's 
comparison of Boeing's marketing of critical safety features with luxury 
finishes on cars nailed it. More than that, it's the kind of insight 
that can and should become a rallying cry in efforts to rein in 
megacorps that treat human lives with leather gearshifts as fungible. I 
guess we could say that comparison happens in a 'discursive space,' but 
posh abstractions like that suggest this problem is somehow new and in 
need of vanguardist theorizing. It isn't and doesn't. On the supply 
side, this 737 fiasco is just one more chapter in longstanding labor 
struggles for safe workplaces. Much as the flight attendants' AFA union 
played a pivotal role in ending Trump's government shutdown, I suspect 
that combined statements from the AFA and APFA (the American Airlines FA 
union) that their members won't be forced to fly in 737s sparked the 
Trump admin's sudden turnaround on the 737. On the demand side, the 
tradeoff between safety and 'extra' features was clear enough in 1954 to 
be the punchline of the Daffy Duck cartoon "Design for Leaving": after 
Porky Pig pushes the 'big wed button' marked IN CASE OF TIDAL WAVE in 
his newly automated home, elevating it hundreds of feet 
on a retractable pylon, Daffy Duck appears outside his door in a 
helicopter and says, "For a small price I can install this little blue 
button to get you down."


https://www.dailymotion.com/video/x34az2i

More generally, entire swaths of current 'technology' debates — about 
automation and IoT, 'adversarial' this and that, how advertising is 
subverting democracy, etc, etc – are naive historical reenactments of 
front-page debates from the mid-1950s. Lots of factors enable that 
naivete, and voguish talk about 'complexity' is one of them. It's not an 
accident that complexity became a pop phenomenon starting in the '80s: 
corporations love it because it emphasizes the power of inexorable and 
inevitable systems rather than our 'simple' power to change them. Sure, 
the rise of computation made the math needed to explore complexity 
more widely accessible; but the idea that what matters is the secret 
mathematical kinship between the patterns of capillaries in our retinas 
and the structure of whatever we're looking at — tree roots or urban 
spaces or networks — is mostly mystification, barely a step above 
staring at a fractal screensaver. So, when you say you 'stand for a 
critique of the relations between capitalism and complex systems,' I 
agree — just not in the way you intended. Effective critique stands 
*against* that mystification.




Re: rage against the machine

2019-03-23 Thread Andreas Broeckmann
friends, call me over-sensitive, but i think that nobody should be 
burned at the stake for anything in any country; i say this also because 
this flippant kind of rhetoric poisons the reasonable debate that is so 
urgently needed on the matters at issue here. (to the contrary, i am 
glad that some civilised countries find forms of punishment other than 
that for actual wrongdoing.) - unfortunately, in a world where people 
get imprisoned and killed for all sorts of things, there is little room 
for such dark humour... when all the stakes have been taken down 
everywhere, we'll be able to laugh about this joke again, perhaps.

-a


On 23.03.19 at 05:46, Keith Hart wrote:
"There is no excuse for such criminal product packaging. Anyone doing 
it or defending it should be burned at stake in any civilized country. 
The fact that it will not happen is the best statement about the times 
we live in."


  I agree. Thank you for the clarity of your writing in this thread, 
much the best that I have seen on this subject.


Re: rage against the machine

2019-03-22 Thread Keith Hart
"There is no excuse for such criminal product packaging. Anyone doing it or
defending it should be burned at stake in any civilized country. The fact
that it will not happen is the best statement about the times we live in."

 I agree. Thank you for the clarity of your writing in this thread, much
the best that I have seen on this subject.

It is hard to decide which is more criminal and disgusting, the corporate
plutocracy, the political stooges who cover up for them or the narcissism
of the western media.

After the "too big to fail" banks, the whole contemporary transport system,
starting with cars, planes and the energy industry.

My second home is in South Africa where the power monopoly, Eskom, whose
origins lie in Afrikaner state capitalism, is fighting to prevent a total
blackout in the country which would shut it down for at least three weeks
(remember New York?). Weakened by Zuma's kleptocracy, operating antiquated
plant and now thrown into crisis by the disastrous Cyclone Idai (the
Southern hemisphere's worst ever, coming hard on two consecutive years of
drought) disrupting electricity supplies from Northern neighbours, Eskom is
inflicting ever more frequent and erratic power cuts on us all.

Johannesburg workers have to be sent home early before they are locked in
for the night by electric gates that won't open. When they get out, the
traffic is grid locked anyway. The ANC government faces a general election
in May.

Mass drowning has started in Beira, unsurprisingly not yet on the
Northeastern seaboard.

https://www.news24.com/Africa/News/cyclone-idai-africa-now-has-an-inland-ocean-where-villages-once-stood-20190322

Apocalypse now and a new twist to the Heart of Darkness story. Do you think
it will get any media space with the Mueller report coming out and the
denouement of Brexit?

Keith


Re: rage against the machine

2019-03-22 Thread Morlock Elloi
It looks like Boeing hired car salesmen. These are the options that you 
may choose to add (and pay extra, like heated seats) with your brand new 
737 MAX purchase:


Option #32: "Angle of attack disagree light": informs pilots about 
discrepancy between nose direction and airflow, in pre-stall condition. 
May mean faulty sensors.


Option #47: "AOA indicator" similar but different. Provides continual 
visualization of airflow vs. nose.
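To make the "disagree" idea concrete: the check behind such an annunciator is, at heart, a comparison of two redundant sensors. A minimal illustrative sketch (the threshold value here is an assumption invented for the example, not a documented Boeing figure):

```python
# Illustrative only: the kind of comparison an "AOA disagree" annunciator
# performs. The threshold is an invented value for this sketch, not a
# Boeing specification.

DISAGREE_THRESHOLD_DEG = 5.0  # assumed for illustration

def aoa_disagree(left_vane_deg: float, right_vane_deg: float) -> bool:
    """True when the two angle-of-attack vanes differ enough to suggest
    that at least one sensor is feeding the flight computer bad data."""
    return abs(left_vane_deg - right_vane_deg) > DISAGREE_THRESHOLD_DEG
```

The bitter point of the post above is that even this trivial a cross-check was packaged as a paid extra rather than shipped as standard.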


Neither Ethiopian Airlines nor Lion Air chose to pay for these ... options.

There is no excuse for such criminal product packaging. Anyone doing it 
or defending it should be burned at stake in any civilized country. The 
fact that it will not happen is the best statement about the times we 
live in.



ps. I made up option numbers. In reality they are likely 3-digit.


Re: rage against the machine

2019-03-18 Thread Nina Temporär
"Rage against machines" -
what a perfect alternative title for Article 13, the new EU Copyright
Directive that is up for vote next week, especially given the extra twist
that it is machines executing the human hatred of technology's contribution
to enhanced memetic evolution, of which humans foremost fear that it will
reduce them to underpaid prompters ("You will not replace us") and demask
the notorious concept of the lone inventor.

If you're hot on reflecting on the implementation of automated control
systems, how about switching to discussing upload filters? :) (Or was there
maybe some discussion on that already that I missed?)

It seems that if Art. 13 gets passed and automatic scans for resemblances
to already existing creations are installed, any ideas attempting to enter
the system will soon be treated as potential enemies...

T h a t  will be the moment from when on Morlock Elloi will be very right:
There will be computation - before we'll be even able to talk capitalism
or democracy.


N


> On 19.03.2019 at 03:24, Brian Holmes wrote:
> 
> On Mon, Mar 18, 2019 at 3:25 PM tbyfield wrote:
>  
> It seems like Morlock, who I'd bet has forgotten more about AI than Brian 
> knows, is using it in a loose 'cultural' way; whereas Brian, whose 
> bailiwick is cultural, intends AI in a more ~technical way.
> 
> Ted, I like how you look at disputes from all sides, both for the intrinsic 
> interest of the meta-discussion, and because you put a finger on the very 
> existence of the dispute. For me it boils down to the old question about 
> critique, what it is, how it works, why anyone would engage in such a thing.
> 
> What I care about here is not AI, nor culture in the literary and artistic 
> sense, but attention to reality in a time when lots of things are going 
> wrong. Reality is hard to grasp: you have to look at the relation of human 
> actors with technical systems in dynamic situations shaped by environmental 
> factors as well conflicting strategic aims. The Boeing case has all that, 
> it's typical of the present. Can such problems be resolved? Or do we just 
> vent our rage against the machine?
> 
> In Morlock's writing I see two things: a justified critique of the reckless 
> speed with which automated control systems are being implemented, plus the 
> continual escalation of an aggressive rhetoric that blurs any distinction 
> whatsoever. The larger cultural/political context and the interplay of 
> conflicting strategies get left out of this entirely: according to his own 
> declarations, things like capitalism or democracy don't exist for Morlock, 
> only computation. That's a tendentially know-nothing approach, and when he 
> throws out any attempt to deal with the technical systems, what's left is the 
> inflammatory rhetoric. There's a good reason not to like it at this 
> particular moment, when every serious attempt at government is blocked by 
> outraged expressions of passions via networked media.
> 
> The story that emerges from the Max 8 crashes is not that the pilots were 
> looking for a fire axe to smash the AI. Instead, most of them were fully 
> aware of the problem. Acting collectively, they shared their knowledge and 
> learned to turn off the poorly conceived patch that was supposed to make up 
> for a bad design. They were struggling against automation, for sure. But they 
> were also struggling against the strategy of a corporation that would do 
> anything to boost its profits -- in this case, first by building a more 
> fuel-efficient plane that wants to nose-dive on take-off, and second, by 
> claiming that crews wouldn't even need training to fly such a thing. 
> Fortunately, the pilots still paid some attention to reality.
> 
> I respect their craft, I'm alive because of it. And you know, unlikely as it 
> may seem, I look for something analogous in discursive spaces like this one. 
> What lurks behind computation and the illusions of control is something more 
> elemental: a compulsive form of greed that denies the fundamentally suicidal 
> nature of its short-term successes. I stand for a critique of the relations 
> between capitalism and complex systems.
> 
> thanks for the meta, Brian
> 
> 
> #  distributed via : no commercial use without permission
> #is a moderated mailing list for net criticism,
> #  collaborative text filtering and cultural politics of the nets
> #  more info: http://mx.kein.org/mailman/listinfo/nettime-l
> #  archive: http://www.nettime.org contact: nett...@kein.org
> #  @nettime_bot tweets mail w/ sender unless #ANON is in Subject:


Re: rage against the machine

2019-03-18 Thread tbyfield
I'm going to channel a bit of Morlock and Keith, for whom barbs aimed at 
the list have been a semi-regular feature of their emails, because no 
one who's weighed in with an opinion seems to know much about aviation. 
And why would they? I'm not saying anyone should have immersed 
themselves in the arcana of aerial malfunctions, but, absent detailed 
knowledge, discussion degenerates into woolly ideological rambling and 
ranting.


Take this, from Brian's reply to Morlock's original message:


The automatic function is called the Maneuvering Characteristics
Augmentation System (MCAS). Its sole purpose is to correct for an
upward pitching movement during takeoff, brought on by the
decision to gain fuel efficiency by using larger engines. At stake
is a feedback loop triggered by information from Angle of Attack
sensors - nothing that could reasonably be described as AI. The
MCAS is a bad patch on a badly designed plane. In addition to the
failure to inform pilots about its operation, the sensors
themselves appear to have malfunctioned during the Lion Air crash
in Indonesia.


This may be a nice distillation of a specific issue, but it lacks the 
kind of contextual knowledge that Brian values in — and often imposes 
on — areas he has thought about in depth. Like, where does this issue 
sit in a range of alternative schools of thought regarding design, 
integration, and implementation? What are the historical origins of 
Boeing's approach, and when and why did it diverge from other 
approaches? How do those other schools of thought relate to the 
different national / regional traditions and historical moments that 
shaped the relevant institutions? More specifically, how do other plane 
manufacturers address this kind of problem? Where else in the 737 might 
Boeing's approach become an issue? How do these various approaches 
affect the people, individually and collectively, who work with them? 
How do the FAA and other regulatory structures frame and evaluate this 
kind of metaphorical 'black box' in aviation design? Questions like this 
are part of the conceptual machinery of critical discussion. Without 
questions like this, specific explanations are basically an exercise in 
'de-plagiarizing' better-informed sources — rewording and reworking 
more ~expert explanations — to give illusory authority to his main 
point, that 'AI' has nothing to do with it.
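As a concrete anchor for the term 'feedback loop' in the quoted passage: the published descriptions of MCAS amount to a sensor-triggered trim command. The following is a minimal toy sketch with invented names and thresholds; it is not Boeing's code, and the real MCAS logic involves flap position, Mach number, and other inputs.

```python
# Illustrative sketch only: a toy feedback loop in the spirit of the MCAS
# description quoted above. All names and thresholds are invented; this is
# not Boeing's actual logic.

def mcas_like_trim(aoa_sensor_deg: float,
                   stall_threshold_deg: float = 14.0,
                   trim_step_deg: float = 0.6) -> float:
    """Return a nose-down trim command when the (single!) angle-of-attack
    sensor reads above a threshold. A sensor stuck at a high value keeps
    commanding nose-down trim -- the failure mode under discussion."""
    if aoa_sensor_deg > stall_threshold_deg:
        return -trim_step_deg  # command nose-down stabilizer trim
    return 0.0

# A stuck-high sensor produces repeated nose-down commands:
faulty_readings = [22.5] * 5  # sensor frozen at an implausible value
commands = [mcas_like_trim(r) for r in faulty_readings]
assert all(c < 0 for c in commands)
```

The sketch makes the narrow point visible: nothing here is 'AI' in the technical sense, just a threshold on one sensor driving an actuator, which is exactly why a single bad input is so consequential.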


But Morlock didn't say 'the relevant system directly implements AI.' He 
can correct me if I'm wrong, but he seemed to be making a more general 
point that faith in 'AI' has fundamentally transformed aviation. More 
specifically, it has redrawn the lines between airframe (basically, the 
sum total of a plane's mechanical infrastructure) and its avionics (its 
electronic systems, more or less) to such a degree that they're no 
longer distinct. But that happened several decades ago; IIRC, as of 1980 
or so some huge fraction of what was then the US's most advanced 
warplane, like 30% or 60% of them, were grounded at any given moment for 
reasons that couldn't be ascertained with certainty because each one 
needed a ground crew of 40–50 people, and the integration systems 
weren't up to the challenge.


Obviously, quite a lot has happened since then, and a big part of it has 
to do with the growing reliance on computation in every aspect of 
aviation. In short, the problem isn't limited to the plane as a 
technical object: it also applies to *the entire process of conceiving, 
designing, manufacturing, and maintaining planes*. This interpenetration 
has become so deep and dense that — at least, this is how I take 
Morlock's point — Boeing, as an organization, has lost sight of its 
basic responsibility: a regime — organizational, conceptual, technical 
— that *guarantees* their planes work, where 'work' means reliably 
move contents from point A to B without damaging the plane or the 
contents.


OK, so AI... What we've got in this thread is a failure to communicate, 
as Cool Hand Luke put it — and one that's hilariously nettime. It 
seems like Morlock, who I'd bet has forgotten more about AI than Brian 
knows, is using it in a loose 'cultural' way; whereas Brian, whose 
bailiwick is cultural, intends AI in a more ~technical way. But that 
kind of disparity in register applies to how 'AI' is used pretty much 
everywhere. In practice, 'AI' is a bunch of unicorn and rainbow stickers 
pasted onto a galaxy of speculative computing practices that are being 
implemented willy-nilly everywhere, very much including the aviation 
sector. You can be *sure* that Boeing, Airbus, Bombardier, Embraer, 
Tupolev, and Comac are awash in powerpoints pimping current applications 
and future promises of AI in every aspect of their operations: financial 
modeling, market projections, scenario-planning, capacity buildout, 
materials sourcing, quality assurance, parametric design, flexible 
manufacturing processes, maintenance and upgrade logistics, etc, etc, 
and — last but not 

Re: rage against the machine

2019-03-18 Thread Joseph Rabie
> On 17 March 2019 at 20:48, Morlock Elloi wrote:
> 
> Note that autonomous vehicles are becoming affordable assassination 
> instruments. It would have cost a fortune a decade ago to create a robotic 
> suicide vehicle bomber, so humans were used. Today anyone with basic skills 
> can buy one of these and hack the controls. It's a 100% software job. Add 
> some ML and the vehicle can pick victims on its own ("dark skinned males" 
> or "carrying yoga mat" or "MAGA hat", etc.)


This calls for protest from would-be martyrs deprived of the power and glory of 
self-magnified destruction.

Re: rage against the machine

2019-03-17 Thread Patrick Lichty
Yes, and sorry for being silent so long.

"It would have cost a fortune a decade ago to create a robotic suicide vehicle 
bomber, so humans were used. Today anyone with basic skills can buy one of these 
and hack the controls. It's a 100% software job. Add some ML and the vehicle can 
pick victims on its own ("dark skinned males" or "carrying yoga mat" or "MAGA 
hat", etc.)”

Oh, for sure. I can just imagine Charlottesville or London or Paris with an 
automatic system.  Pick out women with hijabs, or white men with MAGA hats, or 
anyone with a color below the paper bag test.

Remember that a significant percentage of these Western ones are a small 
but significant part of a dying hegemony that feels it is being 
disenfranchised and disemployed, etc. In Charlottesville and Christchurch, the 
call was “You will not replace us”. But the more apt critique is that it isn’t 
the brown or Islamic body that might actually be doing a thing.

It might be the robots, run by their upper class brethren.



> On Mar 18, 2019, at 12:23 AM, John Young  wrote:
> 
> Safety in the real world is like privacy online, far less effective than 
> advertised. Machines, like buildings and infrastructure, come with inherent 
> hazards: Deaths and injuries are acceptable costs of "convenience," 
> "benefits," "jobs," "progress."
> 
> Finance and insurance are like autocratic national security: secretive, 
> vampiric and bloodthirsty, thriving on the misfortune of the unlucky who are 
> targeted by global marketing enterprises and profitability (branding, 
> influencing, celebritization). FirstLook.media is a shining example; gander at 
> its preening slather underwritten by RU-grade empathy and oligarchy.
> 
> Professionals, all of them, are predators state-licensed to provide assurance 
> the public interest is protected, though casualties are to be expected, 
> normalized, in law, medicine, education, construction, incarceration, 
> journalism - racketeer-influenced organizations - above all in government 
> legislation and regulation subject to predatory practices camouflaged by 
> elections and FOI pretense.
> 
> .
> 
> At 03:48 PM 3/17/2019, you wrote:
>> This is a deeply ideological and political issue, not a technical one. Inserting 
>> code written by middlemen between humans and reality empowers only the 
>> middlemen. Humans are presented with a fantasy that adheres to reality when, 
>> and to the degree that, the middlemen decide.
>> 
>> There is one small step between this and removing all agency from humans (if 
>> not already done). It's like when a company wants to sack someone: first they 
>> make sure that the sackee's job is irrelevant (someone else controls and does 
>> it) and that the sackee cannot do damage.
>> 
>> Well, you are being sacked.
>> 
>> Note that autonomous vehicles are becoming affordable assassination 
>> instruments. It would have cost a fortune a decade ago to create a robotic 
>> suicide vehicle bomber, so humans were used. Today anyone with basic skills 
>> can buy one of these and hack the controls. It's a 100% software job. Add 
>> some ML and the vehicle can pick victims on its own ("dark skinned males" or 
>> "carrying yoga mat" or "MAGA hat", etc.)
>> 
>> 
>>> "But Boeing isn't planning to overhaul its training procedures. And neither
>>> the F.A.A., nor the European Union Aviation Safety Agency, are proposing
>>> additional simulator training for pilots, according to a person familiar
>>> with the deliberations. Instead, the regulators and Boeing agree that the
>>> best way to inform pilots about the new software is through additional
>>> computer-based training, which can be done on their personal computers."
>> 
> 
> 
> 


Re: rage against the machine

2019-03-17 Thread John Young
Safety in the real world is like privacy online, far less effective 
than advertised. Machines, like buildings and infrastructure, come 
with inherent hazards: Deaths and injuries are acceptable costs of 
"convenience," "benefits," "jobs," "progress."


Finance and insurance are like autocratic national security: 
secretive, vampiric and bloodthirsty, thriving on the misfortune of the 
unlucky who are targeted by global marketing enterprises and 
profitability (branding, influencing, celebritization). 
FirstLook.media is a shining example; gander at its preening slather 
underwritten by RU-grade empathy and oligarchy.


Professionals, all of them, are predators state-licensed to provide 
assurance the public interest is protected, though casualties are to 
be expected, normalized, in law, medicine, education, construction, 
incarceration, journalism - racketeer-influenced organizations - 
above all in government legislation and regulation subject to 
predatory practices camouflaged by elections and FOI pretense.


.

At 03:48 PM 3/17/2019, you wrote:
This is a deeply ideological and political issue, not a technical one. 
Inserting code written by middlemen between humans and reality 
empowers only the middlemen. Humans are presented with a fantasy that 
adheres to reality when, and to the degree that, the middlemen decide.


There is one small step between this and removing all agency from 
humans (if not already done). It's like when a company wants to sack 
someone: first they make sure that the sackee's job is irrelevant 
(someone else controls and does it) and that the sackee cannot do damage.


Well, you are being sacked.

Note that autonomous vehicles are becoming affordable assassination 
instruments. It would have cost a fortune a decade ago to create a robotic 
suicide vehicle bomber, so humans were used. Today anyone with basic 
skills can buy one of these and hack the controls. It's a 100% 
software job. Add some ML and the vehicle can pick victims on its 
own ("dark skinned males" or "carrying yoga mat" or "MAGA hat", etc.)




"But Boeing isn't planning to overhaul its training procedures. And
neither the F.A.A., nor the European Union Aviation Safety Agency, are
proposing additional simulator training for pilots, according to a
person familiar with the deliberations. Instead, the regulators and
Boeing agree that the best way to inform pilots about the new software
is through additional computer-based training, which can be done on
their personal computers."







Re: rage against the machine

2019-03-17 Thread Morlock Elloi
This is a deeply ideological and political issue, not a technical one. 
Inserting code written by middlemen between humans and reality empowers 
only the middlemen. Humans are presented with a fantasy that adheres to 
reality when, and to the degree that, the middlemen decide.


There is one small step between this and removing all agency from humans 
(if not already done). It's like when a company wants to sack someone: first 
they make sure that the sackee's job is irrelevant (someone else 
controls and does it) and that the sackee cannot do damage.


Well, you are being sacked.

Note that autonomous vehicles are becoming affordable assassination 
instruments. It would have cost a fortune a decade ago to create a robotic 
suicide vehicle bomber, so humans were used. Today anyone with basic 
skills can buy one of these and hack the controls. It's a 100% software 
job. Add some ML and the vehicle can pick victims on its own ("dark 
skinned males" or "carrying yoga mat" or "MAGA hat", etc.)




"But Boeing isn’t planning to overhaul its training procedures. And
neither the F.A.A., nor the European Union Aviation Safety Agency, are
proposing additional simulator training for pilots, according to a
person familiar with the deliberations. Instead, the regulators and
Boeing agree that the best way to inform pilots about the new software
is through additional computer-based training, which can be done on
their personal computers."




Re: rage against the machine

2019-03-17 Thread Olia Lialina
According to the NY Times, 737 MAX 8 pilots were trained on (their own?) iPads. 

What's next? Bring your own cockpit? Like the car-sharing interfaces suggested 
by excited UX students all over the world these days. 

Such news scares me more than all the AI horror stories together. This 
banalization or "desktopization"* of high-responsibility jobs should be seriously 
questioned. Even when it is technically possible, even if the “magic pane of 
glass” has more processing power than the onboard computer, even if the flight 
deck software is written in JavaScript, these are not sufficient reasons for a 
pilot to have it open in one of her browser tabs, even for training. 

may complex systems stay complex in the eyes of their operators

https://www.nytimes.com/2019/03/16/business/boeing-max-flight-simulator-ethiopia-lion-air.html

"For many new airplane models, pilots train for hours on giant, 
multimillion-dollar machines, on-the-ground versions of cockpits that mimic the 
flying experience and teach them new features. But in the case of the Max, many 
pilots with 737 experience learned about the plane on an iPad."

" But Boeing isn’t planning to overhaul its training procedures. And neither 
the F.A.A., nor the European Union Aviation Safety Agency, are proposing 
additional simulator training for pilots, according to a person familiar with 
the deliberations. Instead, the regulators and Boeing agree that the best way 
to inform pilots about the new software is through additional computer-based 
training, which can be done on their personal computers."

*in 2014 i wrote about desktopization of remote piloted aircrafts for Interface 
Critique http://contemporary-home-computing.org/RUE/

 Olia Lialina wrote 

>i was rereading today this 5 y. o. article about a decade old accident
>
>https://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash/amp
>
> following are parts of  IV. Flying Robots and the article's final statement
>
>It takes an airplane to bring out the worst in a pilot.
>[... ] 
>Wiener pointed out that the effect of automation is to reduce the cockpit 
>workload when the workload is low and to increase it when the workload is 
>high. Nadine Sarter, an industrial engineer at the University of Michigan, and 
>one of the pre-eminent researchers in the field, made the same point to me in 
>a different way: “Look, as automation level goes up, the help provided goes 
>up, workload is lowered, and all the expected benefits are achieved. But then 
>if the automation in some way fails, there is a significant price to pay. We 
>need to think about whether there is a level where you get considerable 
>benefits from the automation but if something goes wrong the pilot can still 
>handle it.”
>
>
>Sarter has been questioning this for years and recently participated in a 
>major F.A.A. study of automation usage, released in the fall of 2013, that 
>came to similar conclusions. The problem is that beneath the surface 
>simplicity of glass cockpits, and the ease of fly-by-wire control, the designs 
>are in fact bewilderingly baroque—all the more so because most functions lie 
>beyond view. Pilots can get confused to an extent they never would have in 
>more basic airplanes. When I mentioned the inherent complexity to Delmar 
>Fadden, a former chief of cockpit technology at Boeing, he emphatically denied 
>that it posed a problem, as did the engineers I spoke to at Airbus. Airplane 
>manufacturers cannot admit to serious issues with their machines, because of 
>the liability involved, but I did not doubt their sincerity. Fadden did say 
>that once capabilities are added to an aircraft system, particularly to the 
>flight-management computer, because of certification requirements they become 
>impossibly expensive to remove. And yes, if neither removed nor used, they 
>lurk in the depths unseen. But that was as far as he would go.
>
>
>Sarter has written extensively about “automation surprises,” often related to 
>control modes that the pilot does not fully understand or that the airplane 
>may have switched into autonomously, perhaps with an annunciation but without 
>the pilot’s awareness. Such surprises certainly added to the confusion aboard 
>Air France 447. One of the more common questions asked in cockpits today is 
>“What’s it doing now?” Robert’s “We don’t understand anything!” was an extreme 
>version of the same. Sarter said, “We now have this systemic problem with 
>complexity, and it does not involve just one manufacturer. I could easily list 
>10 or more incidents from either manufacturer where the problem was related to 
>automation and confusion. Complexity means you have a large number of 
>subcomponents and they interact in sometimes unexpected ways. Pilots don’t 
>know, because they haven’t experienced the fringe conditions that are built 
>into the system. 
>
>[... ] 
> At a time when accidents are extremely rare, each one becomes a one-off 

Re: rage against the machine

2019-03-15 Thread Morlock Elloi

It's not just about fun.

If a company/manufacturer/authority samples 'all' possible 
circumstances, and embeds 'required' reactions in the machine, then 
several things happen in the arena of diminishing agency:


- the logic unconditionally reflects the authority's ideology, and not that of 
the interacting human. Yes, there is ideology in driving cars and 
flying airplanes. See https://en.wikipedia.org/wiki/Trolley_problem . 
One corollary of this is that the human is stripped of the possibility to do 
evil, and of the choice of not doing it (effectively ceasing to be human).


- the human's sensory and cognitive apparatus becomes irrelevant. If you 
don't decide how to drive, fly, cook, fuck (today) and many more things 
tomorrow, WHAT THE FUCK ARE YOU GOING TO BE DECIDING ABOUT? Tea flavors? 
Human activities are finite.


- the human interfacing with the machine becomes a 'user'. Just as you cannot 
re-arrange a web page or modify its interaction flow, the same now happens 
with cars and airplanes. And everything else.


Do you realize how many fewer human faces you see in your waking hours 
with your attention focused on the handset? Life becomes UI/UX-ed, 
gamified, by someone else. The main purpose of the machines has become 
agency transfer. You will be sucked dry.




permutation of the devices we just discussed. This annoys some drivers
who think it deprives them of fun, of the opportunity to sharpen and
demonstrate their driving skills.")




Re: rage against the machine

2019-03-15 Thread Marcell Mars aka Nenad Romic
we still don't know what happened with the B737 Max 8 but this thread seemed like
a good one to post references about machines, automation, people and
disasters..

some people some time ago thought and felt that technical innovations
should be first made comprehensible in order to convince people to buy
them: https://www.youtube.com/watch?v=K4JhruinbWc

it took me some time to find what i remembered and what i needed for
explaining the concept of techno-cultural moment (totally different story
but nvm) and those were the two blog posts jean-louis gassée, once cto of
apple and later the founder of beos, wrote after the scandal, around 2010, with
prius abs brakes software accelerating the car at the moment when it was
supposed to brake. ups. (some blog posts one remembers for almost a
decade...)

mondaynote.com went through a few major redesigns in the last 12 years.. none of
the advanced searches helped to find it in the archives.. only after a
while i realized i could go to wikipedia to check out *when* the scandal
happened and then go back hoping gassée was writing about it when it was in
the news.. and he did.. archive.org did a good job in archiving and
timestamping it...

here are the two blog posts:

http://web.archive.org/web/20100416043943/http://www.mondaynote.com/2010/02/07/soft-brakes-on-the-prius/
(“To summarize: braking isn’t braking anymore, especially in a hybrid. Hitting
the brake pedal triggers software processes that involve optimizing kinetic
energy recovery while maintaining safety and comfort in a wide range of
circumstances, including slippery roads and panic braking. [..] The brake
system and engine braking software has been “improved”, yielding better
kinetic energy recovery by sacrificing some “classical” braking
performance.”)

http://web.archive.org/web/20100519034751/http://www.mondaynote.com/2010/03/14/software-and-brakes-part-ii/
(this one is on the car differential (great explanation in the video pasted above)..
"Over time, “resolution” has increased everywhere: sensors capture more
delicate nuances, actuators offer finer control steps, software models of
car motion become more detailed and Moore’s Law makes electronics less
expensive. That’s why all modern cars feature some permutation of the
devices we just discussed. This annoys some drivers who think it deprives
them of fun, of the opportunity to sharpen and demonstrate their driving
skills.")
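
the brake-blending idea gassée describes, where the pedal becomes a software request split between regenerative and friction braking, can be sketched as a toy function (invented numbers, not Toyota's actual control law):

```python
# A very rough sketch of brake blending in a hybrid: the pedal no longer
# maps directly to friction brakes; software splits the request between
# regenerative and friction braking. Numbers and names are invented.

def blend_braking(pedal_fraction: float,
                  regen_capacity_fraction: float = 0.3):
    """Split a brake request (0..1) between regenerative and friction
    braking, preferring regeneration up to its capacity."""
    regen = min(pedal_fraction, regen_capacity_fraction)
    friction = pedal_fraction - regen
    return regen, friction

# Light braking is served entirely by regeneration...
assert blend_braking(0.2) == (0.2, 0.0)
# ...while hard braking falls back to friction for the remainder.
regen, friction = blend_braking(0.9)
assert regen == 0.3 and abs(friction - 0.6) < 1e-9
```

the point of the sketch is only that the pedal-to-brake mapping now passes through policy decisions in software, which is where the prius problem reportedly lived.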



On Fri, Mar 15, 2019 at 11:12 AM tbyfield  wrote:

> >>> On 14 Mar 2019, at 17:43, Morlock Elloi wrote:
>
> >> On 14 Mar 2019, at 20:26, Olia Lialina wrote:
>
> > On 14 Mar 2019, at 21:58, Brian Holmes wrote:
>
> nettime trifecta  
>
> cheers,
> t
>

Re: rage against the machine

2019-03-15 Thread James Wallbank

Hi All,

This circumstance (increasing complexity introducing critical errors, 
unforeseeable by any one developer) is equally true in wider human society.


Individual consumers, businesses and corporations are, effectively, 
subroutines, modules or components of a larger, complex mechanism that 
is the global polity. Dealing successfully with complex systems is just 
not what we humans collectively do.


While this is a significant concern when it comes to self-driving cars, 
self-targeting bombs, or self-crashing aeroplanes, it's considerably 
more pressing when we think about climate. Unless we can develop 
entirely new systems of governance, that don't just come to cleverer 
conclusions, but do so because they are motivated by different factors, 
then a crash is coming. But I think we all know this.


Fridays for the future.

James
=

On 15/03/2019 05:29, Morlock Elloi wrote:
This is the key. Designers do not understand the impact of the complexity 
that emerges from combining relatively simple components. This is 
especially amplified in real-time processing of multiple inputs.


In a completely different field (packet switching from millions of end 
points) we had to design a separate monitoring system because it was 
impossible to understand what our own system was doing in real time. 
The monitoring code was almost as complex as the switching code. We 
are talking less than 100K lines each.


Airliner software modules run to millions of lines of code. My assessment 
is that human life should not depend on anything with more than 50K lines 
of code total, period. Anyone claiming that there are proper testing 
procedures for huge systems is either a liar or an idiot. Enterprise 
software contractors are often both. The general public has not the 
slightest idea of the dismal state of the software development industry.




Sarter said, “We now have this systemic problem with complexity, and it
does not involve just one manufacturer.





Re: rage against the machine

2019-03-15 Thread tbyfield
>>> On 14 Mar 2019, at 17:43, Morlock Elloi wrote:

>> On 14 Mar 2019, at 20:26, Olia Lialina wrote:

> On 14 Mar 2019, at 21:58, Brian Holmes wrote:

nettime trifecta  

cheers,
t


Re: rage against the machine

2019-03-14 Thread Morlock Elloi
This is the key. Designers do not understand the impact of the complexity 
that emerges from combining relatively simple components. This is 
especially amplified in real-time processing of multiple inputs.


In a completely different field (packet switching from millions of end 
points) we had to design a separate monitoring system, because it was 
impossible to understand what our own system was doing in real time. The 
monitoring code was almost as complex as the switching code. We are 
talking less than 100K lines each.
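The split described above can be sketched in miniature: a data path that does nothing but forward and bump counters, and a separate observer that snapshots those counters to compute per-interval rates without touching the forwarding logic. This is a toy illustration of the architecture only; all names and numbers are invented, not the actual system.

```python
import random
import threading
from collections import Counter

class Switch:
    """Toy packet switch: the 'data path' only forwards and bumps counters."""
    def __init__(self):
        self.counters = Counter()
        self.lock = threading.Lock()

    def forward(self, port):
        with self.lock:
            self.counters[port] += 1

class Monitor:
    """Separate monitoring path: snapshots the counters periodically and
    derives per-interval rates, without touching the forwarding code."""
    def __init__(self, switch):
        self.switch = switch
        self.last = Counter()

    def sample(self):
        with self.switch.lock:
            snap = Counter(self.switch.counters)
        rates = {p: snap[p] - self.last[p] for p in snap}
        self.last = snap
        return rates

sw = Switch()
mon = Monitor(sw)
for _ in range(100):
    sw.forward(random.randrange(4))
rates = mon.sample()
assert sum(rates.values()) == 100  # the monitor saw every forwarded packet
```

The design point is that the monitor shares no logic with the switch, so it can report what the system is doing even when the switching code itself is opaque.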


Airliner software modules run to millions of lines of code. My assessment is 
that human life should not depend on anything with more than 50K lines of 
code total, period. Anyone claiming that there are proper testing 
procedures for huge systems is either a liar or an idiot. Enterprise 
software contractors are often both. The general public has not the 
slightest idea of the dismal state of the software development industry.
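A budget like the 50K-line figure above is only enforceable if you can measure it; a minimal sketch of counting physical source lines under a tree follows. The file extensions and the threshold are illustrative, and physical line count is of course a crude proxy for complexity.

```python
import os

def total_loc(root, exts=(".c", ".h", ".py")):
    """Count physical source lines under root for the given extensions."""
    total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    total += sum(1 for _ in f)
    return total

# Flag any tree that exceeds the post's 50K-line budget:
# if total_loc("src") > 50_000: print("over budget")
```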




Sarter said, “We now have this systemic problem with complexity, and it
does not involve just one manufacturer.”




Re: rage against the machine

2019-03-14 Thread Prem Chandavarkar

> On 15-Mar-2019, at 2:28 AM, Brian Holmes  wrote:
> 
> There is much to critique in the operations of Boeing and of the FAA. But 
> it's not about AI taking full control. 

https://www.architecturalrecord.com/articles/13464-structural-design-and-thinking-in-approximations
 


This short essay by Robert Silman is about a totally different field - structural 
engineering - but the point it makes about our relationship with computers and 
thinking in approximations is significant. Humans can get an overall ‘feel’ for 
a system, grasping its holistic character far more efficiently than a computer 
can - and doing this requires thinking in approximations.

The challenge with the computer is twofold:

- Its capabilities are based in computing power rather than contextual 
understanding, and the learning and decision-making in its intelligence come 
from harnessing this computing power to discern sensible patterns within a host 
of randomly collected factors. The system works well when it is inserted into 
a context within the predictable range of its prior learning, but put the 
system into a complex non-linear context (like wind flow, climate, or 
collective social choice) and every now and then it will hit a situation that 
lies outside this predictable range. It then falls apart, because its analysis 
is based on finding correlations rather than building empathy or understanding, 
and it has no way of assessing whether the error it finds is minor or major.

- It is expected that the human will intervene in such situations. But because 
these situations are so rare and random, the human becomes habituated to the 
routine reality revealed by the computer. And because the computer can reveal 
tremendous visual detail, the human thinks that he or she is getting a far 
better feel for reality. The human stops thinking in approximations, loses the 
‘feel’ for the overall system, and is therefore also ill-equipped to deal with 
crises or errors thrown up by the machine.
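The "predictable range of prior learning" point can be made concrete with a toy model: a straight-line fit calibrated on a narrow slice of a non-linear system looks accurate in range and fails badly when extrapolated outside it. All numbers here are illustrative; this is a caricature of the failure mode, not any real avionics model.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# "Training" context: a narrow, almost-linear slice of a non-linear world.
xs = [i / 100 for i in range(101)]     # x in [0, 1]
ys = [x * x for x in xs]               # the true system is y = x^2
a, b = fit_line(xs, ys)

inside = abs((a * 0.5 + b) - 0.25)     # error within the learned range
outside = abs((a * 5.0 + b) - 25.0)    # error far outside it
assert inside < 0.1 and outside > 10   # small in range, huge out of range
```

The model has no notion of how wrong it is at x = 5; it reports a number with the same confidence as in range, which is exactly the "minor or major error" problem above.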

Re: rage against the machine

2019-03-14 Thread Brian Holmes
On Thu, Mar 14, 2019 at 11:43 AM Morlock Elloi 
wrote:

> It looks like some cretin in Boeing who drank too much of AI Kool-Aid
> (probably a middle manager) decided to install a trained logic circuit
> that was supposed to make the new aircraft behave (to pilots) like the older
> one. As its operation was far too complicated (i.e. even Boeing didn't
> quite understand it), they decided not to inform pilots about it, as it
> could disturb the poor things with too much information.
>
> One part of the unknown operation appears to be the insistence of the ML
> black box on crashing the airplane during ascent. As it had full control
> of the trim surfaces there was nothing the pilots could do (I guess using a
> fire axe to kill the circuit would work, if the pilots knew where the damn
> thing was.)
>

I agree there is unwarranted trust in artificial intelligence. But is that
relevant here? Morlock's post neither identifies what systems are at stake,
nor correctly represents the usage situation. It's just inflammatory
rhetoric.

Any look at the press reveals that two complaints about the Max 8 aircraft
were logged anonymously on a NASA database. Pilots reported having to exit
from an automatic trim system in order to stop a nose dive after takeoff.
They did in fact complain that they had not been properly informed about
the operation of the trim system, which is not halted in the usual way (by
simply pulling on the control yoke). However, they were definitely able to
return to manual control, and they did not report using a fire axe to do
it. Instead there are dedicated cutoff switches.

The automatic function is called the Maneuvering Characteristics
Augmentation System (MCAS). Its sole purpose is to correct for an upward
pitching movement during takeoff, brought on by the decision to gain fuel
efficiency by using larger engines. At stake is a feedback loop triggered
by information from Angle of Attack sensors - nothing that could reasonably
be described as AI. The MCAS is a bad patch on a badly designed plane. In
addition to the failure to inform pilots about its operation, the sensors
themselves appear to have malfunctioned during the Lion Air crash in
Indonesia.
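The feedback loop described above can be sketched schematically: above an Angle of Attack threshold, command nose-down trim, unless the dedicated cutoff switches have been thrown. This is a deliberate caricature of MCAS-like logic with invented threshold and step values, not Boeing's implementation; it only shows why a stuck sensor keeps re-trimming until the cutoff is used.

```python
def trim_command(aoa_deg, cutoff_engaged, threshold_deg=12.0, step=-0.5):
    """Schematic MCAS-like rule (illustrative numbers, not Boeing's):
    if the Angle of Attack reading exceeds a threshold, command a
    nose-down trim increment; the cutoff switches disable the automatic
    path entirely."""
    if cutoff_engaged:
        return 0.0          # pilots have cut out the automatic trim
    if aoa_deg > threshold_deg:
        return step         # nose-down trim increment
    return 0.0

# A faulty sensor stuck at a high AoA keeps commanding nose-down trim
# on every cycle until the cutoff switches are thrown:
assert trim_command(25.0, cutoff_engaged=False) == -0.5
assert trim_command(25.0, cutoff_engaged=True) == 0.0
assert trim_command(5.0, cutoff_engaged=False) == 0.0
```

Note that nothing in this loop is learned or adaptive: it is a fixed threshold rule fed by a sensor, which is Brian's point that the system is a feedback patch rather than AI.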

You can find real information on the situation here:
https://theaircurrent.com/tag/maneuvering-characteristics-augmentation-system

There is much to critique in the operations of Boeing and of the FAA. But
it's not about AI taking full control. Punditry based on mere imaginings is
just hot air.

Brian

Re: rage against the machine

2019-03-14 Thread Olia Lialina
I was rereading today this five-year-old article about a decade-old accident:

https://www.vanityfair.com/news/business/2014/10/air-france-flight-447-crash/amp

Following are parts of section IV, “Flying Robots”, and the article's final statement.

It takes an airplane to bring out the worst in a pilot.
[... ] 
Wiener pointed out that the effect of automation is to reduce the cockpit 
workload when the workload is low and to increase it when the workload is high. 
Nadine Sarter, an industrial engineer at the University of Michigan, and one of 
the pre-eminent researchers in the field, made the same point to me in a 
different way: “Look, as automation level goes up, the help provided goes up, 
workload is lowered, and all the expected benefits are achieved. But then if 
the automation in some way fails, there is a significant price to pay. We need 
to think about whether there is a level where you get considerable benefits 
from the automation but if something goes wrong the pilot can still handle it.”


Sarter has been questioning this for years and recently participated in a major 
F.A.A. study of automation usage, released in the fall of 2013, that came to 
similar conclusions. The problem is that beneath the surface simplicity of 
glass cockpits, and the ease of fly-by-wire control, the designs are in fact 
bewilderingly baroque—all the more so because most functions lie beyond view. 
Pilots can get confused to an extent they never would have in more basic 
airplanes. When I mentioned the inherent complexity to Delmar Fadden, a former 
chief of cockpit technology at Boeing, he emphatically denied that it posed a 
problem, as did the engineers I spoke to at Airbus. Airplane manufacturers 
cannot admit to serious issues with their machines, because of the liability 
involved, but I did not doubt their sincerity. Fadden did say that once 
capabilities are added to an aircraft system, particularly to the 
flight-management computer, because of certification requirements they become 
impossibly expensive to remove. And yes, if neither removed nor used, they lurk 
in the depths unseen. But that was as far as he would go.


Sarter has written extensively about “automation surprises,” often related to 
control modes that the pilot does not fully understand or that the airplane may 
have switched into autonomously, perhaps with an annunciation but without the 
pilot’s awareness. Such surprises certainly added to the confusion aboard Air 
France 447. One of the more common questions asked in cockpits today is “What’s 
it doing now?” Robert’s “We don’t understand anything!” was an extreme version 
of the same. Sarter said, “We now have this systemic problem with complexity, 
and it does not involve just one manufacturer. I could easily list 10 or more 
incidents from either manufacturer where the problem was related to automation 
and confusion. Complexity means you have a large number of subcomponents and 
they interact in sometimes unexpected ways. Pilots don’t know, because they 
haven’t experienced the fringe conditions that are built into the system.”

[... ] 
 At a time when accidents are extremely rare, each one becomes a one-off event, 
unlikely to be repeated in detail. Next time it will be some other airline, 
some other culture, and some other failure—but it will almost certainly involve 
automation and will perplex us when it occurs. Over time the automation will 
expand to handle in-flight failures and emergencies, and as the safety record 
improves, pilots will gradually be squeezed from the cockpit altogether. The 
dynamic has become inevitable. There will still be accidents, but at some point 
we will have only the machines to blame.

Morlock Elloi wrote:

>Handling of the recent B737 Max 8 disaster is somewhat revealing.
>
>What seems to have happened (for the 2nd time) is that the computing machine 
>fought the pilot, and the machine won.
>
>It looks like some cretin in Boeing who drank too much of AI Kool-Aid 
>(probably a middle manager) decided to install a trained logic circuit 
>that was supposed to make the new aircraft behave (to pilots) like the older 
>one. As its operation was far too complicated (i.e. even Boeing didn't 
>quite understand it), they decided not to inform pilots about it, as it 
>could disturb the poor things with too much information.
>
>One part of the unknown operation appears to be the insistence of the ML 
>black box on crashing the airplane during ascent. As it had full control 
>of the trim surfaces there was nothing the pilots could do (I guess using a 
>fire axe to kill the circuit would work, if the pilots knew where the damn 
>thing was.)
>
>That's the best available information right now on the cause.
>
>What is interesting is how this was handled, particularly in the US:
>
>- There were documented complaints about this circuit for a long time;
>- The FAA ignored them;
>- After the second disaster most of the world grounded this type of 
>aircraft;
>- The FAA said there is nothing wrong ...

rage against the machine

2019-03-14 Thread Morlock Elloi

Handling of the recent B737 Max 8 disaster is somewhat revealing.

What seems to have happened (for the 2nd time) is that the computing machine 
fought the pilot, and the machine won.


It looks like some cretin in Boeing who drank too much of AI Kool-Aid 
(probably a middle manager) decided to install a trained logic circuit 
that was supposed to make the new aircraft behave (to pilots) like the older 
one. As its operation was far too complicated (i.e. even Boeing didn't 
quite understand it), they decided not to inform pilots about it, as it 
could disturb the poor things with too much information.


One part of the unknown operation appears to be the insistence of the ML 
black box on crashing the airplane during ascent. As it had full control 
of the trim surfaces there was nothing the pilots could do (I guess using a 
fire axe to kill the circuit would work, if the pilots knew where the damn 
thing was.)


That's the best available information right now on the cause.

What is interesting is how this was handled, particularly in the US:

- There were documented complaints about this circuit for a long time;
- The FAA ignored them;
- After the second disaster most of the world grounded this type of 
aircraft;
- The FAA said there is nothing wrong with it;
- It seems that intervention from the White House made the FAA see the 
light and ground the planes.


Why? What was so special about this bug? The FAA previously had no problem 
grounding planes on less evidence and fewer complaints.


It may have to do with the first critical application of the new deity 
in commercial jets. The deity is called "AI", and its main function is 
to deflect the rage against rulers towards machines (it's the 2nd 
generation of the concept; the first one was simply "computer says ...").


The FAA's hesitation may make sense. After several hundred people have been 
killed, someone will dig into the deity, and eventually the idiot manager 
and his minions will be declared (not publicly, of course) the 
guilty party. This could be a fatal blow to the main purpose of the deity.



(BTW, 'rage' is also a verb)



