You are right. My racewalking buddy and college classmate, a Doctor
Professor (retired) on the Yale Medical School faculty,
is engaged in Big Data work on reading tissue data to determine whether
it is cancerous. Right now that is done entirely by visual inspection by
doctors using their personal judgement. Doctors' pay will be reduced if
the project succeeds.
Richard

On Fri, Nov 21, 2014 at 8:06 AM, <[email protected]> wrote:

>
>
> On Friday, November 21, 2014 12:39:14 PM UTC, [email protected] wrote:
>>
>>
>>
>> On Sunday, November 16, 2014 10:56:37 AM UTC, Bruno Marchal wrote:
>>>
>>>
>>> On 16 Nov 2014, at 08:45, LizR wrote:
>>>
>>> On 16 November 2014 07:42, John Clark <[email protected]> wrote:
>>>
>>>> On Sat, Nov 15, 2014 at 12:39 PM, <[email protected]> wrote:
>>>>
>>>> > The idea that computers are people has a long and storied history.
>>>>>
>>>>
>>>> I would maintain that from a long-term operational viewpoint it doesn't
>>>> matter whether the humans on the Supreme Court consider computers to be
>>>> people or not; the important thing is whether computers consider humans
>>>> to be people or not.
>>>>
>>>> Making certain probably reasonable assumptions, that is quite likely.
>>>
>>>
>>>
>>> Only if we remember that money is a tool, and not a goal. If money is
>>> the goal, machines will correctly conclude that humans are not affordable:
>>> they need O2, plants, a very rich and complex environment, etc. But with
>>> some luck we will be digital before then, and become more affordable from
>>> the machines' point of view.
>>>
>>> To say that corporations are persons is, imo, a rather big error. Only
>>> machines having the Löbian ability can be considered persons, and
>>> corporations are not.
>>>
>>>
>> What he said that was newest to me was this: the Supreme Court may decide
>> whether corporations are individuals or not, but algorithms increasingly
>> define corporations, and over what those programs do, the corporations
>> have no say at all.
>>
>> The damaging mythology was the way a small cadre of
>> technologist-computationalist-futurists self-reinforce themselves into an
>> unchallenged position of defining the vision for A.I. in wholly positive
>> and historically inevitable terms. A.I. is coming, it's here now, it's
>> going to change everything, it'll be better; it'll even be the better
>> version of us.
>>
>> This provokes the same structure of delayed response, one that ultimately
>> becomes dominated by the merchants of doom who think this is going to end
>> badly, whether from our A.I. or alien A.I. Which in turn reinforces the
>> next version of the same positive message the cadre emitted before. It
>> becomes invariant.
>>
>> Which would be fine, but neither scenario is anything like a reflection
>> of what is taking place on the ground. A.I. is no closer than it was 20
>> or 30 or 40 years ago. What is new and big is Big Data. But Big Data
>> involves neither theories of A.I. nor A.I. efforts; it's about taking
>> very large sets of paired data and converging, by some basic rule, to a
>> single result. This is how translation services work: very large sets of
>> translations of sentences and sentence components, simply rehashed for
>> the best fit to the text being translated.
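The "rehashed for best fit" matching described above can be sketched minimally: no linguistic theory, just a corpus of paired sentences and a similarity rule. The function names, the word-overlap rule, and the tiny corpus below are all illustrative assumptions, not any real service's method.

```python
def similarity(a, b):
    """Word-overlap (Jaccard) similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def translate(sentence, corpus):
    """Return the stored translation of the closest source sentence."""
    best_source = max(corpus, key=lambda src: similarity(sentence, src))
    return corpus[best_source]

# A hypothetical corpus scraped from human translators' work.
corpus = {
    "the cat sat on the mat": "le chat était assis sur le tapis",
    "the dog ran in the park": "le chien a couru dans le parc",
    "it is raining today": "il pleut aujourd'hui",
}

print(translate("the cat sat on the chair", corpus))
# closest source is "the cat sat on the mat", so its stored
# translation is returned
```

Note that the system never generates a translation of its own; it can only echo what human translators already produced, which is why the daily scrape discussed below matters.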
>>
>
> It actually works adequately for most translation needs. Which would be
> great, except for this:
>
> -- The Big Data system is not independent at any point. Every day there
> needs to be a huge scrape of the translations performed by human
> translators.
>
> -- The human translation professions are in a state of freefall. There
> used to be a career structure with rising income, security and status. Now
> there isn't. There isn't even the diary of scheduled upcoming translation
> contracts, requirements and research-project timelines that there used to
> be. Now it's much more a 'realtime' industry. You must be available and up
> to date. You must be available first for a job if one comes up. It might.
> Or it might not, today. It's back to hand-to-mouth for them.
>
> -- Which would be a case of "so... wheels of change... relocate, retrain,
> already". Except that the Big Data that has brought this about - the
> algorithm defining the corporation - cannot operate unless those
> translators stay in post. The Big Data system takes from them every day,
> but does not ask or receive permission, and does not pay them, and by
> another draught under the floorboards sucks away the specialism of their
> coming years... and their dreams and life-plan.
>
> The other salient insight he mentioned was that Big Data, such as it is,
> is most easily established in those transactions that naturally involve a
> degree of manipulation. Seduction, misdirection... like dating sites. Or
> personal activity in the real and cyber/financial landscape of servicing
> consumption. The shopping trail. Browsers, footprints.
>
> Because manipulating behaviour in complex ways is something Big Data is
> well positioned to do. It can learn... purely from statistical modelling
> and the daily scrape. A.I. you can forget about until there's a little new
> progress. But corporate algorithms that synthetically mirror intelligent
> behaviour, specifically around convergences relating to human
> malleability, are a serious issue that is not understood and needs to be,
> now. Because at present, what those algorithms are given to do is suck the
> life out of structured professions with reward pathways for hard work,
> gift and skill.
>
> And what they are ABLE to do, have a vested, real and present critical
> interest in having done, and so are reasonably already in the process of
> 'seeing done', either directly or through indirect manipulation, is the
> reduction of professions like translation to hand-to-mouth, zero-hour,
> commoditized service industries. In these, the translators themselves face
> obstructions and the psychological 'scratch-card' lottery hope that
> tomorrow will be better, which the algorithms deliberately use to control
> their decisions and their status, keeping them in place, doing the free
> translations the Big Data algorithms still need every day.
>
> That's a serious matter... even free-market libertarians, when they learn
> of this, will recognise they need to turn it over and think.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/everything-list.
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
