Dear James, 

Anonymity may not be important to presenting these ideas in general, but I prefer to stay pseudonymous in this case.



On 22.3.2023 at 4:16 PM, "James Foster via Pharo-users" <pharo-users@lists.pharo.org> wrote:
>
>Are you willing to sign your name to this? Is anonymity important 
>to presenting these ideas?
>
>James Foster
>
>> On Mar 22, 2023, at 5:34 AM, in_pharo_users--- via Pharo-users <pharo-users@lists.pharo.org> wrote:
>> 
>> Offray, and to all others,
>> 
>> you are missing the issue.
>> 
>> The problem we face is not to measure the 'intelligence' of a system, but its ability to act verbally in a way indistinguishable from a human.
>> 
>> This ability is already given, as chatbots are accepted by millions of users, for instance as user interfaces. (measurement = 'true', right?)
>> 
>> ChatGPT has the ability to follow a certain intention, for instance to convince the user to buy a certain product. For this purpose, chat bots are now being equipped with lifelike portrait pictures, speech input and output systems with lifelike voices, and phone numbers they can use to make calls or be called. They are fed with all available data on the user, and we know that ALL information about every single internet user is available and is consolidated as needed. The chat bots are able to use this information to guide their conversational strategy, as the useful aspects of the user's mindset are extracted from his internet activity.
>> 
>> These chat bots are now operated on social network platforms with lifelike names, 'pretending' to be human.
>> 
>> These bots act verbally indistinguishably from humans for most social media users, as the most advanced psychotronic technology to manufacture consent.
>> 
>> The first goal of such propaganda will naturally be to manufacture consent about humans accepting being manipulated by AI chat bots, right?
>> 
>> How can this be achieved?  
>> 
>> Like always in propaganda, the first attempt is to
>> - suppress awareness of the propaganda, then
>> - suppress awareness of the problematic aspects of the propaganda content, then
>> - reframe the propaganda content as acceptable, then as something to wish for, then
>> - achieve collaboration of the propaganda victim with the goals of the propaganda content.
>> 
>> Interestingly, this is exactly the schema that your post follows, Offray.
>> 
>> This often takes the form of domain framing, as we see in our conversation: the problem is shifted to the realm of academics, here informatics / computer science, and thus delegated exclusively to experts. We saw this in the 9/11 aftermath cover-up.
>> 
>> Then, Offray, you established yourself as an expert in color, discussing aspects that had already been introduced by others and including the group's main focus, Smalltalk, thus manufacturing consent and establishing yourself as a reliable 'expert', and in reverse trying to hit at me, whom you have identified as an adversary.
>> 
>> Then you offered a solution in color to the problem at hand with 'traceable AI', and thus tried to open the possibility of collaboration with AI proponents for the once-critical reader.
>> 
>> I do not state, Offray, that you are knowingly an agent promoting the NWO AI program. I think you just 'learned' / have been programmed to be a successful academic software developer, because to be successful in academia, it has been necessary to learn to argue just like that since the downfall of academic science in the tradition of, let's say, Humboldt. So, I grant that you may be a victim of propaganda yourself, instead of being a secret-service-sponsored agent. You took quite some time to formulate your post, though.
>> 
>> You acted to confine the discussion about AI in this vital and important informatics community to technical detail, when it is necessary that academics and community members look beyond the narrow borders of their certifications and shift their thinking to a point of view from which they can see what technology does in the real world.
>> 
>>
>> On 21.3.2023 at 7:21 PM, "Offray Vladimir Luna Cárdenas" <offray.l...@mutabit.com> wrote:
>>> 
>>> I agree with Richard. The Turing test is not a good one to test intelligence, and we have now just over-glorified Eliza chatbots that appear to think and to understand but do none of them. ...
