Acknowledging the Perfection of our Lord

No change should there be in the creation of Allah [Quran 30:30] 
Mission of the Messengers - XXIX  




Abstract 
To do تَسْبِيحَ (tasbih, glorification) of Allah means to acknowledge, declare, 
and/or celebrate that Allah is absolutely perfect. Allah creates perfectly and 
governs excellently. We humans need to acknowledge and appreciate this fact, 
and consequently submit to The Right Religion (الدِّينُ الْقَيِّمُ, al-din al-qayyim). 


Full Text
https://signsandscience.blogspot.com/2018/10/acknowledging-perfection-of-our-lord.html
  


> On 14-Mar-2023, at 6:48 PM, John Clark <johnkcl...@gmail.com> wrote:
> 
> 
>> On Tue, Mar 14, 2023 at 9:44 AM Samiya Illias <samiyaill...@gmail.com> wrote:
>> 
>> > Aren’t you an emergent property of the same system that you are 
>> > criticising? 
> 
> Yes.
> 
> John K Clark    See what's on my new list at  Extropolis
> 
> 
> 
>  
>> 
>> 
>> 
>>>> On 14-Mar-2023, at 5:49 PM, John Clark <johnkcl...@gmail.com> wrote:
>>>> 
>>> 
>>> On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes <te...@telmomenezes.net> 
>>> wrote:
>>> 
>>>>> > One of the authors of the article says "It’s interesting that the 
>>>>> > computer-science field is converging onto what evolution has 
>>>>> > discovered"; he said that because it turns out that 41% of the fly 
>>>>> > brain's neurons are in recurrent loops that provide feedback to other 
>>>>> > neurons upstream in the data-processing path, and that's just what we 
>>>>> > see in modern AIs like ChatGPT.
>>>> 
>>>> 
>>>> > I do not think this is true. ChatGPT is a fine-tuned Large Language 
>>>> > Model (LLM), and LLMs use a transformer architecture, which is deep but 
>>>> > purely feed-forward and uses attention heads. The attention mechanism 
>>>> > was the big breakthrough back in 2017 that finally enabled the training 
>>>> > of such big models:
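>>> 
>>> A minimal sketch of the purely feed-forward attention pass described above 
>>> (toy sizes, NumPy, every name and number here is illustrative, not actual 
>>> ChatGPT code): a single scaled dot-product attention head is just a few 
>>> matrix multiplies applied to all tokens at once, with no recurrent state.
>>> 
>>>   import numpy as np
>>> 
>>>   def attention_head(X, Wq, Wk, Wv):
>>>       # X: (seq_len, d_model) token embeddings; Wq, Wk, Wv: learned projections
>>>       Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project every token in one go
>>>       scores = Q @ K.T / np.sqrt(K.shape[-1])   # all pairwise token interactions
>>>       w = np.exp(scores - scores.max(axis=-1, keepdims=True))
>>>       w /= w.sum(axis=-1, keepdims=True)        # softmax over positions
>>>       return w @ V                              # weighted mix of value vectors
>>> 
>>>   rng = np.random.default_rng(0)
>>>   X = rng.normal(size=(5, 8))                   # 5 tokens, 8-dim embeddings
>>>   Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
>>>   out = attention_head(X, Wq, Wk, Wv)           # (5, 8) in one feed-forward pass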
>>> 
>>> I was under the impression that transformers are superior to recurrent 
>>> neural networks because recurrent processing of data is not necessary with 
>>> transformers, so far more parallelization is possible than with recurrent 
>>> neural networks; a transformer can analyze an entire sentence at once and 
>>> doesn't need to do so word by word. So transformers learn faster and need 
>>> less training data.
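>>> 
>>> For contrast, a toy recurrent update (again illustrative only): each step 
>>> needs the hidden state from the previous step, so the sentence has to be 
>>> processed word by word instead of all at once.
>>> 
>>>   import numpy as np
>>> 
>>>   rng = np.random.default_rng(0)
>>>   X = rng.normal(size=(5, 8))              # same toy sentence: 5 tokens, 8 dims
>>>   Wx, Wh = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
>>> 
>>>   def rnn_forward(X, Wx, Wh):
>>>       # Sequential by construction: step t cannot start until step t-1 is done,
>>>       # which is what limits parallelism compared with the attention pass above.
>>>       h = np.zeros(Wh.shape[1])
>>>       states = []
>>>       for x_t in X:                        # loop over tokens, one at a time
>>>           h = np.tanh(x_t @ Wx + h @ Wh)   # feedback: h flows into the next step
>>>           states.append(h)
>>>       return np.stack(states)
>>> 
>>>   states = rnn_forward(X, Wx, Wh)          # (5, 8), but computed step by step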
>>> 
>>>> > My intuition is that if we are going to successfully imitate biology we 
>>>> > must model the various neurotransmitters.
>>> 
>>> That is not my intuition. I see nothing sacred in hormones, and I don't see 
>>> the slightest reason why they or any neurotransmitter would be especially 
>>> difficult to simulate through computation, because chemical messengers are 
>>> not a sign of sophisticated design on nature's part; rather, they are an 
>>> example of Evolution's bungling. If you need to inhibit a nearby neuron, 
>>> there are better ways of sending that signal than launching a GABA molecule 
>>> like a message in a bottle thrown into the sea and waiting ages for it to 
>>> diffuse to its random target.
>>> 
>>> I'm not interested in brain chemicals, only in the information they contain. 
>>> If you want information transmitted from one place to another as fast and 
>>> reliably as possible, you don't send smoke signals when you have a fiber 
>>> optic cable. The information content of each molecular message must be tiny, 
>>> just a few bits, because only about 60 neurotransmitters such as 
>>> acetylcholine, norepinephrine and GABA are known; even if the true number 
>>> were 100 times greater (or a million times, for that matter), each signal 
>>> would still carry only a handful of bits. Also, for the long-range stuff, 
>>> exactly which neuron receives the signal cannot be specified because 
>>> delivery relies on a random process, diffusion. The fact that it's as slow 
>>> as molasses in February does not add to its charm.  
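>>> 
>>> The arithmetic behind the "few bits" estimate (a quick worked example; the 
>>> neurotransmitter counts are just the round numbers used above): picking one 
>>> messenger out of N distinguishable ones conveys about log2(N) bits.
>>> 
>>>   import math
>>> 
>>>   for n in (60, 6_000, 60_000_000):   # known count, 100x more, a million times more
>>>       print(n, "messengers ->", round(math.log2(n), 1), "bits per signal")
>>>   # 60 -> ~5.9 bits, 6,000 -> ~12.6 bits, 60,000,000 -> ~25.8 bits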
>>> 
>>> If your job is delivering packages and all the packages are very small, and 
>>> your boss doesn't care who you give them to as long as they're on the 
>>> correct continent, and you have until the next ice age to get the work done, 
>>> then you don't have a very difficult profession. Artificial neurons could be 
>>> made to communicate as inefficiently as natural ones do by releasing 
>>> chemical neurotransmitters if anybody really wanted to, but it would be 
>>> pointless when there are much faster, more reliable, and more specific ways 
>>> of operating.
>>> 
>>> John K Clark    See what's on my new list at  Extropolis
>>> 
>>> 
>> 
> 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/1D969074-D10A-448F-BE9B-77A3FE074A48%40gmail.com.
