The main defects of both R and Python are the lack of a static type system and of high-performance compilation. I find that R still follows (is used by) the statistics research community more than Python does. Common Lisp was always better than either.
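A minimal sketch of the typing complaint above (my illustration, not from the thread): Python does have optional type annotations, but they are hints only, with nothing enforced at runtime unless you run an external checker such as mypy.

```python
# Annotations declare intent but are ignored by the interpreter itself.
def add_one(x: int) -> int:
    return x + 1

# The signature promises an int, yet a float sails through unchecked.
print(add_one(2.5))  # prints 3.5, violating the declared signature
```

R behaves similarly: argument types are checked, if at all, only by explicit runtime assertions inside the function body.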
On Jan 8, 2023, at 11:03 AM, Russ Abbo
I learned most everything I know about thermoacoustic heat engines while
trying to read those papers, then I went back to the day job hacking code.
-- rec --
On Sun, Jan 8, 2023 at 6:34 AM David Eric Smith wrote:
> The thermoacoustic one is interesting, and surprises me a bit.
>
> I worked on
As indicated in my original reply, my interest in this project grows from
my relative ignorance of Deep Learning. My career has focussed exclusively
on symbolic computing. I've worked with and taught (a) functional
programming, logic programming, and related issues in advanced Python; (b)
complex s
So there’s a “reply” (or whatever) that I have had an impulse to post for two
weeks now, but had to forbid myself the frivolity of writing.
Also, having seen the recent posts, I think it is already resident in
everything Glen takes for granted as having settled from our years of
conversation on
The thermoacoustic one is interesting, and surprises me a bit.
I worked on these systems a bit in the mid-1990s, during a kind of purgatory in a Navy research lab that mostly did acoustics.
Broadly, there are two limiting cases for a thermoacoustic engine. One uses a
standing wave and is simp
Dispositional belief (by which I mean acting as if you believe) in some thing
requires there be a somewhat coherent thing in which to believe, whether or not
that thing actually exists (e.g. a mathematical limit). That's necessary for
*progression*. Admittedly, there can be a ball of uncertaint
This smacks of Feferman's claim that "implicit in the acceptance of given schemata
is the acceptance of any meaningful substitution instances that one may come to meet, but
which those instances are is not determined by restriction to a specific language fixed
in advance." ... or in the languag
Yes, the money/expertise bar is still pretty high. But TANSTAAFL still applies. And the overwhelming evidence is
coming in that specific models do better than those trained up on diverse data sets, "better" meaning
less prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI *faci
I have finished a number of Coursera courses recently, including "Deep Learning
& Neural Networks with Keras," which was ok but not great. The problems with
deep learning are:
* to achieve impressive results like chatGPT from OpenAI or LaMDA from Google, you need to spend millions on hardware
* only