Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
As a reference for w2v/GloVe: https://www.kdnuggets.com/2018/04/implementing-deep-learning-methods-feature-engineering-text-data-glove.html

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
40,000 x 40,000 = 1,600,000,000; 40,000 x 500 = 20,000,000 -- 80x more storage for the full matrix. With a 40,000-word vocab, 16 bits per word if we Huffman-code them. Of course, we can limit what we link: we link words ONLY IF they pass a similarity threshold, so "cat" and "hose" wouldn't link. Of course in context, like at a store
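The arithmetic above can be sketched in a few lines; the threshold value and the toy vectors below are illustrative assumptions, not from the original post.

```python
import math

# Storage comparison: a full word-by-word link matrix vs. a
# w2v/GloVe-style embedding table.
vocab = 40_000
dims = 500

full_matrix = vocab * vocab   # 1,600,000,000 entries if every word links to every word
embeddings = vocab * dims     # 20,000,000 entries for the embedding table
print(full_matrix // embeddings)  # -> 80, i.e. 80x more storage for the full matrix

# ~16 bits per word id for a 40,000-word vocab (a Huffman code approaches log2)
bits_per_word = math.ceil(math.log2(vocab))
print(bits_per_word)  # -> 16

def link(v1, v2, threshold=0.5):
    """Link two word vectors only if their cosine similarity passes the
    threshold, so unrelated words like 'cat' and 'hose' never get an edge."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm >= threshold
```

With a threshold like this, the stored links are only the similar pairs rather than all vocab-squared of them.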

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
To find out how many items the design at right would have for 1,000 words, simply do 1,000 x 1,000 = 1,000,000. You can see there are 25 items for 5 words, and there would be 36 items for 6 words, and 16 items for 4 words. If w2v had 1,000 words and 500 dimensions, it'd be 1,000 words x 500 axis items,
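The counts above follow a simple pattern: a full pairwise table grows as n*n, while a w2v-style table grows as n*d (d = number of dimensions). A minimal sketch:

```python
# Item counts for the two designs discussed above.
def pairwise_items(n):
    """Full pairwise design: every word gets a cell against every word."""
    return n * n

def embedding_items(n, d=500):
    """w2v/GloVe-style design: every word gets d axis items."""
    return n * d

print(pairwise_items(5))      # -> 25
print(pairwise_items(6))      # -> 36
print(pairwise_items(4))      # -> 16
print(pairwise_items(1_000))  # -> 1000000
print(embedding_items(1_000)) # -> 500000
```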

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
https://ibb.co/CVTW3qL -- Artificial General Intelligence List: AGI Permalink: https://agi.topicbox.com/groups/agi/Ta664aad057469d5c-Mac9a21c8296e38542325e535 Delivery options: https://agi.topicbox.com/groups/agi/subscription

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
Oh and that heterarchy can be displayed as a viz too.

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
Do you mean, instead of feeding the net data and learning from it, requesting new output data/solutions? Doing so is how we train ourselves, mentally. We can step forward, chaining. We store the output data AND update the model. If you have enough knowledge, you can stop eating data and start

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread Rob Freeman
On Mon, Oct 28, 2019 at 1:48 AM wrote: > No, I meant Word2Vec / GloVe. They use e.g. a 500-dimensional space to > relate words to each other. If we look at just 3 dimensions with 10 dots > (words), we can visualize how a word is in 3 superpositions entangled with > other dots. > Pity. I thought

Re: [agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread John Rose
With mainstream AI it's a love-hate relationship. From nothing, you can spin up colossal compute resources in their clouds in minutes, then shut it all down and pay a small fee. It's all scriptable. "How I Learned to Stop Worrying and Love the Cloud"…

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread John Rose
If you model the unknown, it is not exact except for what you know. So you have to model-check what you know, or model-check your prediction... if that makes sense... who knows... checking...

Re: [agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread Stefan Reich via AGI
I think the problem with the whole neural network approach is that it doesn't teach us anything. By using infinite computing resources, certain problems can be solved (although NNs are never even error-free!), but what next? It still seems so fake. On Sun, 27 Oct 2019 at 20:51, wrote: > "Many

Re: [agi] Consistently Lower Error On Test Data Than Train Data?

2019-10-27 Thread James Bowery
I'm using Keras (with TensorFlow). It turns out this is an artifact of the way the Keras library does model validation. From the FAQ: "A Keras model has two modes: training and testing."
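A toy illustration of the FAQ's point (plain Python, not the actual Keras internals): regularizers such as dropout are active in training mode but disabled in testing mode, so the reported training loss can sit above the test loss even on comparable data.

```python
import random

def forward(x, weight, dropout_rate=0.0):
    """One 'layer' with inverted dropout: in training mode, randomly zero
    activations and rescale the survivors so expectations match. In test
    mode (dropout_rate=0.0) the output is deterministic and noise-free."""
    keep = 1.0 - dropout_rate
    if dropout_rate > 0.0:
        return [xi * weight / keep if random.random() < keep else 0.0 for xi in x]
    return [xi * weight for xi in x]
```

Keras also averages the training loss over the batches seen *during* the epoch (while weights are still improving), whereas evaluation runs after the epoch ends, which further tilts the comparison.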

[agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread Stefan Reich via AGI
OK, well, I don't completely hate them. I just think the current distribution of wealth and influence sucks. On Sun, 27 Oct 2019 at 15:51, Stefan Reich < stefan.reich.maker.of@googlemail.com> wrote: > Let's use petabytes of data! Let's show off how much $$$ we have by buying > incredible

Re: [agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread Stefan Reich via AGI
I like this post. On Sun, 27 Oct 2019 at 19:12, wrote: > Luckily we have this list where we get together and share knowledge. One > day, possibly soon, the knowledge we share will be highly readable and will > make sense to most members. Only then will everyone get closer thinking and > feel

[agi] Re: Consistently Lower Error On Test Data Than Train Data?

2019-10-27 Thread immortal . discoveries
Smaller test set = smaller errors?

[agi] Consistently Lower Error On Test Data Than Train Data?

2019-10-27 Thread James Bowery
I'm seeing a rather strange phenomenon in training an LSTM on a time series.  I'm training it on early data and testing on later data.  After, say, 100 epochs the test data produces lower error than the train data.  This could just be a coincidence, but since the test data is about 25% of a total

[agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread immortal . discoveries
Do you need a mentor to explain 2+2=4? No, an ultra-small PDF will do. It'll work for most humans.

[agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread immortal . discoveries
Luckily we have this list where we get together and share knowledge. One day, possibly soon, the knowledge we share will be highly readable and will make sense to most members. Only then will everyone get closer thinking and feel more like a team. Right now a lot of knowledge is all over the

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread rouncer81
what about never ending procrastination!!!  that gets the company forward!!

[agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread rouncer81
1 man army.

[agi] Re: I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread immortal . discoveries
Big Data, Big Compute, Big Money, Big Company, Big Error, Big Confusion. If you utilized small data to its fullest potential, it'd be more like: Small Data, Small Compute, Small Money, Small Company, Small Error, Small Confusion. We still need the grand neural schema though. One that fits the Glove over

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread rouncer81
that's really good, imagine if it was phonemes in sonic space, and that's audio recognition.

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread rouncer81
yeh, that's good. Every dimension of the cube is another context; get 3 axes there and you've got 3 times the memory. But the answer to me is that never-ending text document only being 1 byte or whatever. That looks better to me.

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread immortal . discoveries
No, I meant Word2Vec / GloVe. They use e.g. a 500-dimensional space to relate words to each other. If we look at just 3 dimensions with 10 dots (words), we can visualize how a word is in 3 superpositions entangled with other dots.
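The 3-dimensions-with-10-dots picture can be sketched directly; the vectors below are random and purely for illustration (real word2vec/GloVe vectors are learned, in hundreds of dimensions).

```python
import math
import random

# Ten "words" as dots in a 3-dimensional space; each word relates to
# every other word along all 3 axes at once, via cosine similarity.
random.seed(42)
words = {f"word{i}": [random.uniform(-1, 1) for _ in range(3)] for i in range(10)}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# similarity of every unordered pair of distinct words: C(10, 2) = 45 pairs
sims = {(a, b): cosine(words[a], words[b]) for a in words for b in words if a < b}
closest = max(sims, key=sims.get)  # the most "entangled" pair of dots
```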

[agi] I completely hate today's mainstream AI (Google etc.)

2019-10-27 Thread Stefan Reich via AGI
Let's use petabytes of data! Let's show off how much $$$ we have by buying incredible hardware! Let's just make a neural network that solves certain tasks to a degree of, like, 70%... and let's have no idea how it actually does it. It's really time for something new.

Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-27 Thread John Rose
Overrides to prevent madness; I would say using logic as an out-of-band reasoner to limit it... but then the logic could distort... argh...

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread John Rose
All models are approximations, and there is compute and time distance from the modeled. And then the model checker is important; I pursue dynamic models and dynamic checking: https://en.wikipedia.org/wiki/Model_checking

Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-27 Thread rouncer81
haha, remember that cheesy old Google story about the robot developing madness. What goes into a machine is what comes out.

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread rouncer81
I think you have to sample to do AI, because what's the point if you've got a model but you aren't sampling from it? A physics engine isn't any good until you put enough through it. That's the computational expense.

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread rouncer81
yes, just think how sparse the letters are from each other, and there's only 26; nice and approximate, learns instantaneously.

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread John Rose
You need to have accurate ways, temporally, to disregard accuracy. Multi-modelling based on computational resource availability, choosing between quick and accurate.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread John Rose
Magic is not absolute, it's local.

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread rouncer81
Keep a few gems, but the rest are security rubbish; inventors getting owny about what they do delivers very poor explanations.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
absolute magic indeed.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread John Rose
Intelligence is percepted, and perception is intelligenced.

Re: [agi] Re: putting models in your robots head

2019-10-27 Thread Stefan Reich via AGI
Papers cause my brain instant pain. So long and boring. On Sun, 27 Oct 2019 at 14:22, Brett N Martensen wrote: > You should read this - Building Machines That Learn and Think Like > People by Lake, B. M. et al > http://web.stanford.edu/class/psych209/Readings/LakeEtAlBBS.pdf > pages 16-19

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
to know what someone does, you need to repeat his brain, then look in all possible futures. And then just be disappointed because it was just an idiot's head.

[agi] Re: putting models in your robots head

2019-10-27 Thread Brett N Martensen
You should read this: Building Machines That Learn and Think Like People, by Lake, B. M. et al., http://web.stanford.edu/class/psych209/Readings/LakeEtAlBBS.pdf, pages 16-19, Section 4.1.1 Intuitive physics.

Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-27 Thread John Rose
Yes, but there is that adolescent rebellion to deal with. It could do the opposite, so reverse psychology might be needed.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread John Rose
Yes, but the predictors are getting more and more accurate. All the body language, micro-expressions, electrochemical and electromagnetic emissions, historical big data: it will be nearly impossible to deceive… and then future people will be modeled and predicted. Your predicted future can be

Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-27 Thread rouncer81
Make sure your robot exhibits its maker's philosophy!

Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-27 Thread Stefan Reich via AGI
This world is incomplete and faulty. But there is more On Sun, 27 Oct 2019 at 13:40, John Rose wrote: > We are all subservient to buggy code. > > Advice to newborns: Accept your predetermined role as a dispensable beta > tester of this computational world. Imperfection is why you have arrived

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
To predict the future of something, you be it, but it's hard to tell from looking at someone what's in his head.

Re: [agi] Re: DEAR ASK A ROBOT: Who is God?

2019-10-27 Thread John Rose
We are all subservient to buggy code. Advice to newborns:  Accept your predetermined role as a dispensable beta tester of this computational world.  Imperfection is why you have arrived here and why you will leave someday.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread John Rose
We, humankind, create our perception of the universe. What it really is or looks like is undetermined. We can make it whatever we want. The knowledge structure of science is perpetually incomplete and, looking backwards in time, often wrong but practical contemporarily. Why is that? Small

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
it's redundant to how much of it you view at once. Keep probing me, and you'll see how much this idiot has thought about it already, hehe.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread Stefan Reich via AGI
So... a bigger universe you run the simulator in? On Sun, 27 Oct 2019 at 13:10, wrote: > yeh thats why you need *exponential* qbits. :)

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
yeh thats why you need *exponential* qbits. :)

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread Stefan Reich via AGI
Wait. The universe has quantum mechanics, so it will eat up the exponential computing power you get by using a quantum computer. On Sun, 27 Oct 2019 at 12:05, wrote: > To simulate the universe just takes a quantum computer with exponential > qbits!

[agi] putting models in your robots head

2019-10-27 Thread rouncer81
If you have the model of something, it's as good as knowledge. Implicit* models are easier to think about, and take lots of sampling.  For example, with a physics engine and an accurate geometry of its surroundings, you can get the truths of the environment; sans geometry you can't model (like gas

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
To simulate the universe just takes a quantum computer with exponential qbits!

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread Stefan Reich via AGI
LOL On Sun, 27 Oct 2019 at 11:30, wrote: > 0 is infinite nothing.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
0 is infinite nothing.

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread Stefan Reich via AGI
> The universe has a beginning because you can't CREATE something infinite. Infinity might actually be the origin of the universe. On Sat, 26 Oct 2019 at 19:32, wrote: > "Natural language is shared symbols with commonly agreed upon > approximations transmitted inter-agently for re-rendering

Re: [agi] COMPUTE THIS!!!

2019-10-27 Thread rouncer81
Yes, the big bang is when god's random access memory is all derezzed to 0. :)

Re: [agi] Re: Google - quantum computers are getting close

2019-10-27 Thread rouncer81
You've put a little electron in a cube!! That could be a good start, but you need the rest of the theory. If you end up with a quantum computer, the AI is really quick and crafty! Are you sure you're ready for how scary it's going to be having a "quantum" robot? :)