On Mon, Jun 15, 2015 at 9:04 AM, YKY (Yan King Yin, 甄景贤)
<[email protected]> wrote:
>
> On Mon, Jun 15, 2015 at 1:00 AM, Matt Mahoney <[email protected]> wrote:
>>
>> On Sat, Jun 13, 2015 at 12:52 AM, YKY (Yan King Yin, 甄景贤)
>> <[email protected]> wrote:
>> > But here comes a problem:  if we have 3 propositions, say
>> >   P1 = yesterday rained
>> >   P2 = Obama is president of US
>> >   P3 = the moon is made of cheese
>> > and if there exists a linear dependence among them, say:
>> >    a3 P3 = a1 P1 + a2 P2
>> > where a1, a2, a3 are scalars, that seems to create a relation between 
>> > apparently unrelated sentences, and would lead to error.
>>
>> That's unlikely to happen in normal semantic spaces with tens of
>> thousands of dimensions.
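A quick numeric sketch of why such a dependence is unlikely at that scale. This uses independent random Gaussian vectors as a stand-in for semantic embeddings (an assumption, not a claim about any particular semantic space): in tens of thousands of dimensions, random vectors are nearly orthogonal, so no small set of them comes close to an exact linear dependence like a3 P3 = a1 P1 + a2 P2.

```python
import random, math

random.seed(0)
DIM = 10_000  # "tens of thousands of dimensions", as in typical semantic spaces

def rand_vec(n):
    # Independent Gaussian components; a stand-in for a sentence embedding.
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

p1, p2, p3 = rand_vec(DIM), rand_vec(DIM), rand_vec(DIM)

# Pairwise cosines concentrate near 0 (magnitude on the order of
# 1/sqrt(DIM) = 0.01), so the three vectors are essentially orthogonal
# and far from any exact linear dependence.
print(cosine(p1, p2), cosine(p1, p3), cosine(p2, p3))
```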
>
> I found out that a "distributed representation" does not come with
> superposition (I don't recall where I got that idea from).
>
> For example, 100 neurons which take only binary (0,1) values can represent 
> at most 2^100 different "states".  This is vastly larger than the number of 
> states in a completely local (one-hot) representation, which would be 100.
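The counting argument above can be checked directly (100 binary units, distributed coding versus local/one-hot coding):

```python
# Capacity of 100 binary units under two coding schemes.
n = 100
distributed_states = 2 ** n  # every 0/1 pattern over the units is a distinct state
local_states = n             # one-hot: exactly one unit active per state

print(distributed_states)  # 1267650600228229401496703205376, about 1.27e30
print(local_states)        # 100
```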

A language model has about 10^9 bits of information. This is the
number of bits of compressed speech and text that you can input in a
lifetime. A neural representation would therefore need about 10^9
synapses to store it. Since a fully connected network of n neurons has
about n^2 synapses, that means at least sqrt(10^9) ≈ 30K fully
connected neurons, or more in a sparsely connected network.
This is enough that each neuron can represent one word or a group of
related words.
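The arithmetic behind the 30K figure, assuming roughly one bit stored per synapse and n^2 synapses for n fully connected neurons (both simplifying assumptions):

```python
import math

bits = 10 ** 9  # ~10^9 bits: rough information content of a lifetime of language input

# A fully connected network of n neurons has about n^2 synapses.
# One bit per synapse implies n^2 >= bits, i.e. n >= sqrt(bits).
n = math.isqrt(bits)
print(n)  # 31622, i.e. roughly 30K neurons
```

With 10^9 bits spread over ~30K neurons, each neuron ends up accounting for a vocabulary-sized share of the input, consistent with one neuron per word or small cluster of related words.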

-- 
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now