Hi,
sorry, I was trapped in a variadic-template-hole. I managed to escape
but not without bringing back a vicious gift. Now everything looks like
a variadic template to me.
On 06/14/2017 04:32 PM, Shujing Ke wrote:
3) GroundedSchemaNode and TypeNode are not considered to become
VariableNodes.
Shujing, just please don't set the confidences inside the pattern to 1.0
like in
(ExecutionOutputLink (stv 1.00 1.00)
(GroundedSchemaNode "scm: bc-deduction-formula" (stv 1.00 1.00))
(ListLink (stv 1.00 1.00)
(InheritanceLink (stv 1.00 1.00)
(PatternV
Interesting! I will read it through tomorrow, the rest of my today
seems eaten by other stuff...
I am not surprised that PCA stinks as a classifier...
Regarding "hidden multivariate logistic regression", as you hint at
the end of your document ... it seems you are gradually inching toward
my suggestion of using neural nets here...
Actually patterns involving scopes require quote links. Let me consider
the following pattern (the simplest of that sort I could find):
;Pattern: Frequency = 6
(ExecutionOutputLink (stv 1.00 1.00)
(GroundedSchemaNode "scm: conditional-full-instantiation-formula"
(stv 1.00 1.00
On Mon, Jun 19, 2017 at 3:31 AM, Ben Goertzel wrote:
>
> Regarding "hidden multivariate logistic regression", as you hint at
> the end of your document ... it seems you are gradually inching toward
> my suggestion of using neural nets here...
>
Maybe. I want to understand the data first, before
Shujing, in
/opencog/learning/PatternMiner/types/atom_types.script
you've defined
PATTERN_LINK <- UNORDERED_LINK
but such a link type already exists in
/opencog/atoms/base/atom_types.script
Nil
On 06/19/2017 12:01 PM, Nil Geisweiller wrote:
Actually patterns involving scopes require quote links.
Thanks. The point of this is that we're not using n-grams for anything.
We're using sheaves. So any algo that has "gram" in its name is
immediately disqualified. The bet is that doing grammar correctly, using
sheaves, will get you much much better results than using n-grams. And
that's the poi
The Python version of Adagram seems incomplete and untested, so I
think I'd rather deal with the Julia implementation at this point.
Julia is not that complicated, and I don't love Python anyway...
Regarding the deficiencies of n-grams, I agree with Linas. However,
my suggestion is to mod
Hi Linas,
I have read the report now...
Looking at the cosine similarity results, it seems clear the corpus
you're using is way too small for the purpose (there's no good reason
"He" and "There" should have such high cosine similarity..., cf table
on page 6)
Also, cosine similarity is known to b
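For concreteness, the cosine-similarity point can be sketched in a few lines of Python. The count vectors below are made up, not taken from the report: on a tiny corpus, two unrelated words can share most of their observed contexts and come out looking nearly identical.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: dot product normalized by the vector lengths.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical co-occurrence counts over four contexts.  "he" and
# "there" are unrelated, but with so few contexts observed their
# count vectors mostly overlap.
he = np.array([3.0, 1.0, 0.0, 2.0])
there = np.array([2.0, 1.0, 0.0, 3.0])

print(cosine(he, there))  # comes out around 0.93 despite no real relation
```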
Hi Ben,
On Mon, Jun 19, 2017 at 9:01 AM, Ben Goertzel wrote:
> Hi Linas,
>
> I have read the report now...
>
> Looking at the cosine similarity results, it seems clear the corpus
> you're using is way too small for the purpose (there's no good reason
> "He" and "There" should have such high cosi
On Tue, Jun 20, 2017 at 12:07 AM, Linas Vepstas wrote:
> So again, this is not where the action is. What we need is accurate,
> high-performance, non-ad-hoc clustering. I guess I'm ready to accept
> agglomerative clustering, if there's nothing else that's simpler, better.
We don't need just cl
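For reference, a bare-bones sketch of what agglomerative clustering does: single linkage, naive O(n^3) loop, with toy 2-D vectors standing in for word vectors.

```python
import numpy as np

def agglomerate(points, n_clusters):
    # Naive single-linkage agglomerative clustering: start with
    # singleton clusters and repeatedly merge the closest pair.
    clusters = [[i] for i in range(len(points))]
    dist = lambda a, b: float(np.linalg.norm(points[a] - points[b]))
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Hypothetical 2-D word vectors forming two obvious groups.
vecs = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
print(sorted(sorted(c) for c in agglomerate(vecs, 2)))
```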
(Nil, please look at the end of this email, I have a suggestion for
you there...)
On Wed, Jun 14, 2017 at 9:32 PM, Shujing Ke wrote:
> 3. The interestingness evaluation is different from previous applications
> Our interestingness evaluation is based on a surprisingness measure, which
> includes Sur
OK, well, some quick comments:
-- sparsity is a good thing, not a bad thing. It's one of the big
indicators that we're on the right track: instead of seeing that everything
is like everything else, we're seeing that only one out of every 2^15 or
2^16 possibilities is actually being observed! So
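A sparsity ratio of that order is easy to reproduce with a toy count. The vocabulary size and event count below are made up for illustration: with a 10,000-word vocabulary and a few thousand observed word pairs, almost none of the vocab x vocab cells are ever seen.

```python
import math
import random

random.seed(0)
vocab = 10_000     # hypothetical vocabulary size
n_events = 3_000   # hypothetical number of observed word pairs

# Record which (word, word) cells ever receive a count; the vast
# majority of the vocab * vocab possibilities are never observed.
pairs = {(random.randrange(vocab), random.randrange(vocab))
         for _ in range(n_events)}

ratio = vocab * vocab / len(pairs)
print(f"roughly 1 out of every 2^{round(math.log2(ratio))} cells observed")
```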
Ben,
On 06/19/2017 07:49 PM, Ben Goertzel wrote:
In the PLN case, if we take an example possible pattern like "two
deductions in a row, involving associated entities, are often useful"
that would look like
A==>B, B==>C |- A==>C
A==>C, C==>D |- A==>D
HebbianLink (D,B)
useful(A==>D)
So the
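For what it's worth, the deduction chain above can be run numerically with the usual independence-based PLN deduction strength formula. This is a sketch only: the strengths are made up, confidences are ignored, and it may differ in detail from the actual bc-deduction-formula.

```python
def deduction_strength(sAB, sBC, sB, sC):
    # Independence-based PLN deduction: strength of A==>C given
    # A==>B and B==>C plus the term probabilities of B and C.
    # Sketch only -- confidences are ignored entirely.
    if sB >= 1.0:
        return sC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# The chain from the example, with made-up strengths:
# A==>B, B==>C |- A==>C, then A==>C, C==>D |- A==>D.
sAC = deduction_strength(sAB=0.8, sBC=0.9, sB=0.5, sC=0.6)
sAD = deduction_strength(sAB=sAC, sBC=0.7, sB=0.6, sC=0.5)
print(round(sAC, 3), round(sAD, 3))
```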
On 06/19/2017 09:29 PM, Nil Geisweiller wrote:
ImplicationScopeLink
V
Y
useful(X)
where V, X and Y are meta-pattern-matcher variables as they represent
patterns that the pattern miner should come up with (of course all this
should be properly quoted), which looks very much like a Cogn
Again, there's a misunderstanding here. Yes, PCA is not composable, sheaves
are. I'm using sheaves. The reason that I looked at PCA was to use a
thresholded, sparse PCA for CLUSTERING, and NOT for similarity, where
compositionality does not matter. It's really a completely different
concept, quite totall
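A sketch of what "thresholded PCA for clustering" can mean in the simplest case: project onto the first principal component and split on the sign of the projection. The data here is a toy two-blob example, not anything from the thread.

```python
import numpy as np

# Hypothetical data: two tight groups of points in 2-D.
rng = np.random.default_rng(1)
a = rng.normal([3.0, 0.0], 0.1, size=(5, 2))
b = rng.normal([0.0, 3.0], 0.1, size=(5, 2))
X = np.vstack([a, b])

# PCA via SVD on the mean-centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[0]          # projection onto the first principal component

# Thresholding the projection at zero gives a crude 2-way clustering.
labels = (scores > 0).astype(int)
print(labels)
```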
On Mon, Jun 19, 2017 at 2:11 PM, Hugo Latapie (hlatapie) wrote:
> *arXiv:1702.00764*
I've just barely started reading that, and from the very beginning, it's
eminently clear how even the latest, leading research on deep neural nets
is profoundly ignorant of grammar and semantics. Which I think
On Mon, Jun 19, 2017 at 3:26 PM, Hugo Latapie (hlatapie) wrote:
> Thanks, Linas. The approach here does look extremely promising.
>
>
>
> Bridging the gap between these various camps is the holy grail that few
> are even searching for much less attempting to implement.
>
Thanks. But it's not that
Hi Enzo,
On Mon, Jun 19, 2017 at 3:49 PM, Enzo Fenoglio (efenogli) <
efeno...@cisco.com> wrote:
>
>
> A “sigmoid-thresholded eigenvector classifier” is just a single layer
> autoencoder with sigmoid activation. That’s equivalent to performing PCA as
> you did. But if you had used a stacked auto
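The equivalence Enzo mentions is easy to check in the linear case: a single-hidden-unit autoencoder with tied weights, trained by plain gradient descent, ends up aligned with the first principal component. The data, learning rate, and iteration count below are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D data with one dominant direction of variance.
z = rng.normal(size=(200, 1))
X = z @ np.array([[2.0, 1.0]]) + rng.normal(scale=0.05, size=(200, 2))
X -= X.mean(axis=0)

# Single-hidden-unit autoencoder with tied weights W: reconstruction
# is X W W^T, trained by gradient descent on squared error.  (Linear
# case only, as a sketch of the single-layer-autoencoder claim.)
W = rng.normal(scale=0.1, size=(2, 1))
lr, n = 0.01, len(X)
for _ in range(500):
    E = X - X @ W @ W.T                              # reconstruction error
    W -= lr * (-(2.0 / n) * (X.T @ E @ W + E.T @ X @ W))

# The learned direction lines up with the first principal component.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
cos = abs(float(W[:, 0] @ Vt[0])) / float(np.linalg.norm(W))
print(f"alignment with first PC: {cos:.3f}")
```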
On Tue, Jun 20, 2017 at 2:29 AM, Nil Geisweiller
wrote:
> What do you mean exactly by "useful(A==>D)"?
What I was thinking was: If the implication [666], e.g.
ImplicationLink [handle=666]
EvaluationLink
PredicateNode "eat"
ListLink
ConceptNode "Ben"
On Tue, Jun 20, 2017 at 5:59 AM, Linas Vepstas wrote:
>> , and see how your grammar+semantic approach will be effective (adding
>> somehow a non-linear embedding in the phase space as I already discussed
>> with Ben)
>
>
> Ben has not yet relayed this to me.
>
> -- Linas
Yeah, it seemed you were