Hi Roman,
if the text has more than 2 or 3 constructions of the form
"is a quantity of/is made of chalk from source"
then it is easier to "fix this by hand" not in the final output, but in the
middle of the script: look for all patterns that start with "is" and end
in "of", and peel off everything in between.
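The "starts with is, ends in of" heuristic could be sketched with a non-greedy regex; this is a minimal illustration, not code from the thread, and the variable names are my own:

```python
import re

definition = "x1 is a quantity of/is made of chalk from source x2 in form x3."

# Non-greedy .*? stops at the first "of", so each "is ... of"
# phrase is peeled off separately instead of one giant match.
phrases = re.findall(r"\bis\b.*?\bof\b", definition)
print(phrases)  # -> ['is a quantity of', 'is made of']
```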
Linas, if I understand your code correctly, it would do this:
x1 is a quantity of/is made of chalk from source x2 in form x3.
=> split on x
["x1", "is a quantity of/is made of chalk from source", "x2", "in form", "x3"]
=> split on / and distribute
["x1", "is a quantity of", "x2", "in form", "x3"]
["x1", "is made of chalk from source", "x2", "in form", "x3"]
This is still very easy: modify the script to split on the x's, instead of
splitting on whitespace, and only then split on the slashes.
I assume the x's really are the letter x. If the x's are just some strings
of random words, then the problem is not really solvable deterministically
by any algorithm.
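The pipeline suggested here (split on the x's first, then split each remaining chunk on the slashes and distribute) might look roughly like this in Python; a sketch under the assumption that the placeholders are literally x1, x2, ..., with the function name and regex my own:

```python
import re
from itertools import product

def expand(definition):
    """Split on x-placeholders first, then expand '/' alternatives in
    each remaining chunk and distribute over all combinations."""
    # Keep the placeholders: re.split with a capture group returns them too.
    parts = re.split(r"(x\d+)", definition)
    # A placeholder is a single option; any other chunk splits on '/'.
    options = [[p] if re.fullmatch(r"x\d+", p) else p.split("/")
               for p in parts]
    # One sentence per combination; collapse the ragged whitespace.
    return [" ".join(" ".join(combo).split()) for combo in product(*options)]

for s in expand("x1 is a quantity of/is made of chalk from source x2 in form x3."):
    print(s)
# -> x1 is a quantity of x2 in form x3 .
#    x1 is made of chalk from source x2 in form x3 .
```

Note that the naive slash split drops the shared tail ("chalk from source") from the first alternative, which is exactly the ambiguity the rest of the thread worries about.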
Hmm... yeah for cases like
On Fri, Nov 18, 2016 at 5:21 PM, Roman Treutlein wrote:
> x1 is a quantity of/is made of chalk from source x2 in form x3.
you either need to get pretty fancy or fix them by hand...
I mean, in this case "is a quantity of" happens to start with the same
word as "is made of".
Maybe I should have given more examples, because while your script might
work for this example, it won't work for this:
*x1 comes/goes to destination x2 from origin x3 via route x4 using
means/vehicle x5.*
This would at least have the advantage of the alternatives consisting of
only one word, but ...
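For this sentence the alternatives are single words, so the opposite order (split on whitespace first, then on the slashes) would work; a hypothetical sketch, with the function name my own:

```python
from itertools import product

def expand_words(sentence):
    """Split on whitespace first, then expand '/' inside each token.
    Only correct when every alternative is a single word."""
    options = [token.split("/") for token in sentence.split()]
    return [" ".join(combo) for combo in product(*options)]

for s in expand_words("x1 comes/goes to destination x2 from origin x3 "
                      "via route x4 using means/vehicle x5."):
    print(s)
```

This handles "comes/goes" and "means/vehicle", but it would mangle multi-word alternatives like "is a quantity of/is made of", where the slash sits inside the single token "of/is"; neither splitting order covers both examples without fancier logic.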
> From a discussion I had with Roman, I thought he wanted a lexical function.
> But, I might have misunderstood.
--
You received this message because you are subscribed to the Google Groups
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to openco
On Thu, Nov 17, 2016 at 4:55 PM, Ben Goertzel wrote:
>
>
> > x1 utters verbally/says/phonates/speaks x2.
>
> for each Lojban word. So he is simply facing the small programming
> task of translating these definitions into sets of English sentences,
> i.e. in the above example
>
> > Now I would
On Thu, Nov 17, 2016 at 4:57 PM, Ben Goertzel wrote:
>
>
> Note that if we changed to a different link grammar dictionary (e.g.
> one that was learned by unsupervised learning, hint hint)
Yeah, I've recently started laying a plan to restart that.
--linas
On Fri, Nov 18, 2016 at 7:55 AM, Ben Goertzel wrote:
> I think I know how to use this to make a replacement for RelEx2Logic,
> by using a parallel English-Lojban corpus, and then using the pattern
> miner to find patterns from the set of pairs of the form
>
> (link parser output for English sentence
Hi Linas,
Roman has already built a parser (in Haskell) that maps Lojban
sentences into Atomese structures.
I think I know how to use this to make a replacement for RelEx2Logic,
by using a parallel English-Lojban corpus, and then using the pattern
miner to find patterns from the set of pairs of the form
Not sure I understand the question.
You can use WordNet to look up synonymous words/phrases. It has Perl,
Python, Java and other APIs.
NLTK is Python-only, but it's a huge Swiss-army knife of tools.
Perhaps you want to create a Lojban parser? There are probably
off-the-shelf solutions for this.