This excellent work on parsing is very well presented, both here on the AGI
mailing list and on the cited weblog, so congratulations are in order.

In the Mentifex AI Minds, which think in English, German, Russian or ancient
Latin, the parser mind-modules started out as modules for detecting the
part of speech (noun, verb, etc.), but over time they shifted toward dealing
mainly with the functionality of a noun or a verb as a word whose part of
speech is either already known or, when ambiguous, easily determined.
Since German, Russian and Latin are highly inflected languages, the main
work of the parser became determining things like first, second or third
person for verbs and subject or object roles for nouns. The Latin LaParser
actually looks at an input noun or verb more than once, because the true
functionality may not be known until the full sentence comes in, or perhaps
the full clause of a compound sentence.
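The multi-pass idea described above could be sketched roughly as follows. This is a hypothetical illustration with invented names and a toy lexicon, not the actual Mentifex code, and it assumes the inflected endings are unambiguous (which real Latin often is not):

```python
# Hypothetical sketch of deferred role assignment for an inflected
# language: a word's part of speech may be tagged immediately, but its
# grammatical role (functionality) is fixed only once the whole clause
# has been seen.

# Toy lexicon of Latin forms (assumption: each ending uniquely
# identifies case or person here).
LEXICON = {
    "agricola": ("noun", "nominative"),   # farmer (subject form)
    "agricolam": ("noun", "accusative"),  # farmer (object form)
    "amat": ("verb", "3rd-singular"),     # he/she loves
}

def parse_clause(words):
    """Pass 1: tag parts of speech. Pass 2: assign clause roles."""
    tagged = [(w, *LEXICON[w]) for w in words]
    roles = {}
    for word, pos, feature in tagged:
        if pos == "noun" and feature == "nominative":
            roles["subject"] = word
        elif pos == "noun" and feature == "accusative":
            roles["object"] = word
        elif pos == "verb":
            roles["verb"] = word
    return roles

# Latin word order is free, so the roles come from inflection,
# not from position in the sentence:
print(parse_clause(["agricolam", "amat", "agricola"]))
```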

http://ai.neocities.org/EnParser.html -- is the English parser.

http://ai.neocities.org/LaParser.html -- is the Latin parser.

http://ai.neocities.org/RuParser.html -- is the Russian parser.

Mentifex (MakerOfMinds)


On Sun, Feb 20, 2022 at 2:12 PM WriterOfMinds <jennifer.hane....@gmail.com>
wrote:

> I've continued to quietly labor away at my text-based AI project, and
> since I made a pretty large upgrade to the Text Parser recently, I thought
> I'd share some results. I will happily admit that this is still very weak
> and I have a long way to go! But at least it's recognizing all the parts of
> speech now.
>
> I benchmark my parser by making it try to parse sentences from "early
> reader" children's books (which are actually still quite difficult). The
> test script compares the data structure produced by the parser to a
> "golden" example supplied by me, which encodes the correct sentence
> structure. Diagrams are generated from each to yield a quick visual
> comparison; example diagrams are posted on the blog linked below.
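A golden-comparison test script of the kind described above might look roughly like this sketch. The parse format, names, and test cases here are hypothetical, not the actual Acuitas code:

```python
# Hypothetical sketch of a golden-file benchmark for a parser.
# Assumptions: parse() returns a nested dict, and each test case pairs
# a sentence with a hand-written "golden" structure (None marks a
# sentence the parser is known not to support yet).

def parse(sentence):
    # Stand-in for the real parser; raises on unsupported grammar.
    if "swimming" in sentence.lower():  # e.g. a gerund phrase
        raise NotImplementedError("unsupported grammar structure")
    return {"subject": "cat", "verb": "sat", "modifiers": ["on the mat"]}

GOLDEN = [
    ("The cat sat on the mat.",
     {"subject": "cat", "verb": "sat", "modifiers": ["on the mat"]}),
    ("Swimming is fun.", None),  # known-unparseable sentence
]

def run_benchmark(cases):
    """Bin each sentence into one of the four outcome categories."""
    stats = {"CRASHED": 0, "UNPARSEABLE": 0, "INCORRECT": 0, "CORRECT": 0}
    for sentence, golden in cases:
        try:
            result = parse(sentence)
        except NotImplementedError:
            stats["UNPARSEABLE"] += 1
        except Exception:
            stats["CRASHED"] += 1
        else:
            stats["CORRECT" if result == golden else "INCORRECT"] += 1
    return stats

print(run_benchmark(GOLDEN))
```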
>
> The current stats ("unparseable" sentences contain grammar structures that
> the parser simply does not support yet - gerund phrases, for example):
>
> *Inside the Earth*
>
> Raw sentence counts (98 sentences total):
>
>               Early Trials   July 2021   February 2022
> CRASHED             0             0             0
> UNPARSEABLE        37            37            30
> INCORRECT          27            21            23
> CORRECT            34            40            45
>
> As percentages:
>
>               Early Trials   July 2021   February 2022
> CRASHED             0             0             0
> UNPARSEABLE      37.76%        37.76%        30.61%
> INCORRECT        27.55%        21.43%        23.47%
> CORRECT          34.69%        40.82%        45.92%
>
> *Out of the Dark*
>
> Raw sentence counts (113 sentences total):
>
>               Early Trials   July 2021   February 2022
> CRASHED             0             0             0
> UNPARSEABLE        48            48            38
> INCORRECT          27            19            21
> CORRECT            38            46            54
>
> As percentages:
>
>               Early Trials   July 2021   February 2022
> CRASHED             0             0             0
> UNPARSEABLE      42.48%        42.48%        33.63%
> INCORRECT        23.89%        16.81%        18.58%
> CORRECT          33.63%        40.71%        47.79%
>
> Drop by my blog if you'd like to see more example pictures, or download
> any test data:
> https://writerofminds.blogspot.com/2022/02/acuitas-diary-46-february-2022.html
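As a quick sanity check, the percentage tables above follow directly from dividing each raw count by the sentence total for each book (98 and 113 sentences). A minimal recomputation of the February 2022 rows:

```python
# Recompute the February 2022 percentage rows from the raw counts
# reported above (98 sentences for "Inside the Earth", 113 for
# "Out of the Dark").

def breakdown(counts, total):
    """Convert raw outcome counts to percentages of the total."""
    return {k: round(100 * v / total, 2) for k, v in counts.items()}

inside_feb = {"CRASHED": 0, "UNPARSEABLE": 30, "INCORRECT": 23, "CORRECT": 45}
out_feb = {"CRASHED": 0, "UNPARSEABLE": 38, "INCORRECT": 21, "CORRECT": 54}

print(breakdown(inside_feb, 98))   # 30.61 / 23.47 / 45.92, as tabled
print(breakdown(out_feb, 113))     # 33.63 / 18.58 / 47.79, as tabled
```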
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/T40b4adac09b570fb-Md147744d02dc7be93649091d>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T40b4adac09b570fb-Mee6040e4f35379f0471fa4a3
Delivery options: https://agi.topicbox.com/groups/agi/subscription
