One of the issues I face in developing the Alexa skills for LMS is that
I have to search your library *textually*, based on the speech-to-text
of what Alexa thinks she *heard*. This can be problematic with
similar-sounding words like pair/pear, hear/here, knot/not, ate/eight,
flour/flower, etc. I can only fuzzy-match / post-process the returned
values if the searched-for term is actually among the search results.
Of course, a search for a song with *flower* in the title will never
return a song with *flour* in it - end of story. Asking Alexa again
won't help either, because she will hear the same thing every time and
hand back the same searched-for text.

A potential workaround for this would be a plugin that extends the
scanning process to add columns to the database containing computed
*Double Metaphone* (DM) values for tags. There is a CPAN lib for DM,
and all it does is map a word to a short code for how the word
*sounds*, according to pre-defined rules (there's a code sketch right
after these examples). Some examples:

Flour, flower -> FLR
Not, knot -> NT
Hear, here -> HR
Ate, eight -> AT
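
For the curious, here's roughly how those codes would be produced. This
is a minimal sketch assuming the CPAN module in question is
Text::DoubleMetaphone (an assumption on my part - there may be other
DM modules on CPAN):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::DoubleMetaphone qw(double_metaphone);

    # Print the primary DM code for each word, plus the alternate
    # code where the algorithm finds a second plausible pronunciation.
    for my $word (qw(flour flower not knot hear here ate eight)) {
        my ($code, $code2) = double_metaphone($word);
        printf "%-7s -> %s%s\n", $word, $code,
            ( defined $code2 && $code2 ne $code ) ? " / $code2" : '';
    }

Per the table above, both members of each pair should collapse to the
same code - that collision is exactly the property being exploited.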

So the song title 'True Colors' would become TRKL, but so would 'Tru
Colors', 'True Colours' and even 'Trew Cullers'. This means I could
match based on the *sound* of the title rather than its spelling, and
therefore find stuff in your library more easily. An added benefit is
that this would work in the GUI too, helping you find tracks with
misspelled tags.
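
The query side would then be a plain equality lookup on the
precomputed column. A rough sketch of what I have in mind - the table
and column names (tracks, title_dm) are made up for illustration, and
a real plugin would go through the scanner and Slim::Schema rather
than raw DBI:

    use strict;
    use warnings;
    use DBI;
    use Text::DoubleMetaphone qw(double_metaphone);

    # Turn a phrase into a DM key, word by word, so that
    # "flour" and "flower" (or "colors" and "colours") collide.
    sub dm_key {
        my ($phrase) = @_;
        return join ' ', map { scalar double_metaphone($_) }
                         grep { length } split /\W+/, lc $phrase;
    }

    # Look up tracks whose precomputed DM key matches the key
    # derived from whatever Alexa heard.
    sub find_tracks_by_sound {
        my ($dbh, $spoken) = @_;
        return $dbh->selectall_arrayref(
            'SELECT id, title FROM tracks WHERE title_dm = ?',
            { Slice => {} },
            dm_key($spoken),
        );
    }

At scan time the plugin would populate title_dm with dm_key() of the
title, in the same pass that fills the existing search columns, so the
lookup stays a cheap indexed equality match rather than a fuzzy scan.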

Just curious if anybody has ever experimented with this kind of thing
before? Would the increased DB size and time-to-rescan be an issue at
all?

