Hi Si,

On 10 October 2013 07:59, <[email protected]> wrote:
>
> What do you mean by character n-grams? If you mean things like "&ab" or
> "ui2", then given that there are so few characters compared to words, is
> there a problem that can't be solved without a look-up table for n < y
> (where y < 4ish)?
>
> Or are you looking at y > 4ish? Because if so, do you run into the
> issue of a sudden space explosion?
>

Yes, just the tokens in a text broken up into sequences of their constituent
characters. In my initial tests, language detection works well with n=3,
particularly when the head and tail bigrams are also included. So I need
something to generate the required sequence files from my training data.
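For illustration, a minimal sketch of the kind of splitting I mean (the `^`/`$` boundary markers and the function name are just assumptions for the example, not part of any particular tool):

```python
def char_ngrams(token, n=3):
    """Break a token into character n-grams, and additionally emit the
    head and tail bigrams marked with boundary symbols (an illustrative
    convention, not a fixed standard)."""
    # Sliding window of length n over the token's characters.
    grams = [token[i:i + n] for i in range(len(token) - n + 1)]
    if len(token) >= 2:
        grams.append("^" + token[:2])   # head bigram, e.g. "^ng"
        grams.append(token[-2:] + "$")  # tail bigram, e.g. "am$"
    return grams

print(char_ngrams("ngram"))
# -> ['ngr', 'gra', 'ram', '^ng', 'am$']
```

Generating the sequence files would then just be a matter of running every token in the training data through something like this and writing the grams out line by line.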
