I'm having the same problem and don't know how to get past it. I'm
using Sphinx with Postgres on Windows.

My problem is a model that has ids of 987074798 and higher. Currently
I'm not using any fixtures, but I did in the past, which is probably
the cause of the high values in the Postgres sequences for primary keys
in most tables.

Anyway, the funny thing is that if I index only this model, the index
is built correctly. But if I index several models, building the index
for this model fails with the following message:
ERROR: index 'section_core': sql_range_query: ERROR:  integer out of
range

After looking at the generated *.sphinx.conf, I think the problem
might be in the recalculation of the id field in the sql_query setting:
SELECT "sections"."id" * 3 + 1 AS "id"
It seems this multiplication by 3 is causing the problem, because when
I index only the problematic model, I get no errors and the index is
built. It seemed strange at first, because the calculated value
(987074798 * 3 + 1 = 2961224395) doesn't exceed 4 bytes for an unsigned
int. But I suppose the catch is that Postgres's integer type is signed,
so its maximum is 2147483647, and that value does exceed it.
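The arithmetic can be checked directly. A minimal Ruby sketch (the `* 3 + 1` comes from the generated sql_query above; the limits are the standard 32-bit integer ranges):

```ruby
# Document id that the generated sql_query asks Postgres to compute:
doc_id = 987_074_798 * 3 + 1           # => 2_961_224_395

int4_max   = 2**31 - 1                 # Postgres `integer` is signed: max 2147483647
uint32_max = 2**32 - 1                 # Sphinx docids are unsigned: max 4294967295

puts doc_id > int4_max                 # true  -> Postgres raises "integer out of range"
puts doc_id <= uint32_max              # true  -> the value itself would fit a Sphinx docid
```

So the value would be a perfectly valid Sphinx document id; it just can't survive the trip through Postgres's signed integer arithmetic.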

I must say I'm new to TS and to Sphinx itself, so any help would be
appreciated!


On Jul 30, 7:56 pm, David <[email protected]> wrote:
> Thanks for the reply Pat.
>
> I finally figured out what's going on. And I think I have a few extra
> gray hairs now too!
>
> Apparently, rails migrations use a column type of "serial" in postgres
> for table ids. These serial fields are 4 bytes and signed (ergo they go
> from 1 to 2147483647). However, loading fixtures picks pseudo-random ids
> like 541702176, which fits on its own, but leaves so little headroom
> that the indexer's id arithmetic pushes it out of range. That's why
> sphinx's indexer was flipping out.
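For context on where those big fixture ids come from: Rails derives them by hashing the fixture's label. The sketch below assumes the behaviour of ActiveRecord's `identify` helper (CRC32 of the label, reduced modulo a cap; the exact cap has varied between Rails versions, so treat this as illustrative):

```ruby
require "zlib"

# Sketch of how Rails assigns an id to a fixture labelled e.g. `david:`
# when the fixture file doesn't specify one explicitly.
MAX_ID = 2**30 - 1

def identify(label)
  Zlib.crc32(label.to_s) % MAX_ID
end

id = identify("david")
puts id                # deterministic: the same label always hashes to the same id
puts id < MAX_ID       # true: large, but bounded
```

Deterministic hashing is what lets fixtures reference each other by label, and it's also why the resulting ids look like random nine-digit numbers rather than a small sequence.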
>
> I guess the solution would be to either A) use a bigger column type or
> B) not use fixtures in production (or, if you must, give your
> fixtures explicit ids).
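Option B's "give your fixtures explicit ids" looks something like this in a fixture file (a hypothetical users.yml; small explicit ids keep both the serial column and Sphinx's docid arithmetic comfortably inside 32-bit range):

```yaml
# test/fixtures/users.yml -- hypothetical example
david:
  id: 1
  name: David

pat:
  id: 2
  name: Pat
```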
>
> I hope that helps anyone else with the same problem that searches
> google.
>
> On Jul 29, 4:57 am, Pat Allan <[email protected]> wrote:
>
> > Yeah, it was the first problem that was out of the ordinary... although
> > I'm surprised the id causing the problem was so small (it's not
> > anywhere close to hitting the maximum value of a 32-bit integer).
> > Still, using normal ids is definitely the best way to go.
>
> > Good to know you've got it all sorted.
>
> > --
> > Pat
>
> > On 28/07/2009, at 6:44 PM, David wrote:
>
> > > I actually solved the hanging problem by adding proper database
> > > indexes to my tables.
>
> > > I can also solve the first problem by changing the ids to reasonable
> > > numbers. Setting "sql_range_step: 10000000" does not seem to solve the
> > > problem.
>
> > > The id in question is 541702176.
>
> > > Thanks.
>
> > > On Jul 28, 9:46 am, David <[email protected]> wrote:
> > >> Hi guys,
>
> > >> I guess this is not really a thinking sphinx problem but I am having
> > >> some trouble indexing my tables (when I run "rake ts:index").
>
> > >> The first error I get is:
>
> > >> ERROR: index 'user_core': sql_range_query: ERROR:  integer out of
> > >> range
> > >>  (DSN=pgsql://root:*...@localhost:5432/
> > >> robertson_scholars_development).
>
> > >> The second error I get is that it hangs when indexing a model:
>
> > >> distributed index 'assignment' can not be directly indexed; skipping.
> > >> indexing index 'custom_field_type_core'...
>
> > >> It just freezes. There's only 20 records in the table.
>
> > >> I know when I run "index .... --all" I get the same problems so that
> > >> leads me to believe something's up with sphinx and not TS. However,
> > >> any help on how I can debug these errors would be highly appreciated.
>
> > >> Thanks.
>
> > >> David

--
You received this message because you are subscribed to the Google Groups
"Thinking Sphinx" group.
