Unfortunately, the 600/100 counts are just from my local machine. The 
production table is growing really fast.

I'm afraid that denormalization is not possible in this case, as 
MemberIntegrationFields are very dynamic.

I ran more tests in ActiveRecord, and it was just luck that it worked 
there.

Member.includes(member_integrations: [:member_integration_fields]).
  order('member_integration_fields.id').limit(50).count

returns 43.
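
In case it helps, here is a minimal, untested sketch of what a member-level 
count and page could look like in Sequel, building on the subselect idea from 
Jeremy's reply quoted below (nodes is the eager_graph dataset from the earlier 
message; unordered is used so the DISTINCT subquery doesn't conflict with the 
graph's ORDER BY):

  # Total members (not joined rows): count distinct member ids.
  total = nodes.unordered.select{members[:id]}.distinct.from_self.count

  # One page of 50 members: restrict the outer query to the first 50
  # distinct member ids, then let eager_graph combine the joined rows.
  page = nodes.where{|o|
    o.members[:id] =~ nodes.select{members[:id]}.distinct.from_self.limit(50)
  }.all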

Any other ideas for how to fetch these members?

Thank you,
Michal



On Thursday, July 29, 2021 at 5:52:48 PM UTC+2, Jeremy Evans wrote:

> On Thu, Jul 29, 2021 at 8:20 AM Michal Macejko <[email protected]> 
> wrote:
>
>> I see, sorry about that. I'm trying to put together a GraphQL query 
>> resolver for members, so I need pagination, total count, etc.
>>
>> nodes = current_account.members_dataset.eager_graph(member_integrations: 
>> [:member_integration_fields]).order(*array_order)
>>
>> nodes.count # returns 600, which is incorrect
>>
>
> This is the count of the total number of rows returned by the dataset.
>  
>
>> nodes.all.count # returns 100, which is correct, but correct me if I'm 
>> wrong, it's an array count 
>>
>
> This is a count of members.  The 600 rows returned are for 100 members, 
> and the eager_graph code handles combining the 600 rows into 100 members.
>  
>
>> nodes.offset(0).limit(50).count # returns 50, but 
>> nodes.offset(0).limit(50).pluck(:id).uniq.count returns 8
>> nodes.offset(0).limit(50).all.count # returns 8
>>
>> So I'm unable to fetch the right count of members or compute a total count 
>> (without using the array count).
>>
>
> I'm guessing you want 50 members.  You cannot use a simple limit for that, 
> as that is a limit on returned rows, not a limit on members.  One way to 
> handle this could be something like:
>
>   nodes.where{|o| o.members[:id] =~
>     nodes.select{members[:id]}.distinct.from_self.limit(50)}
>  
> Basically, a subselect first gets the 50 members, then the main 
> query does the full load for all 50.  Note that this approach may be slow 
> for large datasets (for 600 total rows it will probably be fine).
>
> A significantly faster approach would be to partially denormalize the 
> dataset, such that the order you want to use is directly accessible in the 
> members table.  With that approach, you could switch from #eager_graph to 
> #eager, and a simple limit would work.
>
> Thanks,
> Jeremy
>
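
For completeness, a rough, untested sketch of the partially denormalized 
approach Jeremy describes above, assuming a hypothetical sort_key column 
maintained on the members table (not something that exists in the schema 
discussed here):

  # With the order available directly on members, eager can replace
  # eager_graph, and limit applies to members rather than joined rows.
  page = current_account.members_dataset.
    order(:sort_key).
    limit(50).
    eager(member_integrations: [:member_integration_fields]).
    all

  page.length  # => 50 members; associations are loaded in separate queries

Since the integration fields are described above as too dynamic to 
denormalize, this only illustrates the trade-off between the two approaches.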
