Hello everyone, and thank you so much for your work on Sequel.

While benchmarking in-memory processing of large datasets (100K - 200K rows) 
in ActiveRecord and Sequel, we noticed surprisingly good results from the 
sql_query gem, which does nothing but invoke 
ActiveRecord::Base.connection.execute(SQL).entries. This suggests that the 
deserialization process is suboptimal (at least for large datasets) in both 
Sequel and ActiveRecord, which is fairly surprising, since creating 1,000,000 
Ruby objects doesn't seem that expensive (with the possible exception of 
Date.new).
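To put a rough number on that intuition, here is a minimal, DB-free sketch of the allocation cost alone; the timings are hardware-dependent and purely illustrative, and the Date value is made up:

```ruby
require 'benchmark'
require 'date'

# Rough cost of allocating 1,000,000 plain objects vs. Date instances.
# Timings vary by machine; this only illustrates the relative scale.
N = 1_000_000

plain_s = Benchmark.realtime { N.times { Object.new } }
date_s  = Benchmark.realtime { N.times { Date.new(2017, 4, 11) } }

puts format('Object.new x %d: %.2fs', N, plain_s)
puts format('Date.new   x %d: %.2fs', N, date_s)
```

On a typical laptop both loops finish well under the ~12s wall time shown in the benchmark below, which is what makes the ORM overhead puzzling.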

As the result set grows, 
"ActiveRecord::Base.connection.execute(SQL).entries" shows a fairly small 
results-processing cost, while both ORMs degrade when used to instantiate 
result objects:

Benchmark.measure { DB[:orders].limit(100000).map { |i| i[:id] } }

D, [2017-04-11T00:07:11.743980 #38269] DEBUG -- : (1.468975s) SELECT * FROM 
"orders" LIMIT 100000

=> #<Benchmark::Tms:0x007fc061756768 @label="", @real=13.32571900000039, 
@cstime=0.0, @cutime=0.0, @stime=0.47000000000000597, 
@utime=11.819999999999993, @total=12.29>
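A DB-free way to see part of this gap is to simulate passing raw tuples through (roughly what .entries does) versus building a Ruby hash per row (part of what an ORM does during deserialization). The column names and values below are made up for illustration:

```ruby
require 'benchmark'

# Simulated raw driver result: one array ("tuple") per row.
ROWS = 100_000
COLS = [:id, :user_id, :total].freeze

raw = Array.new(ROWS) { |i| [i, i % 1000, i * 1.5] }

# Pass-through, as connection.execute(SQL).entries effectively is.
pass_through = Benchmark.realtime { raw.each { |r| r } }
# One hash allocated per row, keyed by column name.
hash_per_row = Benchmark.realtime { raw.map { |r| COLS.zip(r).to_h } }

puts format('pass-through: %.3fs', pass_through)
puts format('hash per row: %.3fs', hash_per_row)
```

The per-row hash build is only one piece of the ORM pipeline; typecasting and model instantiation add further per-row allocations on top of it.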

1) With that in mind, could someone please share an opinion on the likely 
causes of the performance loss in such cases?
2) While jeremyevans/simple_orm_benchmark is a fairly good illustration of 
Sequel's superiority, I wonder if anyone has explored ORM performance in 
terms of the detailed costs of networking, parsing, and result object 
allocation?
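For the allocation part of that breakdown, one approach I've been using is counting allocations with GC.stat rather than timing; a minimal sketch (row shape invented for illustration):

```ruby
# Count objects allocated while building result rows, to attribute cost
# to result-object allocation specifically (independent of network/parse).
rows = 10_000

before = GC.stat(:total_allocated_objects)
built  = Array.new(rows) { |i| { id: i, name: "name-#{i}" } }
after  = GC.stat(:total_allocated_objects)

puts "objects allocated per row: ~#{(after - before) / rows.to_f}"
```

Wrapping the same counter around a driver-level fetch and an ORM-level fetch of the same query would separate allocation overhead from networking and parsing.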

With regards,
Andriy Tyurnikov

-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/sequel-talk.
For more options, visit https://groups.google.com/d/optout.
