On 04-07-2012 21:25, Rodrigo Rosenfeld Rosas wrote:
On 04-07-2012 21:19, Jeremy Evans wrote:
On Wednesday, July 4, 2012 4:12:42 PM UTC-7, Rodrigo Rosenfeld Rosas
wrote:
On 04-07-2012 19:49, Jeremy Evans wrote:
On Wednesday, July 4, 2012 2:39:11 PM UTC-7, Rodrigo Rosenfeld
Rosas wrote:
There is a critical part in my application where I need to
generate the SQL by myself.
It would be easier to process the results of this generated
dynamic SQL if I could iterate over each row by index.
I mean, instead of having DB[sql].all to return something
like [{tid: 23, tname: 'Some name'}, ...] I'd prefer to get
something like: [[23, "Some name"], [...], ...].
Currently I'm doing something like below, but I'd like to
know if there is some method that already does that and that
I'm not aware of:
builder = QueryBuilder.new(params)
json = DB[builder.sql].map do |r|
  r = r.map{|k, v| v} # this is the trick I'm currently using
  builder.columns.map do |c|
    raw = r.shift
    case c[:type]
    when 'range' then [raw, r.shift]
    ...
    else raw
    end
  end
end
You can provide an argument to map:
ds = DB[builder.sql]
ds.map(ds.columns)
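For illustration, a plain-Ruby sketch of what passing an array of column symbols to map does (the rows and columns here are made up, and Sequel itself isn't needed to show the transformation — each row hash is mapped to its values in column order):

```ruby
# Hypothetical rows in the shape Sequel returns: an array of hashes.
rows = [{tid: 23, tname: 'Some name'}, {tid: 42, tname: 'Other name'}]
cols = [:tid, :tname] # what ds.columns would report for this query

# ds.map(cols) produces, for each row, the values in that column order,
# equivalent to mapping each row hash through Hash#values_at:
result = rows.map { |r| r.values_at(*cols) }
# => [[23, "Some name"], [42, "Other name"]]
```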
Thanks for your support once more, Jeremy.
The problem is that my columns array doesn't contain the alias
information. It is something like [{field_id: 687, type:
'string'}, {field_id: 934, type: 'time-span'}, ...]. Even though
all my columns are aliased in the generated query, they don't have
to be; I don't rely on the aliases for anything but readability. It
wouldn't be hard to include the alias in the columns array, but it
wouldn't be trivial either, and I'd need to add more complexity
and tests to my QueryBuilder just to work around a Sequel limitation.
ds.columns should return an array of the correct alias symbols (did
you try it?). If you do DB['SELECT 1 AS a, 2 as b'].columns you
should get [:a, :b].
Ah, sorry. I overlooked your response and I thought you were talking
about my previous column variable :)
I'll give it a try soon, thanks :)
Note that Dataset#columns may require a query if you call it before
retrieving results from the dataset. You might want to keep your
current code and do this instead of r.map:
r = r.values_at(*ds.columns)
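A minimal sketch of why values_at helps (hypothetical data, not the actual query): unlike r.map { |k, v| v }, which depends on the hash's key order, values_at pulls values in exactly the order ds.columns reports, so the two stay in sync even if they differ:

```ruby
# Hypothetical row hash; the key order here deliberately differs from
# the column order to show values_at is independent of hash ordering.
row = { tname: 'Some name', tid: 23 }
columns = [:tid, :tname] # the order ds.columns would report

ordered = row.values_at(*columns)
# => [23, "Some name"] -- values in column order, not hash order
```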
Both solutions worked here, but I guess this one is better because, if I
understood correctly, the first one would iterate over all records twice,
right? I shouldn't have that many records for the performance difference
to be noticeable, but this makes me feel better :P
Thanks :)
--
You received this message because you are subscribed to the Google Groups
"sequel-talk" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/sequel-talk?hl=en.