On 04-07-2012 21:19, Jeremy Evans wrote:
On Wednesday, July 4, 2012 4:12:42 PM UTC-7, Rodrigo Rosenfeld Rosas wrote:

    On 04-07-2012 19:49, Jeremy Evans wrote:
    On Wednesday, July 4, 2012 2:39:11 PM UTC-7, Rodrigo Rosenfeld
    Rosas wrote:

        There is a critical part in my application where I need to
        generate the SQL by myself.

        It would be easier to process the results of this generated
        dynamic SQL if I could iterate over each row by index.

        I mean, instead of having DB[sql].all to return something
        like [{tid: 23, tname: 'Some name'}, ...] I'd prefer to get
        something like: [[23, "Some name"], [...], ...].

        Currently I'm doing something like below, but I'd like to
        know if there is some method that already does that and that
        I'm not aware of:

        builder = QueryBuilder.new(params)
        json = DB[builder.sql].map do |r|
          r = r.map{|k, v| v} # this is the trick I'm currently using
          builder.columns.map do |c|
            raw = r.shift
            case c[:type]
            when 'range' then [raw, r.shift]
            # ... other column types handled here
            else raw
            end
          end
        end


    You can provide an argument to map:

      ds = DB[builder.sql]
      ds.map(ds.columns)
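
    For instance, against a throwaway dataset (just an illustrative
    sketch, not your builder query), the array argument makes map
    return arrays of values instead of hashes:

      ds = DB['SELECT 1 AS a, 2 AS b']
      ds.map(ds.columns) # => [[1, 2]]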

    Thanks for your support once more, Jeremy.

    The problem is that my columns array doesn't contain the alias
    information. It is something like [{field_id: 687, type:
    'string'}, {field_id: 934, type: 'time-span'}, ...]. All of my
    columns happen to be aliased in the generated query, but they
    don't need to be; I don't rely on the aliases for anything but
    readability. It wouldn't be hard to include the aliases in the
    columns array, but it wouldn't be trivial either, and I'd need to
    add more complexity and tests to my QueryBuilder just to work
    around a Sequel limitation.


ds.columns should return an array of the correct alias symbols (did you try it?). If you do DB['SELECT 1 AS a, 2 AS b'].columns you should get [:a, :b].

Ah, sorry. I overlooked your response and thought you were talking about my own columns array :)

I'll give it a try soon, thanks :)

Note that Dataset#columns may require a query if you call it before getting results from the dataset. You might want to keep your current code and do this instead of r.map:

  r = r.values_at(*ds.columns)

Another great trick I wasn't aware of. I'll probably stick with your prior suggestion, but this is also a good one :)

That way you aren't relying on a specific order in the hash.
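
For example, in plain Ruby (with made-up values just to show the behavior), Hash#values_at returns the values in the order of the keys you pass, regardless of how the hash was populated:

  row = {tname: 'Some name', tid: 23} # insertion order differs from the select list
  row.values_at(:tid, :tname)         # => [23, "Some name"]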

    Your current code (r = r.map{|k, v| v}) is not guaranteed to be
    portable across adapters (as adapters make no guarantee that hash
    entry order is the same as column order), and certainly is
    unlikely to work on ruby 1.8 because hashes aren't ordered there.

    Yeah, I was aware of that. I'm not worried about 1.8 because I
    haven't used it for several years. But I wasn't aware that some
    adapters might not fill the hash in the same order as the columns.


Well, most of them probably do, since that's the simplest way in most cases. However, I wouldn't write code that relies on it.

Especially when the fix is so simple, as you pointed out in your previous examples :)

Thanks a lot! Much appreciated!

Cheers,
Rodrigo.
