First – thanks a lot, Jeremy, for your prompt answer! I love and really
appreciate it when a maintainer has this much dedication for their
project.

Jeremy Evans:

> On Jul 13, 3:06 am, Piotr Szotkowski <[email protected]> wrote:

>> I’m on my honeymoon and have very sporadic net access
>> – hence can’t google as much as I’d want to. :|

> I recommend a reevaluation of your priorities. :)

Yeah, I did; it turns out that I get rather sad when I don’t create
anything for a few days in a row. Also, between my daily PHP work and
my nightly Ruby PhD coding I don’t have much (well, any) time to code
stuff for fun lately, and they told me honeymoons are all about having
fun… :)

> This behavior is by design. Models use the database connection you
> have defined to get the columns/schema for the model's table/dataset,
> so the database connection needs to exist first.

Ah, I see; thanks for the clarification. Is this a common ‘feature’
of models across different ORMs (Sequel/DataMapper/ActiveRecord), or
is it something Sequel-specific?

(I don’t plan to switch ORMs, I’m asking out of curiosity. I’m also
considering dropping models altogether and just using other Sequel
features instead, as I have only two core tables in my database and
don’t foresee more for quite some time, so if the price to have the
models specable is to have ugly hacks, then it might be more elegant
to just go without models.)

> You can set which database to use by default:

>   Sequel::Model.db = ...

Hm, that might be useful. Is there a way to redefine once-defined
models? Something like an unset() call for classes, followed by
re-evaluation of their source files, perhaps?

Hm, maybe something along the lines of

  def Signore.model_foo_redefiner(database)
    Class.new(Sequel::Model(database[:foos])) do
      many_to_many :bars
      # …model definition…
    end
  end

  def Signore.model_bar_redefiner(database)
    Class.new(Sequel::Model(database[:bars])) do
      many_to_many :foos
      # …model definition…
    end
  end

  # …some code that (re)sets the database…

  Foo = Signore.model_foo_redefiner(database)
  Bar = Signore.model_bar_redefiner(database)

would work? Would the many_to_many calls work in this case?
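(For the unset() part on its own, plain-Ruby constant juggling seems
to work; a minimal sketch, nothing Sequel-specific, so I'm not sure it
covers whatever model metadata Sequel caches behind the scenes:)

```ruby
# Minimal sketch (plain Ruby, no Sequel): a constant can be removed and
# rebound, which is about as close to an unset() for classes as it gets.
class Foo; end
old_foo = Foo

Object.send(:remove_const, :Foo)   # Foo is no longer defined
Foo = Class.new                    # rebind the name to a fresh class

Foo.equal?(old_foo)  # => false; old instances still reference old_foo
```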

> I'd recommend just requiring your models after setting up your
> database connections, and making sure each model uses the correct
> database so you don't have to switch the databases afterward.

My problem isn’t with the way the final app runs (this will surely run
off of a single db), my problem is with my specs – I want some of them
to work against an example file-based database, and some against a fresh,
in-memory db (and maybe some others testing some edge-case databases).

(I already hit an issue with Sequel::Migrator only migrating the first
database Sequel connected to, but it turned out to be a class naming
problem combined with module scope, and I overcame it by using anonymous
classes for migrations.)
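(Roughly, the clash was two migration files defining the same constant
in one module scope, so the second quietly reopened the first; anonymous
classes have no name to collide on. A toy sketch of the idea, with no
Sequel involved:)

```ruby
# Two migrations that would both have been named e.g. `CreateTables`
# get distinct anonymous classes instead of reopening one constant.
first  = Class.new { define_method(:up) { :create_foos } }
second = Class.new { define_method(:up) { :create_bars } }

first.new.up   # => :create_foos
second.new.up  # => :create_bars
```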

> If you are thinking about switching a model's database during the
> running of the app, I advise you to reconsider. If you really want
> to do that, create a subclass for each model for each database.

Hm. I’d rather not make my code more complicated (especially this much)
just for the sake of it being RSpec-testable; I doubt that’s one of the
cases where testability improves the app, especially if the app itself
will always use just a single database. :]

I know I ask for a lot (and do feel free to skip this request), but is
there a sane (and simple) solution for my case? In other words – how
should I spec stuff that uses models, but also uses different databases
for different specs? (The databases have the same schemas, they just
differ when it comes to the data they store.)

> If you find the Amalgalite adapter is a bit slow, you may want to try
> the SQLite adapter. I know that the SQLite adapter is much faster when
> running the specs.

Thanks for the tip! I try to limit the dependencies of my final app
(especially as it’s supposed to be small/simple), and I got the
notion that Amalgalite is a ‘lighter’ dependency than requiring the
existence of a full SQLite install. But I think I should support both
in the end, so I should just connect via what’s on a given system and
use that (and then could simply prefer SQLite to speed up the testing).
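(A sketch of what I mean by the adapter detection; the helper name
preferred_sqlite_adapter is made up, and the LoadError fallback is just
my assumption about how to probe for the installed driver:)

```ruby
# Hypothetical helper: prefer the SQLite driver when it's installed,
# fall back to Amalgalite, and return nil when neither is available.
def preferred_sqlite_adapter
  require 'sqlite3'
  :sqlite
rescue LoadError
  begin
    require 'amalgalite'
    :amalgalite
  rescue LoadError
    nil
  end
end
```

The returned symbol could then be interpolated into the connection
string (e.g. "#{preferred_sqlite_adapter}://signore.db"), so the specs
would automatically get the faster SQLite adapter whenever it's there.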

Ah, one more question, if I may – it seems using Amalgalite (haven’t
tried SQLite) to simply read a database changes the database’s file.
I tried to make the file read-only, but then I hit a Ruby segmentation
fault in amalgalite-0.10.0/lib/amalgalite/statement.rb’s line 79.

I can only assume that it’s a bug in Amalgalite 0.10.0 (or its
combination with Ruby 1.9.1-p129), but before I report it there
I’d rather get a clarification: neither Amalgalite nor SQLite should
break when accessing a read-only (on the filesystem level) database,
especially if it’s done just for reading, right? (They obviously
shouldn’t segfault, but I’m asking whether SQLite requires something
to change the file on every read, or should SELECTs from a read-only
file ‘just work’.)

Once again: thanks a ton for your great work on Sequel and the
prompt reply – and apologies for the longish ramblings above!

— Piotr Szotkowski
-- 
Triskadekaphobiacs are presumably upset that, although
13.untrust appears to succeed and reinforce their beliefs,
the state doesn’t stick; 13.untrusted? subsequently returns
false. Similarly, logicians and cynics alike must, respectively,
rejoice and despair that true.untrust.untrusted? is perpetually
false.                            [Run Paint Run Run, ruby-core]
