On Jul 14, 4:52 am, Piotr Szotkowski <[email protected]> wrote:
> > This behavior is by design. Models use the database connection you
> > have defined to get the columns/schema for the model's table/dataset,
> > so the database connection needs to exist first.
>
> Ah, I see; thanks for the clarification. Is this a common ‘feature’
> of models across different ORMs (Sequel/DataMapper/ActiveRecord), or
> is it something Sequel-specific?

I'm not sure if DM or AR require a database connection first.  I doubt
DM would because it doesn't query anything from the database (you
specify the properties in the models).  I'm not sure about AR, but it
wouldn't surprise me.

> (I don’t plan to switch ORMs, I’m asking out of curiosity. I’m also
> considering dropping models altogether and just using other Sequel
> features instead, as I have only two core tables in my database and
> don’t foresee more for quite some time, so if the price to have the
> models specable is to have ugly hacks, then it might be more elegant
> to just go without models.)

Is there a reason you can't just require the model code after setting
up your database connection?  That's what I do in all of my apps and
specs.

To answer your question directly, models aren't required, though I
think they make writing most apps easier.

> > You can set which database to use by default:
> >   Sequel::Model.db = ...
>
> Hm, that might be useful. Is there a way to redefine once-defined
> models? Something like an unset() call for classes, followed by
> re-evaluation of their source files, perhaps?

Model.db= and Model.dataset= can be called after the model is defined
to change the database/dataset it uses.  I don't recommend that
approach, though.  It certainly provides no advantage over just
setting them up correctly to begin with.

Also, the standard ruby way of Object.send(:remove_const, :Model)
followed by a redefinition of the model class should work.
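In plain Ruby terms (no Sequel needed to illustrate the mechanism):

```ruby
# Define a class, remove its constant, then re-evaluate the definition;
# the new class completely replaces the old one under that name.
class Foo
  def self.flavor; :original; end
end

Object.send(:remove_const, :Foo)

class Foo
  def self.flavor; :redefined; end
end

Foo.flavor  # => :redefined
```

Note that objects holding a reference to the old class keep pointing at it; only the constant lookup changes.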

> Hm, maybe something to the point of
>
>   def Signore.model_foo_redefiner(database)
>     Class.new(Sequel::Model(database[:foos])) do
>       many_to_many :bars
>       # …model definition…
>     end
>   end
>
>   def Signore.model_bar_redefiner(database)
>     Class.new(Sequel::Model(database[:bars])) do
>       many_to_many :foos
>       # …model definition…
>     end
>   end
>
>   # …some code that (re)sets the database…
>
>   Foo = Signore.model_foo_redefiner(database)
>   Bar = Signore.model_bar_redefiner(database)
>
> would work? Would the many_to_many calls work in this case?

You will probably have to pass the :class option to many_to_many to
point it at the correct anonymous subclass.  I'm not sure why you
would want to redefine the models to use different databases on the
fly, though.

> > I'd recommend just requiring your models after setting up your
> > database connections, and making sure each model uses the correct
> > database so you don't have to switch the databases afterward.
>
> My problem isn’t with the way the final app runs (this will surely run
> off of a single db), my problem is with my specs – I want some of them
> to work against an example file-based database, and some against fresh,
> in-memory db (and maybe some others testing some edge-case databases).

Ah, that makes more sense.  If you really have to do this, I'd
separate your specs and run them separately.  Alternatively, try to
get them all to run out of the same database.  Is there a requirement
that you use separate databases when testing?

My general testing philosophy is to have the database empty before and
after each spec, using a separate transaction for each spec. This
ensures that specs don't depend on each other.

> (I already hit an issue with Sequel::Migrator only migrating the first
> database Sequel connected to, but it turned out to be a class naming
> problem combined with module scope, and I overcame it by using anonymous
> classes for migrations.)

Using anonymous classes for migrations is my recommendation, and
that's what the schema dumper extension creates.

> > If you are thinking about switching a model's database during the
> > running of the app, I advise you to reconsider. If you really want
> > to do that, create a subclass for each model for each database.
>
> Hm. I’d rather not make my code more complicated (especially this much)
> just for the sake of it being RSpec-testable; I doubt that’s one of the
> cases where testability improves the app, especially if the app itself
> will always use just a single database. :]

Certainly not.  From the sounds of it, you shouldn't be making changes
to your model classes, but changes to your specs.

> I know I ask for a lot (and do feel free to skip this request), but is
> there a sane (and simple) solution for my case? In other words – how
> should I spec stuff that uses models, but also uses different databases
> for different specs? (The databases have the same schemas, they just
> differ when it comes to the data they store.)

As mentioned above, if at all possible use a single database for
specs.  If not, try to split the specs up and run them separately.
Alternatively, you should be able to do Model.db= to reset the
database for the models.

> > If you find the Amalgalite adapter is a bit slow, you may want to try
> > the SQLite adapter. I know that the SQLite adapter is much faster when
> > running the specs.
>
> Thanks for the tip! I try to limit the dependencies of my final app
> (especially as it’s supposed to be small/simple), and I got the
> notion that Amalgalite is a ‘lighter’ dependency than requiring the
> existence of a full SQLite install. But I think I should support both
> in the end, so I should just connect via what’s on a given system and
> use that (and then could simply prefer SQLite to speed up the testing).
>
> Ah, one more question, if I may – it seems using Amalgalite (haven’t
> tried SQLite) to simply read a database changes the database’s file.
> I tried to make the file read-only, but then I hit a Ruby segmentation
> fault in amalgalite-0.10.0/lib/amalgalite/statement.rb’s line 79.

Amalgalite 0.10 doesn't seem as stable as 0.9 or 0.8.  I get
segmentation faults when running the integration tests with Amalgalite
0.10 (and didn't get them on 0.8 or 0.9), so there may be a bug in the
extension.

> I can only assume that it’s a bug in Amalgalite 0.10.0 (or its
> combination with Ruby 1.9.1-p129), but before I report it there
> I’d rather get a clarification: neither Amalgalite nor SQLite should
> break when accessing a read-only (on the filesystem level) database,
> especially if it’s done just for reading, right? (They obviously
> shouldn’t segfault, but I’m asking whether SQLite requires something
> to change the file on every read, or should SELECTs from a read-only
> file ‘just work’.)

It definitely sounds like an extension bug, though it could possibly
be an interpreter bug that the extension triggers.

> Once again: thanks a ton for your great work on Sequel and the
> prompt reply – and apologies for the longish ramblings above!

You're welcome.  Have fun on your honeymoon!

Jeremy
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.