I had everything working quite nicely last fall. Then a hard drive failed on my 
hardware RAID5 setup, then the controller failed to notice that the drive had 
failed, and thus didn't bother bringing the hot spare on line. THEN a second 
drive failed. I had a backup of the database inside Postgres on a different 
machine; I did a quick dump of the data, then rebuilt the main server. When I 
went to transfer the database from the secondary machine back to the primary, 
the secondary Postgres had spontaneously combusted, and the data *inside* 
Postgres was irrecoverable. 

Which is a very long-winded way of saying I had to reconstruct my schema by 
hand (my notes weren't completely up to date). Looking at my data dump, I 
rebuilt the schema, and reloaded the data.

So now I'm getting a puzzling error when I try to access some of my tables: 
"ArgumentError: time out of range". I assume it's because the table has a 
timestamp column containing "0999-01-01 00:00:00", and Ruby's Time class can't 
handle a date that early. What's puzzling is that this was working just fine 
before the server meltdown. 
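(For what it's worth, here's a quick sanity check of my assumption. On the 1.8.x 
Rubies, Time is bounded by the platform's time_t range, so a year-999 value blows 
up, while DateTime has no such limit; Ruby 1.9.2+ lifted the Time restriction. 
Assuming a reasonably recent Ruby:)

```ruby
require 'date'

# DateTime uses the proleptic calendar, so a year-999 timestamp
# parses without complaint.
dt = DateTime.parse("0999-01-01 00:00:00")
puts dt.year   # 999

# On Ruby 1.8.x, the equivalent Time call is what raises
# "ArgumentError: time out of range":
#   Time.local(999, 1, 1)
```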

Either I changed something in my schema that is confusing Sequel into using 
"Time" for that column, or Sequel was updated since my previous installation, 
and the newer version is different from the older one in some way that's 
causing this. I could (well, I think I can) tell Sequel explicitly to use 
DateTime for this column, but I'm at least as curious to figure out why I 
didn't have to do that before. 
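(For reference, the knob I have in mind is Sequel's global setting for what class 
timestamp columns are returned as; if I'm reading the docs right, it's just:)

```ruby
require 'sequel'

# Have Sequel return DateTime instead of Time for timestamp columns,
# so pre-epoch dates like 0999-01-01 don't trip up Time.
Sequel.datetime_class = DateTime
```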

Can anybody provide me a clue or two?



-- 
You received this message because you are subscribed to the Google Groups 
"sequel-talk" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/sequel-talk?hl=en.
