I'm missing most of this thread, but it seems like you want to do a table dump of your SQLite databases. If there's a way to do a table dump of sorts, then I would write a script that takes each database, renames each table with a database-specific prefix, and writes its contents into a new database that you can query until the cows come home.
It might make a non-SQL scripting person's life easier if you used the SQLite ODBC driver and connected to your databases using a tool like Access, but that implies a certain level of expertise either way.
So your two approaches might be:
1)
Look in SQLite's docs to see if there's a way to export the data into a sane format.
If there is, write a script that parses the exported data and computes a primary key based on the environment name (or some unique variable you define).
Run an ALTER TABLE on all your tables to add a new column that tells you which environment the data came from.
Have your script do a UNION query or some sort of INSERT that concatenates all the data together into one uber table.
That's a crappy explanation, so let me give an example.
I have env foo and env bar.
foo has a specific trac.db
bar has a different trac.db
Create a new database called ubertrac.db.
Export the data from foo's trac.db into ubertrac.db.
Alter the tables so that a new column is added to all of them; call it InstanceID.
Update the data in ubertrac.db so that InstanceID = foo.
Repeat the exercise for bar, but this time update your exported data so that InstanceID = bar.
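The steps above can be sketched with the sqlite3 command-line shell. This is only a sketch: the `ticket` table and its columns are simplified stand-ins for Trac's real schema, and the foo/bar paths are made up. SQLite's ATTACH command lets the copy-and-tag happen in one pass per instance, instead of an export/edit/import round trip:

```shell
# Toy setup: two per-environment databases standing in for real Trac data.
mkdir -p foo bar
sqlite3 foo/trac.db "CREATE TABLE ticket (id INTEGER, summary TEXT);
                     INSERT INTO ticket VALUES (1, 'foo bug');"
sqlite3 bar/trac.db "CREATE TABLE ticket (id INTEGER, summary TEXT);
                     INSERT INTO ticket VALUES (1, 'bar bug');"

# The merged database carries the extra InstanceID column.
sqlite3 ubertrac.db "CREATE TABLE ticket (id INTEGER, summary TEXT, InstanceID TEXT);"

# For each environment, attach its trac.db and copy the rows,
# tagging each row with the environment name on the way in.
for env in foo bar; do
  sqlite3 ubertrac.db "ATTACH '$env/trac.db' AS src;
    INSERT INTO ticket (id, summary, InstanceID)
      SELECT id, summary, '$env' FROM src.ticket;"
done
```

You'd repeat the INSERT...SELECT for each of Trac's tables, but the pattern is the same.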
========
So that seems like a real pain.
An easier way would be to see if you can connect to your databases with Access. SQLite has an ODBC driver, so you should be able to do this provided you can get at the files from a Windows machine. Then you just create a single Access database and import the tables from your multiple Trac instances. To create some of your aggregate reports you'll still either need to add an InstanceID column or write a lot of clever union queries, but that should give you the ability to run reports across your whole system. Because trac.db is a file-based database (not unlike Access), you need to be careful about locking up the database; I would recommend exporting copies of the files and doing your analysis away from the live data.
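If you'd rather not alter the tables at all, the "clever union query" route can tag rows with a literal instance name at query time. Again just a sketch with made-up table and column names; the real Trac schema has many more columns:

```shell
# Stand-ins for two per-instance Trac databases (the real schema is richer).
sqlite3 foo.db "CREATE TABLE ticket (id INTEGER, status TEXT);
                INSERT INTO ticket VALUES (1, 'new');
                INSERT INTO ticket VALUES (2, 'closed');"
sqlite3 bar.db "CREATE TABLE ticket (id INTEGER, status TEXT);
                INSERT INTO ticket VALUES (1, 'new');"

# Tag each row with its instance name at query time -- no ALTER TABLE needed.
sqlite3 foo.db "ATTACH 'bar.db' AS bar;
  SELECT 'foo' AS InstanceID, id, status FROM ticket
  UNION ALL
  SELECT 'bar', id, status FROM bar.ticket;"
```

The UNION ALL keeps duplicate rows from different instances, which is what you want here.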
======
Thank you for listening to my hackneyed ideas.
Vincent
David Abrahams <[EMAIL PROTECTED]> wrote:
John Hampton <[EMAIL PROTECTED]> writes:
> David Abrahams wrote:
>> And if I have existing Tracs in databases, can I move them into
>> PostgreSQL schemas?
>
> Depends on how good your SQL-foo is.
How shall I put this?... I think "nonexistent" would probably only be
a slight exaggeration.
> If they are already in PostgreSQL, it's fairly trivial.
I'm afraid I'm using SQLite.
> I'd probably suggest doing a pg_dump into a plain text format. Then
> you can edit the dump file and add something like the following
> (untested):
>
> create schema bob;
> set search_path to bob;
>
> at the top, and that should do it.
Hmm, do what?
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
_______________________________________________
Trac mailing list
[email protected]
http://lists.edgewall.com/mailman/listinfo/trac
