Here's a link to the docs for rskeymgmt, a command-line utility for changing
the key used to access the catalog:
http://msdn2.microsoft.com/en-us/library/aa179504(SQL.80).aspx
You might also need the rsactivate and rsconfig utilities to get
everything working.
Scott Marlowe wrote:
On Sun, Oct 5, 2008 at 7:48 PM, Sean Davis [EMAIL PROTECTED] wrote:
I am looking at the prospect of building a data warehouse of genomic
sequence data. The machine that produces the data adds about
300 million rows per month to a central fact table, and we will
generally want the data to be online. We don't need instantaneous
queries, but we would be using the
</gr-replace>
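At roughly 300 million rows a month, range partitioning of the fact table by load month is one approach often raised for this kind of volume. A minimal sketch, with purely illustrative table and column names; the syntax below is declarative partitioning (PostgreSQL 10+), whereas installations from this thread's era would have used inheritance-based partitioning instead:

```sql
-- Hypothetical sketch only: names and columns are assumptions,
-- not from the original thread.
CREATE TABLE fact_sequence (
    logged_time timestamp NOT NULL,
    read_id     bigint,
    payload     text
) PARTITION BY RANGE (logged_time);

-- one partition per month keeps each month's ~300 million rows
-- in its own table, so old months can be detached or archived
CREATE TABLE fact_sequence_2008_10 PARTITION OF fact_sequence
    FOR VALUES FROM ('2008-10-01') TO ('2008-11-01');
```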
I am on a Linux platform but I'm going to need some pointers regarding
the cron job. Are you suggesting that I parse the dump file? I assume I
would need to switch to using inserts and then parse the dump looking
for where I need to start from?
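One alternative to parsing the dump text itself, sketched here under assumptions (the table and column names staging_logs, logs, and logged_time are invented for illustration): restore the whole dump into a staging table on the reports server, then let SQL append only the rows not yet present.

```sql
-- Sketch: after loading the full dump into staging_logs (via psql or
-- pg_restore), append only rows newer than what the reports table holds.
INSERT INTO logs
SELECT s.*
FROM staging_logs AS s
WHERE s.logged_time >
      (SELECT coalesce(max(logged_time), '-infinity') FROM logs);

-- clear the staging area for the next run
TRUNCATE staging_logs;
```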
Something that you may want to consider is dblink.
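With dblink, the reports server can pull rows straight from the cluster without an intermediate dump file. A minimal sketch with an assumed connection string and an assumed logs(id, logged_time) layout; it requires the dblink contrib module/extension installed on the reports side:

```sql
-- Sketch, run on the reports server. Connection parameters, table name,
-- and the cutoff timestamp are assumptions for illustration.
SELECT *
FROM dblink('host=cluster-host dbname=logdb',
            'SELECT id, logged_time FROM logs
             WHERE logged_time > ''2007-09-01''')
     AS t(id integer, logged_time timestamp);
```

Because dblink returns a generic record set, the AS clause restating the column types is required.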
On 9/3/07, Rob Kirkbride [EMAIL PROTECTED] wrote:
Hi,
I've got a postgres database collecting logged data. This data I have to keep
for at least 3 years. The data in the first instance is being recorded in a
postgres cluster. This then needs to be moved to a reports database server for
analysis. Therefore I'd like a job to dump data on the cluster
On 9/3/07, Rob Kirkbride [EMAIL PROTECTED] wrote:
So basically I need a dump/restore that only appends new
data to the reports server database.
I guess that will all depend on whether your
data has a record of the time it got stuck in the cluster
or not ... if there's no concept of a
On 9/3/07, Rob Kirkbride [EMAIL PROTECTED] wrote:
We're using hibernate to write to the database. Partitioning looks like it
will be too much of a re-architecture. In reply to Andrej, we do have a
logged_time entity in the required tables. That being the case, how does that
help me with the
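With a logged_time column in place, one common pattern is a cron job that exports only rows newer than the previous run and appends them on the reports server. A hedged sketch with assumed names; in practice the last-sync value would be kept in a small state table or file that the job updates after each successful load:

```sql
-- On the cluster, run from cron via psql. The literal timestamp stands in
-- for the stored last-sync value; COPY (query) TO STDOUT needs 8.2+.
COPY (
    SELECT * FROM logs
    WHERE logged_time > '2007-09-01 00:00:00'
) TO STDOUT;

-- On the reports server, the exported rows are appended with:
-- COPY logs FROM STDIN;
```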
Title: Data Warehousing and PostgreSQL
Hello!
I am evaluating PostgreSQL as a database server (with Linux) for a Data Warehousing project and wondered if you have any experience with
a similar task. Some of my questions would be whether it's capable of supporting A LOT of heavy queries and big