Hi,
If you don't care too much what the data looks like - just run an UPDATE once and
replace it with some random strings. If you do, however, and you still want
telephone numbers to look like telephone numbers, what I can suggest is to use a
data generator like this one:
http://www.generatedata.com
That's why a data masking tool is required.
-Original Message-
From: "Jigal van Hemert"
Sent: Wednesday, 16 April, 2014 11:56am
To: [email protected]
Subject: Re: Data masking for mysql
Hi,
On 15-4-2014 18:42, Peter Brawley wrote:
On 2014-04-15 5:37 AM, [email protected] wrote:
It can be done by the data masking tool itself. It's a one-time activity; I
do not need it again & again.
Rilly? If that's so, the data will never be accessed.
I'm starting to think that a concept ha
-Original Message-
From: "Reindl Harald"
Sent: Tuesday, 15 April, 2014 2:49pm
To: [email protected]
Subject: Re: Data masking for mysql
Am 15.04.2014 11:08, schrieb [email protected]:
Yes, we can do it at application level and database level as well.
for example mobile no. is 987841587
Hi,
On 15-4-2014 12:36, [email protected] wrote:
Actually data masking is a one time activity, so I need a data masking tool.
I do not need it again & again.
So you basically want to replace the data with modified data. You can do
that with an update query [1]. There are all kinds of functions
to do this - that's what
sysadmins get paid for
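The update-query approach suggested above can be sketched as follows - a minimal example using SQLite in memory as a stand-in for MySQL (table and column names are made up; the MySQL equivalent of the masking UPDATE is shown in a comment):

```python
import sqlite3

# Hypothetical table with a sensitive phone column; SQLite stands in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, phone TEXT)")
conn.executemany("INSERT INTO customers (phone) VALUES (?)",
                 [("9878415877",), ("9123456780",)])

# Mask the first six digits, keeping the last four visible.
# MySQL equivalent: UPDATE customers
#                   SET phone = CONCAT('XXXXXX', RIGHT(phone, 4));
conn.execute("UPDATE customers SET phone = 'XXXXXX' || substr(phone, -4)")

masked = [row[0] for row in
          conn.execute("SELECT phone FROM customers ORDER BY id")]
print(masked)  # ['XXXXXX5877', 'XXXXXX6780']
```

One UPDATE per sensitive column is all the "tool" a one-time masking job needs.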
how do you expect a ready-made tool to match exactly a single one-time need?
-Original Message-
From: "Jigal van Hemert"
Sent: Tuesday, 15 April, 2014 3:43pm
To: [email protected]
Subject: Re: Data masking for mysql
Hi,
On 15-4-2014 11
It can be done by the data masking tool itself. It's a one-time activity; I do not
need it again & again. Please suggest a data masking tool link.
-Original Message-
From: "Reindl Harald"
Sent: Tuesday, 15 April, 2014 2:49pm
To: [email protected]
Subject: Re: Data masking for mysql
Hi,
On 15-4-2014 11:03, [email protected] wrote:
The main reason for applying masking to a data field is to protect
data from external exposure. For example, mobile no. is 9878415877;
digits can be shuffled (8987148577) or replaced with another
letter/number (first 6 digits replaced with X-- x
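The two transformations described - shuffling the digits, or replacing all but the last few with X - are easy to script. A sketch in Python (the function names are my own):

```python
import random

def shuffle_digits(number: str, seed: int = 0) -> str:
    # Shuffle the digits of the number; seeded so the result is repeatable.
    digits = list(number)
    random.Random(seed).shuffle(digits)
    return "".join(digits)

def x_out(number: str, keep_last: int = 4) -> str:
    # Replace everything except the last `keep_last` digits with 'X'.
    return "X" * (len(number) - keep_last) + number[-keep_last:]

print(x_out("9878415877"))  # XXXXXX5877
print(shuffle_digits("9878415877"))  # same digits, different order
```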
[email protected]
Cc: "[email protected]"
Subject: Re: Data masking for mysql
data
masking technique to protect our sensitive data from external exposure.
I need a tool which will mask data in existing mysql db.
-Original Message-
From: "Reindl Harald"
Sent: Tuesday, 15 April, 2014 1:51pm
To: [email protected]
Subject: Re: Data masking for mysql
Am
2014-04-15 8:52 GMT+02:00 :
> Hi,
>
> I need to do data masking to sensitive data exists in mysql db. is there
> any data masking tool available for mysql with linux platform.
> if yes... please provide the links.
> else... please suggest other alternatives for this requirement.
>
> I look forward
Hello Zachary,
On 2/26/2013 4:42 PM, Zachary Stern wrote:
Any idea what can cause this? Can it be misconfiguration? Could it be
because I'm messing with MySQL's memory usage so much, or the thread
settings?
My config is linked at the bottom, for reference. I might be doing
something terribly dumb.
Am 26.02.2013 22:16, schrieb Zachary Stern:
> It's InnoDB but it's not just about the number of rows.
>
> The data literally isn't there. It's like rows at the end are being
> dropped. We have a frontend that queries and shows the data, but it ends up
> missing results.
>
> I've asked the devs
Sent: Tuesday, February 26, 2013 5:11 PM
To: [email protected]
Subject: Re: data loss due to misconfiguration
Any idea what can cause this? Can it be misconfiguration? Could it be
because I'm messing with MySQL's memory usage so much, or the thread
settings?
My config is linked at the bottom, for reference. I might be doing
something terribly dumb.
The stuff I've done under "# * Fine Tuning" worries me t
" in row count as SHOW TABLE STATUS
> for InnoDB.
>
> > -Original Message-
> > From: Stillman, Benjamin [mailto:[email protected]]
> > Sent: Tuesday, February 26, 2013 11:04 AM
> > To: Zachary Stern; [email protected]
> > Subject: RE: dat
Are you actually querying the table (select count(*) from table_name), or just
the stats (show table status)? Is the table InnoDB?
If you're using InnoDB and aren't doing a select count (or other select query)
on the table, then yes you'll have varying results. This is because unlike
MyISAM, I
- Original Message -
> From: "Yu Watanabe"
>
> So, which memory corresponds to 'pages' for the MyISAM then?
> It would be helpful if you can help me with this.
None, as Reindl said. This is not a memory issue, it's a function of I/O
optimization. Records are stored in pages, and pages a
you should try to understand what pages are
your data + keys + fragmentation overhead of
deleted records are the size of the files
Hi Reindl.
Thanks for the reply.
So, which memory corresponds to 'pages' for the MyISAM then?
It would be helpful if you can help me with this.
Thanks,
Yu
key buffer is memory and has nothing to do with file sizes
filesize increases by data and keys
key buffer is, as the name says, a memory-buffer for keys
Hello Johan.
Thank you for the reply.
I see. So it will depend on the key buffer size.
Thanks,
Yu
Johan De Meersman wrote:
>- Original Message -
>> From: "Yu Watanabe"
>>
>> It seems that MYD is the data file but this file size seems to be not
>> increasing after the insert sql.
At 02:45 AM 11/23/2011, you wrote:
Claudio,
I have been using the bl
Also,
since MySQL 5.1 MyISAM has an algorythm to detect if you are going to
delete a row without ever reading it,
so when you insert it, it will use the blackhole storage engine instead.
:O (NB: it is a joke)
Claudio
- Original Message -
> From: "Yu Watanabe"
>
> It seems that MYD is the data file but this file size seems to be not
> increasing after the insert sql.
That's right, it's an L-space based engine; all the data that has, is and will
ever be created is already in there, so storage never increases
the benefit.
I stand corrected. Still, as you've noticed, don't change the design of an
existing application without thoroughly testing the consequences :-p
- Original Message -
> From: [email protected]
> To: [email protected]
> Sent: Tuesday, 14 June, 2011 7:34
On Jun 7, 2011, at 10:43 PM, Johan De Meersman wrote:
> Where did you find the advice about setting columns NOT NULL?
It took me awhile, but I just found it again, in case anyone is
interested:
http://dev.mysql.com/doc/refman/5.0/en/data-size.html
7.8.1. Make Your Data as Small as Possible
- Original Message -
> From: [email protected]
>
> Yes. That's all I did.
If that's all you did, you indeed 'removed the default NULL' but did not
specify another default. Hence, if you don't explicitly specify a value in your
insert statement, the insert can not happen as the server doesn't know what to
put there and is explicitly disallowed from leaving the
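Johan's point can be reproduced in miniature with SQLite in memory as a stand-in: a column declared NOT NULL with no default leaves the server nothing to put there, so an INSERT that omits the column is rejected. (Note that MySQL's actual behaviour depends on sql_mode - strict mode errors as below, while non-strict mode may insert an implicit default with a warning.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NOT NULL, and no default specified for `name`.
conn.execute("CREATE TABLE t (name TEXT NOT NULL, note TEXT)")

try:
    # Omits `name`: nothing to put there, and NULL is disallowed.
    conn.execute("INSERT INTO t (note) VALUES ('no name given')")
    failed = False
except sqlite3.IntegrityError:
    failed = True  # "NOT NULL constraint failed: t.name"

print(failed)  # True
```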
Instead of getting info drop-by-drop, you might want to share the output of
SHOW CREATE TABLE..., but my
On Jun 6, 2011, at 10:06 PM, Johan De Meersman wrote:
> What exactly do you mean by "removing the NULL default"? Did you set your
> columns NOT NULL?
Yes. That's all I did.
Marc
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://list
- Original Message -
> From: [email protected]
>
> description? Why would removing the NULL default cause data to be
> lost?
What exactly do you mean by "removing the NULL default"? Did you set your
columns NOT NULL?
--
Bier met grenadyn
Is als mosterd by den wyn
Sy d
"Hal Vaughan" wrote:
> I'm having the strangest issue. I am using a Perl program to test out some
> other Perl programs and all the Perl connections with MySQL are "normal", as
> in I use the standard interface. But in the test program I'm just using this:
>
[... cut ...]
Transaction isolati
My very first thought is to disable the constraint before the import and
re-enable it after that.
One way could be to set the foreign key checks to false, or to alter the
constraint and remove the 'cascade delete' part.
It's just a quick brainstorm, please verify the goodness of it, I still
need to get my
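The disable-then-re-enable idea sketched with Python, using SQLite's PRAGMA foreign_keys in place of MySQL's SET foreign_key_checks = 0 (table names are invented; autocommit mode is used because SQLite ignores the pragma inside a transaction):

```python
import sqlite3

# isolation_level=None -> autocommit, so the PRAGMA takes effect immediately.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE child (id INTEGER PRIMARY KEY, "
             "parent_id INTEGER REFERENCES parent(id))")

# Importing a child before its parent fails while checks are on...
try:
    conn.execute("INSERT INTO child VALUES (1, 42)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

# ...so switch checks off for the import, then back on afterwards.
conn.execute("PRAGMA foreign_keys = OFF")
conn.execute("INSERT INTO child VALUES (1, 42)")
conn.execute("INSERT INTO parent VALUES (42)")  # parent arrives later
conn.execute("PRAGMA foreign_keys = ON")

print(rejected)  # True
```

As the poster says, verify the data afterwards: nothing re-checks rows inserted while the constraint was off.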
You can use this structure with MyISAM tables. It will work fine
except you won't have the advantage of database-level enforcement of
foreign key constraints--do it with code.
Or use InnoDB tables (enable/load the innobase plugin.)
ChoiSaehoon wrote:
Thanks, PB.
Date: Fri, 27 Mar 2009 09:54:47 -0500
From: [email protected]
To: [email protected]
CC: [email protected]
Subject: Re: Data structure for matching for company data
Choi
Thanks for the resource! Arthur.
SIC seems to be great for most industries, but not for high-tech industries.
(e.g. it doesn't have Internet or software etc) Still a great tip, though.
Thanks again! :)
Date: Fri, 27 Mar 2009 19:13:51 -0500
Subject: Re: Data structure for matching for company data
My esteemed friend, partner and co-author has laid it out perfectly for you.
Just follow the instructions table-wise.
One thing that may not be obvious from Peter's prescription is that you need
to enter a bunch of rows into the industry table first, so that the foreign
keys will make sense in the
Choi
1. company (3 cols) - company id(pk), company name
2. industry (3 cols) - industry id(pk), industry, sub-industry
3. matching table (3 cols?) - match id(pk), company id(fk), industry id(fk)...?
Yes, you've got it. In the matching (usually called "bridging") table,
any company or industr
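The three-table layout above can be sketched in SQLite (names are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE company  (company_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE industry (industry_id INTEGER PRIMARY KEY,
                       industry TEXT, sub_industry TEXT);
CREATE TABLE company_industry (            -- the bridging table
    match_id    INTEGER PRIMARY KEY,
    company_id  INTEGER REFERENCES company(company_id),
    industry_id INTEGER REFERENCES industry(industry_id)
);
-- Arthur's point: fill the industry table first so the FKs make sense.
INSERT INTO industry VALUES (1, 'Technology', 'Software');
INSERT INTO company  VALUES (1, 'Acme Corp');
INSERT INTO company_industry VALUES (1, 1, 1);
""")

rows = conn.execute("""
    SELECT c.name, i.industry, i.sub_industry
    FROM company_industry m
    JOIN company  c ON c.company_id  = m.company_id
    JOIN industry i ON i.industry_id = m.industry_id
""").fetchall()
print(rows)  # [('Acme Corp', 'Technology', 'Software')]
```

A company in several industries (or an industry with several companies) is just extra rows in the bridging table.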
Hi John,
Actually, after doing root cause analysis, I found where the problem is.
mysql-5.1.30 (server C) runs replication in two modes, namely STRICT and
IDEMPOTENT. Both of these modes catch the problem.
I believe replication has been enhanced in mysql version 5.1.30. Whenever
any update is
I think maybe in the default sql_mode 5.0 is more forgiving when it comes
to accepting invalid values, quietly converting them to the nearest
acceptable value and giving a warning, whereas 5.1 gives an error.
Personally I would rather have the data rejected and an error returned,
because if MySQL i
Yes, sql_mode is blank on all server A, B, C
On Wed, Jan 21, 2009 at 8:40 PM, John Daisley <
[email protected]> wrote:
Is the sql_mode set the same on A/B/C?
Why are A and B letting you cram NULL into a column declared NOT NULL?
Are your schemas consistent on A/B/C?
Perhaps 5.0.32 does not enforce NOT NULL properly?
Some tweak to config may change this?
I don't know the answer, but with a bit of research in this direction, you
should be ther
ERROR: not null column cannot be updated with null value. This error is
caught by server C mysql 5.1.30 but not by server B mysql 5.0.32.
In production we have three servers.
A -> B -> C
A is replicating to B. B is replicating to C.
A mysql-5.0.32 (Write)
B
What error is shown by 'show slave status\G' on server C after you
issue that query?
There's all sorts of things that could break replication...
On Tue, Jan 20, 2009 at 7:21 AM, Krishna Chandra Prajapati
wrote:
> Would data files from 4.1.13 work with 5.0.x or will I have to use an SQL
> dump?
Well, not to worry, I managed to start 4.1.13 and got an SQL dump. Cheers.
--
Richard Heyes
http://www.phpguru.org
Thanks a lot
On Tue, May 13, 2008 at 4:30 PM, Ben Clewett <[EMAIL PROTECTED]> wrote:
Table level locking is inherent to MyISAM.
Look into partitioning, as this breaks the table into two or more other
tables which will lock separately.
Or use InnoDB:
ALTER TABLE ... ENGINE=InnoDB; (I think)
Ben
Hi,
I am looking for a solution to reduce the table locks, as mytop shows that
the table gets locked very frequently during report generation.
Thanks,
Prajapati
On Tue, May 13, 2008 at 1:10 PM, Ananda Kumar <[EMAIL PROTECTED]> wrote:
Hi,
MyISAM is being used on the production server. It applies table level locking.
From mytop, I see that the table gets locked very frequently for 5 to
10 seconds. Reports are generated every day, so it scans billions of rows (
1 year's data). Changing to InnoDB will be done soon, and optimising q
If you use InnoDB you should not have a problem, as it uses row-level
locking and isolated transactions.
Other than that you can split your tables into smaller ones using either
partitioning or the federated engine...
Ben
What is the reason for creating main_dup? If you're thinking of taking a
backup of all the changes from the main table, then the trigger will also have
to wait till the locks on the main table are released.
This trigger would add another feather to the lock/load on the machine.
regards
anandkl
DATA INFILE statement; => no truncation warnings
c) ALTER TABLE tmp CONVERT TO CHARACTER SET UTF8;
Thanks again and a nice weekend.
Cor
- Original Message -
From: "Chris W" <[EMAIL PROTECTED]>
To: "MYSQL General List"
Sent: Friday, April 18, 2008 8:38 PM
, and stop and start mysql, the errors remain.
The Data.txt file (from an external source) looks okay with Wordpad.
TIA, Cor
- Original Message - From: "Jerry Schwartz"
<[EMAIL PROTECTED]>
To: "'C.R.Vegelin'" <[EMAIL PROTECTED]>;
Sent: Friday
Sent: Friday, April 18, 2008 2:30 PM
Subject: RE: data truncation warnings by special characters
>-Original Message-
>From: C.R.Vegelin [mailto:[EMAIL PROTECTED]
>Sent: Friday, April 18, 2008 8:42 AM
>To: [email protected]
>Subject: data truncation warnings by special characters
>
>Hi List,
>
>I get strange "Data truncated for column Description" warnings
>when loading a tab separa
Development server has multiple databases.
On Fri, Apr 11, 2008 at 3:07 PM, Ananda Kumar <[EMAIL PROTECTED]> wrote:
HI
Option 1
If you are using PHP, you can do this very simply using a CRON task.
Add a field to record whether your first operation was successful (e.g.
update the field value to 1). Run a CRON job at a particular interval; if the
field is updated to 1, then call your second operation query and make t
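The flag pattern described above, sketched with Python and SQLite (the cron job is just a function call here; all table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, "
             "payload TEXT, first_done INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("a",), ("b",), ("c",)])

# The first operation succeeds for two rows and flags them with 1.
conn.execute("UPDATE jobs SET first_done = 1 WHERE id IN (1, 3)")

def second_operation(conn):
    # What the periodic cron job would run: handle only flagged rows,
    # then mark them as fully processed (2) so they aren't picked up again.
    picked = conn.execute(
        "SELECT id FROM jobs WHERE first_done = 1").fetchall()
    conn.execute("UPDATE jobs SET first_done = 2 WHERE first_done = 1")
    return [r[0] for r in picked]

processed = second_operation(conn)
print(processed)  # [1, 3]
```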
Does your development server have only one database or multiple databases?
regards
anandkl
On 4/11/08, Krishna Chandra Prajapati <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> What ever queries are executed on 5 mysql server with multiple database
> (more than one database on each mysql server). I have t
Hey Baron,
Your blog post was quite informative; your suggestion to use a
combination of merged MyISAM tables and InnoDB for the live partition
made a lot of sense, and it sounds like the path I'll need to follow.
I appreciate the information!
Hi,
I'll just address the things others didn't answer.
On Thu, Apr 3, 2008 at 2:28 PM, Dre <[EMAIL PROTECTED]> wrote:
> 1) Several sources seem to suggest MyISAM is a good choice for data
> warehousing, but due to my lack of experience in a transaction-less world,
One approach you might conside
MyISAM has the advantage of very fast loading. It's
I've built several datamarts using Perl and MySQL. The largest ones
have been up to about 30GB, so I'm not quite at your scale.
For #1, I have an etl_id in the fact table so I can trace back any
particular ETL job. I typically make it a dimension and include date,
time, software version, etc. That
Hi Martin,
You are correct. That's the same error that I got.
Looks like this article is the solution =>
http://dev.mysql.com/doc/refman/5.1/en/innodb-backup.html
Thanks !
Feris
Hi Rick,
Thanks... I think I found the answer from your direction. This article
seems the solution to my problem :
http://dev.mysql.com/doc/refman/5.1/en/innodb-backup.html
Thanks !
Feris
On 1/31/08, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> moving an innodb table is trickier than moving
Hi,
> > By default, InnoDB tables aren't stored in the database folder, but
rather
> > in it's own table space files.
> >
>
> In fact when I try to drop the database, the server "recognizes"
> innodb tables. For example, T1 and T2 are INNODB tables in database
> DB1. Then when I try to drop DB1, i
Hi Martin,
On 1/31/08, Martijn Tonies <[EMAIL PROTECTED]> wrote:
> Hi,
>
> By default, InnoDB tables aren't stored in the database folder, but rather
> in it's own table space files.
>
Hi,
> I have 2 database folder that being copied directly from a remote
> server and sent to me. That databases contains both MYISAM and INNODB
> tables.
>
> After I received the data, I try to restored it by copying that
> folders to my server. The problem is, only MYISAM tables are being
> recog
On a dual boot it should work okay. I've done a similar thing, by taking
the data folder from a Linux installation, copying it to a local windows
computer and using a local install (same version of course) to read it.
It worked fine. I would think the scenario is much the same as what
you're su
Look further into MySQL's partitioning.
Cheers
- Andrew
-Original Message-
From: Jochem van Dieten [mailto:[EMAIL PROTECTED]
Sent: Friday, 27 July 2007 6:44 PM
To: [email protected]
Subject: Re: Data Warehousing and MySQL vs PostgreSQL
On 7/26/07, Andrew Armstrong wrote:
> * Table 1: 8
On 7/26/07, Andrew Armstrong wrote:
> * Table 1: 80,000,000 rows - 9.5 GB
> * Table 2: 1,000,000,000 rows - 8.9 GB
> This is a generic star schema design for data warehousing.
> I have read that it is better if perhaps partitioning is implemented, where
> new data is added to a partiti
007 10:23 AM
To: Andrew Armstrong
Cc: 'Wallace Reis'; [email protected]
Subject: Re: Data Warehousing and MySQL vs PostgreSQL
Wallace is right, Data Warehousing shouldn't delete any data. MySQL
isn't as robust as, say, Oracle, for partitioning, so you need to fudge
things
I'm more concerned as to why inserts begin to slow down so much due to the
large table size.
-Original Message-
From: Wallace Reis [mailto:[EMAIL PROTECTED]
Sent: Friday, 27 July 2007 1:02 AM
To: Andrew Armstrong
Cc: [email protected]
Subject: Re: Data Warehousing and MySQL vs
On 7/26/07, Andrew Armstrong <[EMAIL PROTECTED]> wrote:
Do you have a suggestion as to how this should be implemented?
Data is aggregated over time and summary rows are created.
I think that you didn't design your DW correctly.
It should have just one very large table (the fact table).
Data should
Do you have a suggestion as to how this should be implemented?
Data is aggregated over time and summary rows are created.
-Original Message-
From: Wallace Reis [mailto:[EMAIL PROTECTED]
Sent: Thursday, 26 July 2007 8:43 PM
To: Andrew Armstrong
Cc: [email protected]
Subject: Re: Data
On 7/26/07, Andrew Armstrong <[EMAIL PROTECTED]> wrote:
Information is deleted from this DW as well, after every five minutes.
The data being recorded is time sensitive. As data ages, it may be deleted.
Groups of samples are aggregated into a summary/aggregation sample prior to
being deleted.
I
On Thu, 2007-07-26 at 18:37 +1000, Andrew Armstrong wrote:
> Hello,
>
>
>
> I am seeking information on best practices with regards to Data Warehousing
> and MySQL. I am considering moving to PostgreSQL.
> * Table 1: 80,000,000 rows - 9.5 GB
> * Table 2: 1,000,000,000 rows - 8.9 GB
Ju
Hi !
paulizaz wrote:
In Unix / Linux, you would generate similar plain-te
On 6/4/07 12:31 PM, "paulizaz" <[EMAIL PROTECTED]> wrote:
Then I think you will also have to write a reverse mi
I don't mean the whole thing.
Pick some output that your applications usually produce and see if you can
get the same results for both databases.
I am not saying that this is the only and best way, just in addition to the
mentioned sample approach.
If you want to know for sure you will have to wr
On Mon, June 4, 2007 9:31, paulizaz said:
>
> What do you mean by "same output" ?
Can you write a program to access both databases and have it check to see
if the data matches? A lot depends on how the structure changed. If the
new database rows have a one-to-one correspondence to the original
da
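One way to script such a check is to checksum corresponding result sets from both databases and compare the digests instead of eyeballing rows. A sketch assuming the rows correspond one-to-one and share an ordering key (two in-memory SQLite databases stand in for the old and new setups):

```python
import hashlib
import sqlite3

def table_digest(conn, query):
    # Hash every row of an ordered result set into one digest.
    h = hashlib.sha256()
    for row in conn.execute(query):
        h.update(repr(row).encode())
    return h.hexdigest()

old = sqlite3.connect(":memory:")
new = sqlite3.connect(":memory:")
for db in (old, new):
    db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
    db.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, "y")])

# ORDER BY a shared key so both sides hash rows in the same order.
same = (table_digest(old, "SELECT * FROM t ORDER BY id") ==
        table_digest(new, "SELECT * FROM t ORDER BY id"))
print(same)  # True
```

This covers all the data, not just a sample; if the new structure differs, the two queries must first map each side into a common column layout.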
What do you mean by "same output" ?
I have too much data to go through and check if all the data is the same.
This is my problem. Sampling would speed this up, but I need something more
accurate.
All data is important.
Besides the sample approach, output data (a set you would output on a live
system anyway) from both db setups and see if you can get the same output
from both
Olaf
On 6/1/07 10:35 AM, "paulizaz" <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> I have somebody creating a C# class to migrate data fr
You could write a little script that loops through the lines in your csv
file, makes the changes to the fields you need, and then inserts into the
database.
This gives you full control over the new table structure (order, types, etc.)
Olaf
On 5/31/07 12:02 AM, "David Scott" <[EMAIL PROTECTED]> wrote:
>
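The loop Olaf suggests can be sketched like this (file contents and column names are made up, and an in-memory buffer stands in for the csv file):

```python
import csv
import io
import sqlite3

# Stand-in for the real csv file; columns are hypothetical.
raw = io.StringIO("id,name,phone\n1, Alice ,9878415877\n2, Bob ,9123456780\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, "
             "name TEXT, phone TEXT)")

for rec in csv.DictReader(raw):
    # Full control over order, types and cleanup before each insert.
    conn.execute("INSERT INTO people VALUES (?, ?, ?)",
                 (int(rec["id"]), rec["name"].strip(), rec["phone"]))

people = conn.execute("SELECT name FROM people ORDER BY id").fetchall()
print(people)  # [('Alice',), ('Bob',)]
```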
On 5/15/07, Ratheesh K J <[EMAIL PROTECTED]> wrote:
Hello all,
I have a requirement of maintaining some secret information in the
database. And this information should not be visible/accessible to any other
person but the owner of the data.
Whilst I know that encryption/decryption is the soluti
encrypted data, since at the very least the programmer needs access to it so it can be presented to
the user.
- Original Message -
From: "Ratheesh K J" <[EMAIL PROTECTED]>
To:
Cc: "Chris" <[EMAIL PROTECTED]>
Sent: Tuesday, May 15, 2007 5:19 AM
Subject: