Re: Add record number to timestamped router data to facilitate cross join

2010-10-05 Thread Jake Peavy
On Fri, Oct 1, 2010 at 12:24 PM, Jake Peavy djstu...@gmail.com wrote:

 All,

 I have a number of routers which report in with various stats
 periodically.  This period is not a regular interval and can drift based on
 other factors.  Each router drifts independently.  The stats the routers
 provide need to be analyzed in terms of deltas between reports (rather than
 the absolute number).  Therefore I need to perform a cross join to compare
 the rows for a given routerID (something like SELECT r1.timestamp,
 r2.counter1-r1.counter1 FROM router_data as r1 JOIN router_data as r2 ON
 (r2.routerID = r1.routerID AND r2.recordNum = r1.recordNum + 1))

 Here's an example of some raw data to give you an idea showing 3 records
 each from 2 devices:

 +--+--+--+--+--+
 | routerID | timestamp| counter1 | counter2 | counter3 |
 +--+--+--+--+--+
 |1 | 24/08/2010 10:36 | 40   | 55   | 70   |
 |2 | 24/08/2010 10:51 | 31   | 79   | 29   |
 |2 | 24/08/2010 12:19 | 94   | 61   | 64   |
 |1 | 24/08/2010 12:41 | 4| 84   | 82   |
 |1 | 24/08/2010 14:58 | 26   | 9| 62   |
 |2 | 24/08/2010 14:51 | 36   | 75   | 31   |
 +--+--+--+--+--+

 My plan, to facilitate the cross join, was to add a per-device record
 number as follows:


 +--+--+---+--+--+--+
 | routerID | timestamp| recordNum | counter1 | counter2 | counter3 |
 +--+--+---+--+--+--+
 |1 | 24/08/2010 10:36 | 1 | 40   | 55   | 70   |
 |2 | 24/08/2010 10:51 | 1 | 31   | 79   | 29   |
 |2 | 24/08/2010 12:19 | 2 | 94   | 61   | 64   |
 |1 | 24/08/2010 12:41 | 2 | 4| 84   | 82   |
 |1 | 24/08/2010 14:58 | 3 | 26   | 9| 62   |
 |2 | 24/08/2010 14:51 | 3 | 36   | 75   | 31   |
 +--+--+---+--+--+--+

 So here are my questions.  First, of course: is there a better way to perform
 the cross join?  If not, what's the easiest way to add and increment the
 recordNum field?  Can I do it directly in SQL somehow?  Or do I need to do
 it in my parser?  If I do it in my parser, it runs periodically (as it
 receives the reports) so I think it would need to figure out what record
 number it assigned to which device last so it would know where to restart
 the numbering.  Should I hold that in the parser itself, or a separate table
 (SELECT routerID,last_used_record_num FROM last_used_record_nums; then
 parse, incrementing record num, then write the last ones back to that table)
 or by querying the main table itself (SELECT routerID, MAX(recordNum) FROM
 router_data GROUP BY routerID)?  My only concern with the last approach is
 that router_data is going to get very large and that query may get very
 slow.

 TIA for any advice,



Hey all,

Anyone have any thoughts/advice on the best way to manage this record
number?  What about a stored procedure?  Or use of variables?
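For reference, a session-variables sketch (untested, and it assumes the
router_data schema from the original post; MySQL does not guarantee
left-to-right evaluation of user-variable assignments within a SELECT, so
verify the results before relying on this):

```sql
-- Sketch: compute per-router deltas with session variables,
-- no recordNum column needed.
SET @prev_router := NULL, @prev_c1 := NULL;

SELECT routerID, timestamp,
       -- delta vs. previous row for the same router, NULL on router change
       IF(routerID = @prev_router, counter1 - @prev_c1, NULL) AS delta_c1,
       @prev_c1 := counter1 AS cur_c1,
       @prev_router := routerID AS cur_router
FROM (SELECT routerID, timestamp, counter1
      FROM router_data
      ORDER BY routerID, timestamp) AS sorted;
```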

TIA

-- 
-jp

If you're traveling in a time machine, and you're eating corn on the cob, I
don't think it's going to affect things one way or the other. But here's the
point I'm trying to make: Corn on the cob is good, isn't it?

deepthoughtsbyjackhandey.com


Add record number to timestamped router data to facilitate cross join

2010-10-01 Thread Jake Peavy
All,

I have a number of routers which report in with various stats
periodically.  This period is not a regular interval and can drift based on
other factors.  Each router drifts independently.  The stats the routers
provide need to be analyzed in terms of deltas between reports (rather than
the absolute number).  Therefore I need to perform a cross join to compare
the rows for a given routerID (something like SELECT r1.timestamp,
r2.counter1-r1.counter1 FROM router_data as r1 JOIN router_data as r2 ON
(r2.routerID = r1.routerID AND r2.recordNum = r1.recordNum + 1))

Here's an example of some raw data to give you an idea showing 3 records
each from 2 devices:

+--+--+--+--+--+
| routerID | timestamp| counter1 | counter2 | counter3 |
+--+--+--+--+--+
|1 | 24/08/2010 10:36 | 40   | 55   | 70   |
|2 | 24/08/2010 10:51 | 31   | 79   | 29   |
|2 | 24/08/2010 12:19 | 94   | 61   | 64   |
|1 | 24/08/2010 12:41 | 4| 84   | 82   |
|1 | 24/08/2010 14:58 | 26   | 9| 62   |
|2 | 24/08/2010 14:51 | 36   | 75   | 31   |
+--+--+--+--+--+

My plan, to facilitate the cross join, was to add a per-device record
number as follows:

+--+--+---+--+--+--+
| routerID | timestamp| recordNum | counter1 | counter2 | counter3 |
+--+--+---+--+--+--+
|1 | 24/08/2010 10:36 | 1 | 40   | 55   | 70   |
|2 | 24/08/2010 10:51 | 1 | 31   | 79   | 29   |
|2 | 24/08/2010 12:19 | 2 | 94   | 61   | 64   |
|1 | 24/08/2010 12:41 | 2 | 4| 84   | 82   |
|1 | 24/08/2010 14:58 | 3 | 26   | 9| 62   |
|2 | 24/08/2010 14:51 | 3 | 36   | 75   | 31   |
+--+--+---+--+--+--+

So here are my questions.  First, of course: is there a better way to perform
the cross join?  If not, what's the easiest way to add and increment the
recordNum field?  Can I do it directly in SQL somehow?  Or do I need to do
it in my parser?  If I do it in my parser, it runs periodically (as it
receives the reports) so I think it would need to figure out what record
number it assigned to which device last so it would know where to restart
the numbering.  Should I hold that in the parser itself, or a separate table
(SELECT routerID,last_used_record_num FROM last_used_record_nums; then
parse, incrementing record num, then write the last ones back to that table)
or by querying the main table itself (SELECT routerID, MAX(recordNum) FROM
router_data GROUP BY routerID)?  My only concern with the last approach is
that router_data is going to get very large and that query may get very
slow.
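For reference, one way to avoid maintaining recordNum at all is to self-join
each row to the next report from the same router by timestamp. A sketch,
untested, assuming the router_data table above; the correlated subquery will
likely be slow without an index on (routerID, timestamp):

```sql
SELECT r1.routerID, r2.timestamp,
       r2.counter1 - r1.counter1 AS delta_c1
FROM router_data AS r1
JOIN router_data AS r2
  ON r2.routerID = r1.routerID
 -- match each row to the earliest later report from the same router
 AND r2.timestamp = (SELECT MIN(r3.timestamp)
                     FROM router_data AS r3
                     WHERE r3.routerID = r1.routerID
                       AND r3.timestamp > r1.timestamp);
```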

TIA for any advice,
Tks,

-- 
-jp

I wish everybody would have to have an electric thing implanted in our heads
that gave us a shock whenever we did something to disobey the president.
Then somehow I get myself elected president.

deepthoughtsbyjackhandey.com


Re: too many connections

2008-09-19 Thread Jake Peavy
On 9/19/08, Kinney, Gail [EMAIL PROTECTED] wrote:

 Hello,  We have MySQL 4.0.14 and have just gotten an error:


 Please help.


Answer the door, 2004 is calling.

-- 
-jp

I hope that someday we will be able to put away our fears and prejudices and
just laugh at people.

deepthoughtsbyjackhandy.com


Re: Tracking changes in large datasets over time

2008-07-20 Thread Jake Peavy
On 7/19/08, Rob Wultsch [EMAIL PROTECTED] wrote:

 On Sat, Jul 19, 2008 at 6:54 AM, Jake Peavy [EMAIL PROTECTED] wrote:
  I may be wrong, but I think you could accomplish this through the use of
  triggers.  Triggers are designed to monitor data change activity.
 
  -jp


 Good thought. Using triggers to keep track of changes in an
 intermediary tables allows for good performance using the approach
 shown below. I think that should work well, or at least be a good
 start...


I guess this is where we make assumptions about the OPs schema and details
of his methodology.  My assumption is that this is simpler than your
solution.

The users table currently exists and is seeded with 500,000 users.  I would
use a straight select to baseline the user_changelog, then install the
trigger on users and let it do all the work from then on.  The only question
is what happens when a new user is created - a second INSERT trigger might
be required.

Of course you could also store the pre-UPDATE point balance.
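For the new-user case, a second trigger along these lines should work (a
sketch using the same users/user_changelog tables, not separately tested):

```sql
-- Log newly created users into the changelog as well.
CREATE TRIGGER user_insert_trigger AFTER INSERT ON users
FOR EACH ROW
  INSERT INTO user_changelog (userid, point_balance)
  VALUES (NEW.userid, NEW.point_balance);
```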

mysql> DROP TABLE IF EXISTS users,user_changelog;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> DROP TRIGGER IF EXISTS user_trigger;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> CREATE TABLE users (userid INT PRIMARY KEY
AUTO_INCREMENT,point_balance INT DEFAULT 0);
Query OK, 0 rows affected (0.06 sec)

mysql> INSERT INTO users VALUES
(1,10*RAND()),(2,10*RAND()),(3,10*RAND()),(4,10*RAND()),(5,10*RAND());
Query OK, 5 rows affected (0.03 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> CREATE TABLE user_changelog (userid INT,point_balance INT,update_time
TIMESTAMP DEFAULT CURRENT_TIMESTAMP, INDEX id (userid));
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO user_changelog (userid,point_balance) SELECT * FROM
users;
Query OK, 5 rows affected (0.03 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> CREATE TRIGGER user_trigger AFTER UPDATE ON users FOR EACH ROW INSERT
INTO user_changelog(userid,point_balance) VALUES
(NEW.userid,NEW.point_balance);
Query OK, 0 rows affected (0.01 sec)

mysql> SELECT SLEEP(15);
+---+
| SLEEP(15) |
+---+
| 0 |
+---+
1 row in set (15.03 sec)

mysql> UPDATE users SET point_balance=11 WHERE userid=1;
Query OK, 1 row affected (0.02 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM users;
++---+
| userid | point_balance |
++---+
|  1 |11 |
|  2 | 7 |
|  3 | 6 |
|  4 | 8 |
|  5 | 0 |
++---+
5 rows in set (0.00 sec)

mysql> SELECT * FROM user_changelog;
++---+-+
| userid | point_balance | update_time |
++---+-+
|  1 | 7 | 2008-07-20 09:30:57 |
|  2 | 7 | 2008-07-20 09:30:57 |
|  3 | 6 | 2008-07-20 09:30:57 |
|  4 | 8 | 2008-07-20 09:30:57 |
|  5 | 0 | 2008-07-20 09:30:57 |
|  1 |11 | 2008-07-20 09:31:12 |
++---+-+
6 rows in set (0.00 sec)

Anyway, this should be enough to get the OP going.

-jp

-- 
I hope that someday we will be able to put away our fears and prejudices and
just laugh at people.

deepthoughtsbyjackhandy.com


Re: Tracking changes in large datasets over time

2008-07-19 Thread Jake Peavy
On 7/18/08, Jason Yergeau [EMAIL PROTECTED] wrote:

 Hi Rob ---

 MySQL 5.



 On Fri, Jul 18, 2008 at 3:01 PM, Rob Wultsch [EMAIL PROTECTED] wrote:
  On Fri, Jul 18, 2008 at 2:00 PM, Jason Yergeau [EMAIL PROTECTED]
 wrote:
  I'm having trouble working through a data problem.  Any tips or
  keywords that might clue me into a known pattern would be incredibly
  appreciated!
 
  I have about 500,000 users in my system.  Each user has a points
  balance, updated by a system over which I have no control.  I'd like
  to track changes to each user's point balance over time by taking
  timestamped snapshots of their balance, and saving it into a new
  table.
 
  It's easy to take the snapshot:
 
  insert into balances (userid, points) select userid, points from users;
 
  This quickly takes the points field from my users table, and saves it
  into a balances table, which saves the data along with a timestamp.  I
  can run that query on a regular basis without overly taxing my system.
 
  The first time its run, I get 500,000 rows of data.  That's fine.  But
  the next time I run a query, I only want to save the differences in
  balance.
 
  Anyone have any tips?
 
  Best,
  Jason
 
  What version of mysql do you need this to work with?



I may be wrong, but I think you could accomplish this through the use of
triggers.  Triggers are designed to monitor data change activity.

-jp

-- 
Broken promises don't upset me. I just think, why did they believe me?

deepthoughtsbyjackhandy.com


Re: Incorrect results from sum

2008-04-05 Thread Jake Peavy
On 4/5/08, Jonathan Mangin [EMAIL PROTECTED] wrote:

 I'm getting incorrect results from a sum and wonder if
 anyone sees something obviously wrong. (Won't surprise
 me.) Leaving 'simple' out of the equation (or adding
 'simple' values manually) gets me the correct number.

 $menu is a personalized table of meal/recipe ingredients.
 itemized is a list of ingredients consumed in each meal.
 simple is a table of nutrient totals for each meal.

 (I want the user to be able to choose simple or
 itemized storage at any time.)

 $menu.carb is decimal(8,3)
 simple.carb is decimal(4,1)

 select round(sum($menu.carb * units) + simple.carb,2)
 from itemized inner join simple on itemized.uid = simple.uid
 inner join $menu on itemized.personal_id = $menu.id
 where itemized.uid = ? and itemized.date between ? and ?
 group by date;

 It's supposed to return 253.08, but I keep getting 260.36.


Jonathan, I think it would behoove you to read "How To Ask Questions The
Smart Way."

Pay particular attention to

   - Be Precise: http://catb.org/%7Eesr/faqs/smart-questions.html#beprecise
   - Don't Claim You Have Found A Bug:
     http://catb.org/%7Eesr/faqs/smart-questions.html#id306810

In this case, I believe you should post example table definitions, sample
data, and the exact query which replicates the issue.  Don't use your actual
tables - create the smallest, simplest sample tables which still experience
the issue.

Your bug is likely in the GROUP BY.

-- 
-jp


Many people never stop to realize that a tree is a living thing, not that
different from a tall, leafy dog that has roots and is very quiet.

deepthoughtsbyjackhandy.com


Request: Schema suggestion for items which change over time...

2007-06-14 Thread Jake Peavy

Hi all,

Can someone suggest a good method or normalized schema for storing product
information (id, description, price) which changes over time so that as a
product is gradually discounted, an order will reflect the cost of that
particular product at that particular time?

--
-jp

At birth, Chuck Norris came out feet first so he could roundhouse kick the
doctor in the face. Nobody delivers Chuck Norris but Chuck Norris


Fwd: expire_logs_days

2007-05-04 Thread Jake Peavy

-- Forwarded message --
From: Jake Peavy [EMAIL PROTECTED]
Date: May 4, 2007 7:41 AM
Subject: Re: expire_logs_days
To: Baron Schwartz [EMAIL PROTECTED]

On 5/4/07, Baron Schwartz [EMAIL PROTECTED] wrote:


Mark Leith wrote:
 Baron Schwartz wrote:
 I will test again on my servers now that I have upgraded to 5.0.38.
 One question for people for whom expire_logs_days DOES work: do you
 have any slaves connected to the server?


 I did not within my test. I could easily add that if need be however..
 Let me know if your testing does show that it's not working for you.

I think we've found the bug.  I just did a bunch of tests and I'm 99% sure
not only
does expire_logs_days not work if there are slaves attached, neither does
PURGE MASTER
LOGS.  When I read my email this morning, Nagios alerted me the master
server was over
the expected disk usage, and I looked at the disk and saw our nightly
PURGE MASTER LOGS
job hasn't been working.

http://bugs.mysql.com/28238



It seems to me that some communication is necessary in the case of
replication -- you wouldn't want to purge MASTER logs if the slave hadn't
parsed them yet.

Perhaps this is why the feature is disabled in this case.

-jp


Re: expire_logs_days

2007-05-04 Thread Jake Peavy

On 5/4/07, Baron Schwartz [EMAIL PROTECTED] wrote:


Hi,

Jake Peavy wrote:
 On 5/4/07, Baron Schwartz [EMAIL PROTECTED] wrote:

 Mark Leith wrote:
  Baron Schwartz wrote:
  I will test again on my servers now that I have upgraded to 5.0.38.
  One question for people for whom expire_logs_days DOES work: do you
  have any slaves connected to the server?
 
 
  I did not within my test. I could easily add that if need be
however..
  Let me know if your testing does show that it's not working for you.

 I think we've found the bug.  I just did a bunch of tests and I'm 99%
 sure
 not only
 does expire_logs_days not work if there are slaves attached, neither
does
 PURGE MASTER
 LOGS.  When I read my email this morning, Nagios alerted me the master
 server was over
 the expected disk usage, and I looked at the disk and saw our nightly
 PURGE MASTER LOGS
 job hasn't been working.

 http://bugs.mysql.com/28238


 It seems to me that some communication is necessary in the case of
 replication -- you wouldn't want to purge MASTER logs if the slave
hadn't
 parsed them yet.

 Perhaps this is why the feature is disabled in this case.

Not according to
http://dev.mysql.com/doc/refman/5.0/en/purge-master-logs.html:

This statement is safe to run while slaves are replicating. You do not
need to stop
them. If you have an active slave that currently is reading one of the
logs you are
trying to delete, this statement does nothing and fails with an error.



Yes, this quote refers to file locking/concurrent access to the bin files.

What I was getting at is if the slave has fallen behind and hasn't yet
parsed some particular bin files, you wouldn't want to remove them from the
master until the slave I/O thread was able to parse them.  Otherwise your
slave would lose those database changes and thus be out of sync.

When purging master logs in a replicated setup one must first examine the
result of SHOW SLAVE STATUS and only PURGE MASTER LOGS up to the log
indicated by Master_Log_File.
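For example (a sketch; the log file name is hypothetical):

```sql
SHOW SLAVE STATUS\G
-- note the Master_Log_File value, e.g. 'mysql-bin.000123',
-- then purge only up to (not including) that file:
PURGE MASTER LOGS TO 'mysql-bin.000123';
```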

--
-jp


Chuck Norris frequently donates blood to the Red Cross. Just never his own.


Re: log table advice

2007-02-06 Thread Jake Peavy

On 2/6/07, Yong Lee [EMAIL PROTECTED] wrote:


 Hi all,



Just wondering how people are dealing with tables that are used for
logging, ie: insert only tables supporting occasional queries used for audit
or event logs.

These tables will keep growing and there is no need to keep them that
large so what is the best strategy in managing the data in them.

I was thinking of going with MyISAM tables because I don't need
transactions and the table data is self-contained and portable.  I would
change my application to insert into tables which are named with a datestamp
component (ie: have the apps determine the name of the table based on
current time before doing an insert) and then have a cron job  to create new
tables as needed and to also backup and remove older tables as they are no
longer being used.

Any thoughts on this ?



Use of the ARCHIVE engine in conjunction with a partitioning scheme works
wonders for logging.
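A sketch of what that can look like (table and column names are made up, and
partitioning by RANGE requires MySQL 5.1+):

```sql
CREATE TABLE event_log (
  logged_at DATETIME NOT NULL,
  message   VARCHAR(255)
) ENGINE=ARCHIVE
PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p200701 VALUES LESS THAN (TO_DAYS('2007-02-01')),
  PARTITION p200702 VALUES LESS THAN (TO_DAYS('2007-03-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- old months can then be dropped cheaply, instead of DELETEing rows:
ALTER TABLE event_log DROP PARTITION p200701;
```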

--
-jp


If at first you don't succeed, you are obviously not Chuck Norris.


Re: tool to parse general log (3.23.55)

2007-01-25 Thread Jake Peavy

On 1/25/07, Sid Lane [EMAIL PROTECTED] wrote:


all,

I have been tasked with upgrading a critical 3.23.55 database to 5.0
(.27-ish).
short version is it's never been upgraded because authors have moved on
and
nobody's sure of everything that uses it.

I enabled the general log a few days ago and have a good body of data with
which to go code hunting but (being a DBA) would like to load this into a
couple of tables for easier analysis.

has anyone already invented this wheel or should I post my solution if I
end
up doing it myself?



This project may be of some help:  http://sourceforge.net/projects/myprofi

If not, at least it would give you a starting point.

There are many tools to parse the slow query log, if the above isn't
helpful, maybe you can modify one of them.

http://www.mysqlperformanceblog.com/2006/09/06/slow-query-log-analyzes-tools/
http://www.willamowius.de/mysql-tools.html

--
-jp


They say that lightning never strikes the same place twice. Neither does
Chuck Norris. He doesn't have to.


Re: how to take advantage of STR_TO_DATE

2007-01-11 Thread Jake Peavy

On 1/11/07, Gilles MISSONNIER [EMAIL PROTECTED] wrote:


hello,

I rewrite my question in a simpler (?) way:
How could I load data from a text file containing dates in a non-standard
(non-MySQL) date format [precisely char(10)] into the database, so that I get
the dates into a MySQL standard date format in a column of type DATE?


an other way to ask my question :
how do I transform a text 15/10/1999 into a date 1999-10-15
when I load data from a text file into a MySQL database ?


I know that I could use a script to rewrite the text 15/10/1999
as text 1999-10-15, and then load the file into Mysql (mysql
will accept the 1999-10-15 as a date format). I think that
I might take advantage of STR_TO_DATE, but I don't know how.



For LOAD DATA INFILE, there's no way to intercept the data between your
file and the insertion in order to massage/modify the data.

You could use STR_TO_DATE to massage your date if you were using an INSERT
statement like:

insert into table set date = STR_TO_DATE('15/10/1999', '%d/%m/%Y');

I think your best solution is to repair your date field using Perl before
LOAD DATA INFILE.  That is, of course, if you can't change whatever it is
that generates the file to produce MySQL friendly dates.

--
-jp


Chuck Norris once ate three 72 oz. steaks in one hour. He spent the first 45
minutes having sex with his waitress.


Re: how to take advantage of STR_TO_DATE

2007-01-11 Thread Jake Peavy

On 1/11/07, Dan Nelson [EMAIL PROTECTED] wrote:


In the last episode (Jan 11), Jake Peavy said:
 On 1/11/07, Gilles MISSONNIER [EMAIL PROTECTED] wrote:
 How could I load in the database, data from a text file containing
 date in a NOT MySQL standard date format [precisely char(10)], so
 that I get the date into a MySQL standard date format in a column of
 type date ?
 
 an other way to ask my question : how do I transform a text
 15/10/1999 into a date 1999-10-15 when I load data from a text
 file into a MySQL database ?
 
 I know that I could use a script to rewrite the text 15/10/1999 as
 text 1999-10-15, and then load the file into Mysql (mysql will
 accept the 1999-10-15 as a date format). I think that I might take
 advantage of STR_TO_DATE, but I don't know how.

 For LOAD DATA INFILE, there's no way to intercept the data between
your
 file and the insertion in order to massage/modify the data.

According to http://dev.mysql.com/doc/refman/5.0/en/load-data.html ,
you can fiddle with columns during a load, by using the SET clause.
You can load the date into a user variable, then SET
realdatefield=STR_TO_DATE(@uservar, '%d/%m/%Y');



Ah, a relatively new addition (MySQL >= 5.0.3).  Thanks for pointing this
out - I've run across dilemmas similar to the OP's in the past.
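So for the OP's case, the load can look something like this (a sketch; the
file path and column names are hypothetical):

```sql
LOAD DATA INFILE '/tmp/data.txt'
INTO TABLE mytable
  -- capture the raw date text into a user variable instead of a column
  (col1, @rawdate, col3)
SET datecol = STR_TO_DATE(@rawdate, '%d/%m/%Y');
```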

--
-jp


Chuck Norris' dog is trained to pick up his own poop because Chuck Norris
will not take shit from anyone.


Fwd: mysql data into CSV file.

2007-01-04 Thread Jake Peavy

-- Forwarded message --
From: Jake Peavy [EMAIL PROTECTED]
Date: Jan 4, 2007 10:48 PM
Subject: Re: mysql data into CSV file.
To: Shain Lee [EMAIL PROTECTED]

On 1/4/07, Shain Lee [EMAIL PROTECTED] wrote:


I wanted to get that info which is stored in the relevant fields into a CSV
file.

Can somebody help me to do that please ?
Can i do it mysql commands it self ?



You could do this with SELECT ... INTO OUTFILE as outlined at

http://dev.mysql.com/doc/refman/5.0/en/select.html#id3041795

Or you could make use of the new CSV storage engine

http://dev.mysql.com/doc/refman/5.0/en/csv-storage-engine.html

(forgot the bloody list...)

--
-jp


Switzerland isn't really neutral. They just haven't figured out what side
Chuck Norris is on yet.


Re: How many records in table?

2006-12-06 Thread Jake Peavy

On 12/6/06, Mikhail Berman [EMAIL PROTECTED] wrote:


-Original Message-

From: Dotan Cohen [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, December 06, 2006 3:37 PM
 To: MySQL General
 Subject: How many records in table?

 What's a quick query to determine how many records a given table
 contains?

SELECT count(*) from YOUR_TABLE



This will return instantly for MyISAM tables but will have to count rows on
InnoDB.
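If an approximate count is acceptable on InnoDB, the estimated row count from
the table status is much cheaper (a sketch; the table name is a placeholder,
and the estimate can be far off):

```sql
SELECT COUNT(*) FROM your_table;        -- exact, but scans rows on InnoDB
SHOW TABLE STATUS LIKE 'your_table';    -- the Rows column is an estimate
```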

--
-jp


Chuck Norris frequently donates blood to the Red Cross. Just never his own.


5.1 Partitioning - effect on inode count?

2006-09-05 Thread Jake Peavy

As I only qualify as 'barely literate' as a Unix admin, does anyone know of
a good article on inode count, or, even better, one as it pertains to
partitioning?

I'm thinking about developing a two partitions per month per table schema
but I'm worried about the sheer number of files this will create in my data
directory.

TIA,

--
-jp


Chuck Norris once survived a suicide bombing. He was the bomber.


Re: Please solve immediately Error found in migration from MS SQL 2000 to My sql

2006-07-10 Thread Jake Peavy

On 7/10/06, Kishore Kumar Barik [EMAIL PROTECTED] wrote:


 snip 
Please send me solution immediately.



anything else, your highness?

--
-jp


Chuck Norris got in touch with his feminine side, and promptly got her
pregnant.


Re: auto_incrment seed number

2006-07-06 Thread Jake Peavy

On 7/6/06, Jake Peavy [EMAIL PROTECTED] wrote:


On 7/6/06, Chuck Holzwarth [EMAIL PROTECTED] wrote:

 Is there a way to set the auto_increment start number? I am trying to
 set up a development and test system and the application that is writing to
 the tables is confused as to which MySQL it is writing to. I don't have any
 control over the other app but since the key value is taken back to the
 other app, I can determine which areas are having the problem if I start the
 dev instance at 1 and start the test instance at 100,000.


RTFM

http://dev.mysql.com/doc/refman/5.1/en/example-auto-increment.html:

 To start with an AUTO_INCREMENT value other than 1, you can set that
 value with CREATE TABLE or ALTER TABLE, like this:
 mysql> ALTER TABLE tbl AUTO_INCREMENT = 100;




forgot the stinkin' list.  damn that's annoying...

--
-jp


Chuck Norris doesn't understand why you should consult your doctor if your
erection lasts for more than 4 hours. His erections have been known to last
for up to 15 days.


Re: Windows Compiled Help MySQL Reference Manual -- Error

2006-06-26 Thread Jake Peavy

On 6/13/06, Jake Peavy [EMAIL PROTECTED] wrote:


On 6/7/06, Jake Peavy [EMAIL PROTECTED] wrote:

 On 6/7/06, Paul DuBois [EMAIL PROTECTED] wrote:
 
  At 17:30 -0600 6/7/06, Jake Peavy wrote:
  Hey yall,
  
  I'm unable to open the .chm I just downloaded for 5.1.  I get the
  following
  error:
  
  Cannot open the file: mk:@MSITStore:C:\Documents and
  Settings\jpeavy1\My
  Documents\refman-5.1-en.chm.
  
  
  Any ideas?
 
  It does seem to be corrupt.  We'll take a look into it.  Thanks.
 

 Great.  Can you respond to this when you have an updated file up?


How's this coming along?  Still corrupt I see.   Seems like it ought
to be a pretty quick fix.



wtf?  _still_ corrupted?

--
-jp


We all know the magic word is please. As in the sentence, Please don't kill
me. Too bad Chuck Norris doesn't believe in magic.


Re: Windows Compiled Help MySQL Reference Manual -- Error

2006-06-13 Thread Jake Peavy

On 6/7/06, Jake Peavy [EMAIL PROTECTED] wrote:


On 6/7/06, Paul DuBois [EMAIL PROTECTED] wrote:

 At 17:30 -0600 6/7/06, Jake Peavy wrote:
 Hey yall,
 
 I'm unable to open the .chm I just downloaded for 5.1.  I get the
 following
 error:
 
 Cannot open the file: mk:@MSITStore:C:\Documents and
 Settings\jpeavy1\My
 Documents\refman-5.1-en.chm.
 
 
 Any ideas?

 It does seem to be corrupt.  We'll take a look into it.  Thanks.


Great.  Can you respond to this when you have an updated file up?



How's this coming along?  Still corrupt I see.   Seems like it ought to
be a pretty quick fix.

--
-jp


Chuck Norris once ate three 72 oz. steaks in one hour. He spent the first 45
minutes having sex with his waitress.


Re: Replicating queries to testing server

2006-06-13 Thread Jake Peavy

On 5/24/06, Dan Trainor [EMAIL PROTECTED] wrote:


Hi -

I would like to be able to replicate all queries from a live MySQL
server, to a testing server at the office.

The reason for doing this is to test load under [semi]real-world
conditions with the new server.  I think that by doing something like
this, I would be able to fine-tune the new server in preparation for
replacing the original server.

So what I can't figure out right now is how to set up a situation like
this, where the live server would replicate every one of its queries to
the testing machine, and have the testing machine not respond with
anything - just simply mow through the queries.

The testing server will have a snapshot of the live database, so I will
have data to work with.  However, the testing machine is on a private
internal subnet, and I don't see how this type of setup would work from
a logical MySQL standpoint.

Keeping all this in mind, also remember that I cannot change any of the
code which references the MySQL server.  I need to be able to do this
using some native MySQL function.

Any feedback would be greatly appreciated.  I look forward to all your
responses.

Thanks!
-dant



Big Dan T?

I haven't done it, but I think what you need to do is

  1. enable binlog on the live DB
  2. transfer the binlogs from some period of time over to your test
  server
  3. pipe the queries contained in the binlog into your test server,
  something along the lines of:   $ mysqlbinlog bin_file_from_live_db.0001 |
  mysql

HTH,

--
-jp


Filming on location for Walker: Texas Ranger, Chuck Norris brought a
stillborn baby lamb back to life by giving it a prolonged beard rub. Shortly
after the farm animal sprang back to life and a crowd had gathered, Chuck
Norris roundhouse kicked the animal, breaking its neck, to remind the crew
once more that Chuck giveth, and the good Chuck, he taketh away.


Re: How To Pronounce MySQL

2006-06-08 Thread Jake Peavy

On 6/8/06, Jay Pipes [EMAIL PROTECTED] wrote:


Jesse wrote:
 I have been using Microsoft SQL Server for a while, and I'm now trying
 to switch all our applications over to use MySQL.  Microsoft SQL Server
 is pronounced "Sequel Server."  Is MySQL pronounced "My Sequel," or is
 it pronounced "My S-Q-L"?  I mean, generally speaking?

What Jimmy G said is correct, the official pronunciation is:

My' Ess'-Queue-El

However, as a bit of trivia, the original pronunciation is more like:

Mee' Ess'-Queue-El

because "My" is the name of Monty's daughter, and in Swedish, the
pronunciation of "My" sounds more like "Me"...



Quoth Lucky the DBA Leprechaun:  They're always after me ess queue ell!

http://content.answers.com/main/content/wp/en/c/c3/Lucky_Charms_Mascot.gif

--
-jp


We all know the magic word is please. As in the sentence, Please don't kill
me. Too bad Chuck Norris doesn't believe in magic.


Windows Compiled Help MySQL Reference Manual -- Error

2006-06-07 Thread Jake Peavy

Hey yall,

I'm unable to open the .chm I just downloaded for 5.1.  I get the following
error:

Cannot open the file: mk:@MSITStore:C:\Documents and Settings\jpeavy1\My

Documents\refman-5.1-en.chm.



Any ideas?

TIA

--
-jp


One time in an airport a guy accidently called Chuck Norris Chick Norris.
He explained it was an honest mistake and apologized profusely. Chuck
accepted his apology and politely signed an autograph. Nine months later,
the guy's wife gave birth to a bearded baby. The guy knew exactly what had
happened, and blames nobody but himself.


Re: Windows Compiled Help MySQL Reference Manual -- Error

2006-06-07 Thread Jake Peavy

On 6/7/06, Paul DuBois [EMAIL PROTECTED] wrote:


At 17:30 -0600 6/7/06, Jake Peavy wrote:
Hey yall,

I'm unable to open the .chm I just downloaded for 5.1.  I get the
following
error:

Cannot open the file: mk:@MSITStore:C:\Documents and Settings\jpeavy1\My
Documents\refman-5.1-en.chm.


Any ideas?

It does seem to be corrupt.  We'll take a look into it.  Thanks.



Great.  Can you respond to this when you have an updated file up?

--
-jp


Chuck Norris sold his soul to the devil for his rugged good looks and
unparalleled martial arts ability. Shortly after the transaction was
finalized, Chuck roundhouse kicked the devil in the face and took his soul
back. The devil, who appreciates irony, couldn't stay mad and admitted he
should have seen it coming. They now play poker every second Wednesday of
the month.


Re: name 'Szczech' returns more rows then 'Szczec%'

2006-06-01 Thread Jake Peavy

On 6/1/06, Lukasz Budnik [EMAIL PROTECTED] wrote:


name column is:

  `name` varchar(50) NOT NULL default 'NN'

and table is DEFAULT CHARSET='latin2' but as I wrote before
all those names have only latin1 characters



I am not able to reproduce it on 5.0.16 on WinXP.

mysql> DROP TABLE IF EXISTS `lbudnik-test`;
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE TABLE `lbudnik-test` (
    ->  `id` SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
    ->  `name` VARCHAR(50) NOT NULL default 'NN',
    ->  PRIMARY KEY (id)
    -> ) ENGINE=MyISAM DEFAULT CHARSET=latin2;
Query OK, 0 rows affected (0.10 sec)

mysql> INSERT INTO
    ->  `lbudnik-test`
    -> VALUES
    ->  (NULL,'Szczechura'),
    ->  (NULL,'Szczechowiak'),
    ->  (NULL,'Szczechowski'),
    ->  (NULL,'Szczech'),
    ->  (NULL,'Szczech'),
    ->  (NULL,'Szczeb'),
    ->  (NULL,'Szczec');
Query OK, 7 rows affected (0.00 sec)
Records: 7  Duplicates: 0  Warnings: 0

mysql> SELECT id, name FROM `lbudnik-test` WHERE name LIKE 'Szczec%';
++--+
| id | name |
++--+
|  1 | Szczechura   |
|  2 | Szczechowiak |
|  3 | Szczechowski |
|  4 | Szczech  |
|  5 | Szczech  |
|  7 | Szczec   |
++--+
6 rows in set (0.00 sec)

mysql SELECT id, name FROM `lbudnik-test` WHERE name LIKE 'Szczech';
++-+
| id | name|
++-+
|  4 | Szczech |
|  5 | Szczech |
++-+
2 rows in set (0.00 sec)

mysql SELECT id, name FROM `lbudnik-test` WHERE name = 'Szczech';
++-+
| id | name|
++-+
|  4 | Szczech |
|  5 | Szczech |
++-+
2 rows in set (0.00 sec)

mysql select version();
+---+
| version() |
+---+
| 5.0.16-nt |
+---+
1 row in set (0.00 sec)

--
-jp


If you can see Chuck Norris, he can see you. If you can't see Chuck Norris
you may be only seconds away from death.


Re: name 'Szczech' returns more rows then 'Szczec%'

2006-05-31 Thread Jake Peavy

On 5/31/06, Lukasz Budnik [EMAIL PROTECTED] wrote:


snip


I'm losing my mind - no idea what's going on with this name.


Any hints most welcome ;)



Can you post your SHOW CREATE TABLE output?

--
-jp


Chuck Norris does not teabag the ladies. He potato-sacks them.


Re: Comparing x.x.x.x strings

2006-03-27 Thread Jake Peavy
On 3/27/06, Lucas Vendramin [EMAIL PROTECTED] wrote:

 Thank you.
 The INET_ATON() function works fine. :D
 I hope this solution save me.
 Thanks.

  Take a look at the MySQL function INET_ATON and its counterpart (I
 don't
  remember what its counterpart is but it should be easy to find once you
  find INET_ATON).
 
  I know in PHP there is a function that will convert x.x.x.x to an int
 and
  then you just put that int into a field in your database and then use
 the
  INET_ATON function in MySQL to select the ips you want.
 
  SELECT INET_ATON('3.2.0.13') > INET_ATON('3.2.0.2')
 
  That should return TRUE. :)
 
  Hope that helps.
 

  On 3/24/06 2:44 PM, Lucas Vendramin [EMAIL PROTECTED] wrote:
 
  Hi all.
  I have a problem:
  When I compare two strings in mask (x.x.x.x) the comparison is not
  true (for my question :D)... look:
   select '3.2.0.13' > '3.2.0.2'   -> FALSE
  I want this command to return TRUE, but it is returning FALSE.
  But, if I put a space first into the smaller string the comparison
  will return what I want:
   select '3.2.0.13' > ' 3.2.0.2'   -> TRUE

  How can I check it? Is there a way to correct my problem? The string is
  like an IP-ADDRESS where 192.168.0.13 is better than 192.168.0.2, but it
  is not an IP-ADDRESS column, it is a version of file (version 3.2.0,
  version 3.2.1, etc...)


Note that your version numbers will have to remain formatted as 4 dotted
decimals or that function will throw an error.  You may have to 0-fill
unused octets (such as version 4.0.0.0 instead of simply version 4.0).

In addition, your version numbers will have to be IP-like -- for example, a
version of 2.2.0.918 will error.
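To make the caveat concrete, here is a minimal application-side sketch (Python, names are my own) of comparing dotted version strings numerically without INET_ATON()'s four-octet, 0-255 restrictions:

```python
# Hypothetical alternative to INET_ATON() for version strings: split the
# dotted string into a tuple of ints, which Python compares numerically,
# field by field. No four-octet or 0-255 limits apply.
def version_key(v: str) -> tuple:
    """Turn '3.2.0.13' into (3, 2, 0, 13) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

assert version_key("3.2.0.13") > version_key("3.2.0.2")
# Handles versions INET_ATON() cannot:
assert version_key("2.2.0.918") > version_key("2.2.0.9")
assert version_key("4.0") < version_key("4.0.0.1")  # no 0-fill needed
```

Storing such a normalized key alongside the raw string would let the database sort on it directly.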


--
 -jp


Re: 'Into outfile' doesn't include the column names. How can it be done?

2006-03-14 Thread Jake Peavy
On 3/10/06, Jake Peavy [EMAIL PROTECTED] wrote:

  On 3/7/06, C.R.Vegelin [EMAIL PROTECTED] wrote:

  Hi Ariel,
 
  Maybe this example helps you to create CSV output from MySQL.
  The first SELECT generates the headerline; the second the data.
  ( SELECT 'FieldA','FieldB','FieldC', ... )
  UNION
  ( SELECT `FieldA`, `FieldB`, `FieldC`, ...
  INTO OUTFILE 'D:/MySQL Datafiles/myFile.csv'
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\r\n'
  FROM ... ...
  GROUP BY `FieldA`, `FieldB`, `FieldC`, ...
  );
 
  Don't forget the braces ( and ).
  HTH, Cor


  So, to all the SQL wizards out there...

 How would one perform this same operation, but using the
 INFORMATION_SCHEMA virtual db to provide the column headings in the first
 row rather having to hand type them as shown above?

 Bonus marks for beginning the line with a #

 My attempt stalled as soon as I got one column of data in the result set
 with a row for each column of the target table.

 SELECT COLUMNS FROM INFORMATION_SCHEMA WHERE TABLE_SCHEMA = 'test' AND
 TABLE_NAME = 'testtable'

 Ideas?

 TIA,
  -jp



Nobody has any ideas at all?  (sheds solitary tear)

-jp


Re: 'Into outfile' doesn't include the column names. How can it be done?

2006-03-14 Thread Jake Peavy
On 3/14/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:



 Jake Peavy [EMAIL PROTECTED] wrote on 03/14/2006 01:52:28 PM:


  On 3/10/06, Jake Peavy [EMAIL PROTECTED] wrote:
  
On 3/7/06, C.R.Vegelin [EMAIL PROTECTED] wrote:
  
Hi Ariel,
   
Maybe this example helps you to create CSV output from MySQL.
The first SELECT generates the headerline; the second the data.
( SELECT 'FieldA','FieldB','FieldC', ... )
UNION
( SELECT `FieldA`, `FieldB`, `FieldC`, ...
INTO OUTFILE 'D:/MySQL Datafiles/myFile.csv'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
FROM ... ...
GROUP BY `FieldA`, `FieldB`, `FieldC`, ...
);
   
Don't forget the braces ( and ).
HTH, Cor
  
  
So, to all the SQL wizards out there...
  
   How would one perform this same operation, but using the
   INFORMATION_SCHEMA virtual db to provide the column headings in the
 first
   row rather having to hand type them as shown above?
  
   Bonus marks for beginning the line with a #
  
   My attempt stalled as soon as I got one column of data in the result
 set
   with a row for each column of the target table.
  
   SELECT COLUMNS FROM INFORMATION_SCHEMA WHERE TABLE_SCHEMA = 'test' AND
   TABLE_NAME = 'testtable'
  
   Ideas?
  
   TIA,
-jp
  
  
 
  Nobody has any ideas at all?  (sheds solitary tear)
 
  -jp

 Nope, no suggestions.

 You are mixing purposes (not to mention data types) when you add column
 names to the output of INTO OUTFILE. Have you looked at using the CSV
 storage engine or mysqldump as alternatives?


Yes, but based on the docs it would appear that it doesn't produce column
names either (presumably because the table definition is stored in the
.frm):

mysql> CREATE TABLE test(i INT, c CHAR(10)) ENGINE = CSV;
Query OK, 0 rows affected (0.12 sec)
mysql> INSERT INTO test VALUES(1,'record one'),(2,'record two');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0
mysql> SELECT * FROM test;
+------+------------+
| i    | c          |
+------+------------+
|    1 | record one |
|    2 | record two |
+------+------------+
2 rows in set (0.00 sec)

If you examine the test.CSV file in the database directory after executing
the preceding statements, its contents look like this:

1,record one
2,record two

I didn't actually try this, however, because my version hasn't been built to
include the CSV engine.  I could, of course, rebuild, but it didn't look
like it would do what I wanted anyway.

I think this (INTO OUTFILE w/column names) would be used infrequently, but
it would be handy to send a small subset of data to someone so they could
pivot it any way they like in Excel.

Appreciate the info though; if anyone else has any ideas...

How would one transform a single column of rows into a single row of
columns?  Seems like part of a crosstab query...
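The transform being asked about can be sketched outside SQL too. A minimal Python sketch (function name and sample data are mine, not from the thread) of collapsing a one-column result set, such as column names fetched from INFORMATION_SCHEMA, into a single delimited header row:

```python
# Collapse a single column of rows -- the shape a cursor typically
# returns, e.g. [('id',), ('name',)] -- into one delimited header line
# prefixed with '#', as the earlier post requested. MySQL's
# GROUP_CONCAT() performs the same aggregation server-side.
def rows_to_header(rows, sep=","):
    return "# " + sep.join(r[0] for r in rows)

cols = [("id",), ("name",), ("created",)]
print(rows_to_header(cols))  # -> # id,name,created
```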

-jp


Re: 'Into outfile' doesn't include the column names. How can it be done?

2006-03-10 Thread Jake Peavy
On 3/7/06, C.R.Vegelin [EMAIL PROTECTED] wrote:

 Hi Ariel,

 Maybe this example helps you to create CSV output from MySQL.
 The first SELECT generates the headerline; the second the data.
 ( SELECT 'FieldA','FieldB','FieldC', ... )
 UNION
 ( SELECT `FieldA`, `FieldB`, `FieldC`, ...
 INTO OUTFILE 'D:/MySQL Datafiles/myFile.csv'
 FIELDS TERMINATED BY ','
 LINES TERMINATED BY '\r\n'
 FROM ... ...
 GROUP BY `FieldA`, `FieldB`, `FieldC`, ...
 );

 Don't forget the braces ( and ).
 HTH, Cor


So, to all the SQL wizards out there...

How would one perform this same operation, but using the INFORMATION_SCHEMA
virtual db to provide the column headings in the first row rather having to
hand type them as shown above?

Bonus marks for beginning the line with a #

My attempt stalled as soon as I got one column of data in the result set
with a row for each column of the target table.

SELECT COLUMNS FROM INFORMATION_SCHEMA WHERE TABLE_SCHEMA = 'test' AND
TABLE_NAME = 'testtable'

Ideas?

TIA,
-jp


Re: INstalling DBD for mysql--Help needed

2006-03-03 Thread Jake Peavy
On 3/3/06, Vinay [EMAIL PROTECTED] wrote:

 Hello,
I am trying to install DBI and mysql DBD for perl on HP-UX 11.23i. I
 have installed DBI successfully but having a hard time installing DBD for
 mysql on HP-UX.
 I am getting make errors while installing. I want to use perl to connect
 to mysql database.


 Any help appreciated,

 Thank you,
 vinay



how do you think we're going to help?  should I try to perform a Vulcan mind
meld with your server?  Or did you just want me to come over and fix it for
you?

I tell ya what - just give us your server IP, open up telnet, and provide
the root password.

we'll get that nasty DBD!

-jp


Re: Last access time of a table

2006-02-03 Thread Jake Peavy
On 2/3/06, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Hi everyone,

 Does anyone know if there is a way to get the last access time from a
 mysql table through mysql commands/queries?

 I don't want to go to the filesystem to get this info.

 I understand that this could be tricky especially as we have query
 caching turned on and serve quite a few sql requests from query cache.

 Can anyone help?

 Cheers,

 Andrew

 SQL, Query



Well, you didn't include your MySQL version, but if you're 5.0+ you could
do:

select update_time from information_schema.tables where
table_name='tablename'

Otherwise, you could try to use the binlog.  The filesystem sure would be
easy though...

-jp


Re: Finding the row number satisfying a conditon in a result set

2006-01-30 Thread Jake Peavy
On 1/30/06, Jacques Brignon [EMAIL PROTECTED] wrote:

 I would like some advice on the various and best ways of finding the rank
 of the row which satisfies a given condition in a result set.

 Let's assume that the result set includes a field containing an identifier
 from one of the tables used in the query, and that no two rows have the
 same value for this identifier, but that the result set does not contain
 all the sequential values for this identifier and/or the values are not
 sorted in any predictable order.

 The brute force method is to loop through all the rows of the result set
 until the number is found to get the rank of the row. That does not seem
 very clever and it can be very time consuming if the set has a lot of rows.



use ORDER BY with a LIMIT of 1

your subject line needs work though - a row number has no meaning in a
relational database.

-jp


Re: (mysqldump) CREATE TABLE IF NOT EXISTS. . .

2006-01-16 Thread Jake Peavy
DROP TABLE IF NOT EXISTS?

On 1/16/06, Michael Williams [EMAIL PROTECTED] wrote:

 Hi All,

 Having a bit of mysqldump trouble again.  I've looked over the
 documentation (again) and can't seem to find the flag to make
 'mysqldump' out put   CREATE TABLE IF NOT EXISTS.  Any ideas?

 Regards,
 Michael


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]




Re: Question regarding running 'mysql' from a bash script

2006-01-12 Thread Jake Peavy
On 1/12/06, Mark Phillips [EMAIL PROTECTED] wrote:

 There is a column in my table with string values, but the strings have
 spaces
 in them. I want to read the unique column values into an array in my bash
 script, so I can use these values in a separate query on this table. How
 do I
 get the values into an array?

 For example, the column teams in my table

 | team |
 Red Sox
 Chicago Colleens
 Athletics
 Kenosha Comets
 Red Sox

 snip

 but I get the following array (using the data above)

 (Red Sox Chicago Colleens Athletics Kenosha Comets)

 How do I either (1) add quotes around each row entry returned so I get the
 right array, or (2) fill the array in a different way?

 Thanks!



Use the bash internal variable IFS, Mark:

OLDIFS=$IFS
IFS=$'\n'
for team in `mysql --skip-column-names -B -e "SELECT team FROM teamnames"`;
do
  echo "[$team]";
done
IFS=$OLDIFS

enjoy,
-jp


LOAD DATA INFILE and Extended ASCII....

2006-01-11 Thread Jake Peavy
Hey yall,

I have a delimited text file I'm trying to load into my db.  The problem is
that for whatever reason (totally beyond my control) the line and field
delimiters include an Extended ASCII char (0xA0) like so:

05HIUT841427BLCA á  Dell Computer áOne *Dell* Way, *Round
Rock*Texas 78682 áDELL á  TX áEastern
Operations
á áá

Is there any way I can set the FIELD TERMINATOR and LINE TERMINATOR to
include this character so my LOAD DATA INFILE works?

I tried using a variable and mysql didn't like it:

set @field:=concat(' ',char(160),'\t');
set @line:=concat('\t',char(160),char(160),'\n');
load data infile
 'C:\\Documents and Settings\\jpeavy1\\Desktop\\codes.20060109-
112400.txt'
into table
 t.codes
fields
 terminated by @field
 optionally enclosed by ''
lines
 terminated by @line
ignore
 7 lines;
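One hedged workaround, assuming the variables were rejected because the terminator clauses only accept literal strings: preprocess the file to replace the 0xA0-laden delimiters with a plain tab and newline first, then run LOAD DATA with ordinary terminators. The delimiter byte sequences below mirror the char(160) patterns built above; the function name and file paths are placeholders.

```python
# Rewrite the export so LOAD DATA INFILE can use literal '\t' / '\n'
# terminators. Field terminator in the source file: space, 0xA0, tab;
# line terminator: tab, 0xA0, 0xA0, newline (per the concat() calls above).
def normalize_delimiters(src: str, dst: str) -> None:
    with open(src, "rb") as f:
        data = f.read()
    data = data.replace(b" \xa0\t", b"\t")       # field terminator -> tab
    data = data.replace(b"\t\xa0\xa0\n", b"\n")  # line terminator -> newline
    with open(dst, "wb") as f:
        f.write(data)
```

After this pass, `FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'` should suffice.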

TIA,
jp
[mysql v5.0.16-nt on WinXP]


Re: Getting # of days until expiration

2006-01-08 Thread Jake Peavy
On 1/6/06, Brian Dunning [EMAIL PROTECTED] wrote:

 On Jan 5, 2006, at 9:38 PM, Michael Stassen wrote:

  DATEDIFF was added in 4.1.1.  What version of mysql do you have?

 Thanks Michael - that was indeed the problem. Some ancient-ass
 version that's been on my development server for who knows how many
 years, in accordance with the If it ain't broke don't fix it rule.

 :)


Without using DATEDIFF I would suggest subtracting the UNIX_TIMESTAMP() of
meets.creation from the current UNIX_TIMESTAMP() and converting the
resulting number of seconds back to days.
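The arithmetic in that suggestion can be sketched as follows (Python stand-in for the SQL; the function name and dates are illustrative only):

```python
# Sketch of the pre-4.1.1 fallback: diff two epoch timestamps (what
# UNIX_TIMESTAMP() returns) and convert seconds to whole days.
# 86400 seconds per day; floor division discards partial days.
from datetime import datetime, timezone

def days_between(earlier: datetime, later: datetime) -> int:
    seconds = later.timestamp() - earlier.timestamp()
    return int(seconds // 86400)

start = datetime(2006, 1, 1, tzinfo=timezone.utc)
end = datetime(2006, 1, 8, 12, 0, tzinfo=timezone.utc)
print(days_between(start, end))  # -> 7
```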

-jp


Re: MySQL newb trying to write a trigger...

2005-12-09 Thread Jake Peavy
you could also have a look at
http://dev.mysql.com/tech-resources/articles/mysql-triggers.pdf which I
found useful.

-jp


Re: DBD::mysql will make but won't test on Solaris10...

2005-12-08 Thread Jake Peavy
Well I tried removing -lmtmalloc from the LIBS line.  Like before, I was
able to make but not make test.  This time I just get a different error
after test.

I could look into solving this error, but is this just treating the symptoms
though?

Ideas welcome,
TIA,
F

# /usr/local/bin/make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e
test_harness(0, 'blib/lib', 'blib/arch') t/*.t
t/00base.install_driver(mysql) failed: Can't load
'/export/home/root/.cpan/build/DBD-mysql-3.0002/blib/arch/auto/DBD/mysql/mysql.so'
for module DBD::mysql: ld.so.1: perl: fatal: relocation error: file
/export/home/root/.cpan/build/DBD-mysql-3.0002/blib/arch/auto/DBD/mysql/mysql.so:
symbol mysql_real_escape_string: referenced symbol not found at
/usr/perl5/5.8.4/lib/sun4-solaris-64int/DynaLoader.pm line 230.
 at (eval 1) line 3
Compilation failed in require at (eval 1) line 3.
Perhaps a required shared library or dll isn't installed where expected
 at t/00base.t line 38
t/00base.dubious

Test returned status 255 (wstat 65280, 0xff00)
DIED. FAILED tests 4-5
Failed 2/5 tests, 60.00% okay





On 12/7/05, Gleb Paharenko [EMAIL PROTECTED] wrote:

 Hello.



 I'm not a Solaris or DBD guru, so I could be wrong, but ldd output

 doesn't show that mysql.so loads libmysqlclient (however, it can be

 statically linked). There could be some problems with compiling

 MySQL clients which described more in detail at:

 http://dev.mysql.com/doc/refman/5.0/en/solaris.html

http://dev.mysql.com/doc/refman/5.0/en/perl-support-problems.html





 I've found a bit similar issue related to building php-mysql extension

 on MySQL forums. The person declares that he has solve the problem. May

 be it will help you:

 http://forums.mysql.com/read.php?52,9,9,quote=1`









 Jake Peavy [EMAIL PROTECTED] wrote:

 Yep, the objects appear to be ok including the offending
 libmtmalloc.so.1...

 

 # ldd /export/home/root/.cpan/build/DBD-mysql-3.0002
 /blib/arch/auto/DBD/mysql/mysql.so
 librt.so.1 =>        /lib/librt.so.1
 libcrypt_i.so.1 =>   /usr/lib/libcrypt_i.so.1
 libgen.so.1 =>       /lib/libgen.so.1
 libsocket.so.1 =>    /lib/libsocket.so.1
 libnsl.so.1 =>       /lib/libnsl.so.1
 libm.so.2 =>         /lib/libm.so.2
 libmtmalloc.so.1 =>  /usr/lib/libmtmalloc.so.1
 libc.so.1 =>         /lib/libc.so.1
 libaio.so.1 =>       /lib/libaio.so.1
 libmd5.so.1 =>       /lib/libmd5.so.1
 libmp.so.2 =>        /lib/libmp.so.2
 libscf.so.1 =>       /lib/libscf.so.1
 libdoor.so.1 =>      /lib/libdoor.so.1
 libuutil.so.1 =>     /lib/libuutil.so.1
 /platform/SUNW,Ultra-80/lib/libc_psr.so.1
 /platform/SUNW,Ultra-80/lib/libmd5_psr.so.1

 

 

 

 

 On 12/7/05, Mir Islam [EMAIL PROTECTED] wrote:

 

  hmm it appears that the module got compiled fine. Try doing:

  ldd /export/home/root/.cpan/build/DBD-mysql-3.0002

  /blib/arch/auto/DBD/mysql/mysql.so

 

  and see if all of the referenced objects are loading fine

 

  On 12/7/05, Jake Peavy [EMAIL PROTECTED] wrote:

  

   Since I didn't get a response from the perl list, and since Daryn

   received

   some responses to his similar question I thought I might see if you
 guys

   had

   any ideas on why I can't make test for DBD::mysql on Solaris.

  

   TIA,

   F

  

   -- Forwarded message --

   From: Jake Peavy [EMAIL PROTECTED]

   Date: Nov 29, 2005 11:37 PM

   Subject: DBD::mysql will make but won't test on Solaris10...

   To: [EMAIL PROTECTED]

  

   Hey yall, I'm having all kinds of problems trying to install
 DBD::mysql

   on

   my newly built Solaris 10 box.  I have installed the Sun CC compiler,

   and

   DBI installed fine along with a number of other CPAN modules.  I have

   installed mysql via the pkgadd package.

  

   Here's the output (this is only the results from the first
 test.  none

   of

   the tests passed 100% - there were TONS of failures.)

  

   note that there are some warnings thrown by make - duplicate
 functions?

   xarch=v9 not recognized?

  

   also, libmtmalloc.so.1 is located in /usr/lib.

  

   Thanks in advance,

   -jp

  

  

 CPAN.pm: Going to build C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz

  

   I will use the following settings for compiling and testing:

  

 cflags(mysql_config) = -I/opt/mysql/mysql/include -xO3 -mt

   -D_FORTEC_ -xarch=v9 -xc99=none

 embedded  (mysql_config) =

 libs  (mysql_config) = -xarch=v9 -L/opt/mysql/mysql/lib

   -lmysqlclient -lz -lposix4 -lcrypt -lgen -lsocket -lnsl -lm
 -lmtmalloc

 mysql_config  (guessed ) = mysql_config

 nocatchstderr (default ) = 0

 nofoundrows   (default ) = 0

 ssl   (guessed ) = 0

 testdb(default ) = test

 testhost  (default ) =

 testpassword  (default ) =

 testsocket(default

Fwd: DBD::mysql will make but won't test on Solaris10...

2005-12-07 Thread Jake Peavy
Since I didn't get a response from the perl list, and since Daryn received
some responses to his similar question I thought I might see if you guys had
any ideas on why I can't make test for DBD::mysql on Solaris.

TIA,
F

-- Forwarded message --
From: Jake Peavy [EMAIL PROTECTED]
Date: Nov 29, 2005 11:37 PM
Subject: DBD::mysql will make but won't test on Solaris10...
To: [EMAIL PROTECTED]

Hey yall, I'm having all kinds of problems trying to install DBD::mysql on
my newly built Solaris 10 box.  I have installed the Sun CC compiler, and
DBI installed fine along with a number of other CPAN modules.  I have
installed mysql via the pkgadd package.

Here's the output (this is only the results from the first test.  none of
the tests passed 100% - there were TONS of failures.)

note that there are some warnings thrown by make - duplicate functions?
xarch=v9 not recognized?

also, libmtmalloc.so.1 is located in /usr/lib.

Thanks in advance,
-jp


  CPAN.pm: Going to build C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz

I will use the following settings for compiling and testing:

  cflags(mysql_config) = -I/opt/mysql/mysql/include -xO3 -mt
-D_FORTEC_ -xarch=v9 -xc99=none
  embedded  (mysql_config) =
  libs  (mysql_config) = -xarch=v9 -L/opt/mysql/mysql/lib
-lmysqlclient -lz -lposix4 -lcrypt -lgen -lsocket -lnsl -lm -lmtmalloc
  mysql_config  (guessed ) = mysql_config
  nocatchstderr (default ) = 0
  nofoundrows   (default ) = 0
  ssl   (guessed ) = 0
  testdb(default ) = test
  testhost  (default ) =
  testpassword  (default ) =
  testsocket(default ) =
  testuser  (default ) =

To change these settings, see 'perl Makefile.PL --help' and
'perldoc INSTALL'.

Checking if your kit is complete...
Looks good
Unrecognized argument in LIBS ignored: '-xarch=v9'
Using DBI 1.48 (for perl 5.008004 on sun4-solaris-64int) installed in
/usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI/
Writing Makefile for DBD::mysql
cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
cp lib/Mysql.pm blib/lib/Mysql.pm
cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
cp lib/Mysql/Statement.pm blib/lib/Mysql/Statement.pm
cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
cc -c  -I/usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI/
-I/opt/mysql/mysql/include -xO3 -mt -D_FORTEC_ -xarch=v9 -xc99=none
-DDBD_MYSQL_INSERT_ID_IS_G
OOD -g  -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -xarch=v8 -D_TS_ERRNO
-xO3 -xspace -xildoff   -DVERSION=\"3.0002\" -DXS_VERSION=\"3.0002\" -KPIC
-I/usr/per
l5/5.8.4/lib/sun4-solaris-64int/CORE   dbdimp.c
/usr/bin/perl -p -e s/~DRIVER~/mysql/g
/usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI//Driver.xst >
mysql.xsi
/usr/bin/perl /usr/perl5/5.8.4/lib/ExtUtils/xsubpp  -typemap
/usr/perl5/5.8.4/lib/ExtUtils/typemap  mysql.xs > mysql.xsc && mv mysql.xsc
mysql.c
Warning: duplicate function definition 'do' detected in mysql.xs, line 224
Warning: duplicate function definition 'rows' detected in mysql.xs, line 559
cc -c  -I/usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI/
-I/opt/mysql/mysql/include -xO3 -mt -D_FORTEC_ -xarch=v9 -xc99=none
-DDBD_MYSQL_INSERT_ID_IS_G
OOD -g  -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -xarch=v8 -D_TS_ERRNO
-xO3 -xspace -xildoff   -DVERSION=\"3.0002\" -DXS_VERSION=\"3.0002\" -KPIC
-I/usr/per
l5/5.8.4/lib/sun4-solaris-64int/CORE   mysql.c
Running Mkbootstrap for DBD::mysql ()
chmod 644 mysql.bs
rm -f blib/arch/auto/DBD/mysql/mysql.so
LD_RUN_PATH=/opt/mysql/mysql/lib:/lib:/usr/lib /usr/bin/perl myld cc  -G
dbdimp.o mysql.o  -o blib/arch/auto/DBD/mysql/mysql.so
-L/opt/mysql/mysql/lib -lmy
sqlclient -lz -lposix4 -lcrypt -lgen -lsocket -lnsl -lm -lmtmalloc
chmod 755 blib/arch/auto/DBD/mysql/mysql.so
cp mysql.bs blib/arch/auto/DBD/mysql/mysql.bs
chmod 644 blib/arch/auto/DBD/mysql/mysql.bs
Manifying blib/man3/DBD::mysql.3
Manifying blib/man3/DBD::mysql::INSTALL.3
Manifying blib/man3/Mysql.3
Manifying blib/man3/Bundle::DBD::mysql.3
  /usr/local/bin/make   -- OK
Running make test
PERL_DL_NONLAZY=1 /usr/bin/perl -MExtUtils::Command::MM -e
test_harness(0, 'blib/lib', 'blib/arch') t/*.t
t/00base.install_driver(mysql) failed: Can't load
'/export/home/root/.cpan/build/DBD-
mysql-3.0002/blib/arch/auto/DBD/mysql/mysql.so'
for module DBD
::mysql: ld.so.1: perl: fatal: libmtmalloc.so.1 : DF_1_NOOPEN tagged object
may not be dlopen()'ed at
/usr/perl5/5.8.4/lib/sun4-solaris-64int/DynaLoader.pm line
 230.
 at (eval 1) line 3
Compilation failed in require at (eval 1) line 3.
Perhaps a required shared library or dll isn't installed where expected
 at t/00base.t line 38
dubious
Test returned status 255 (wstat 65280, 0xff00)
DIED. FAILED tests 4-5
Failed 2/5 tests, 60.00% okay


Re: DBD::mysql will make but won't test on Solaris10...

2005-12-07 Thread Jake Peavy
Yep, the objects appear to be ok including the offending libmtmalloc.so.1...

# ldd /export/home/root/.cpan/build/DBD-mysql-3.0002
/blib/arch/auto/DBD/mysql/mysql.so
librt.so.1 =>        /lib/librt.so.1
libcrypt_i.so.1 =>   /usr/lib/libcrypt_i.so.1
libgen.so.1 =>       /lib/libgen.so.1
libsocket.so.1 =>    /lib/libsocket.so.1
libnsl.so.1 =>       /lib/libnsl.so.1
libm.so.2 =>         /lib/libm.so.2
libmtmalloc.so.1 =>  /usr/lib/libmtmalloc.so.1
libc.so.1 =>         /lib/libc.so.1
libaio.so.1 =>       /lib/libaio.so.1
libmd5.so.1 =>       /lib/libmd5.so.1
libmp.so.2 =>        /lib/libmp.so.2
libscf.so.1 =>       /lib/libscf.so.1
libdoor.so.1 =>      /lib/libdoor.so.1
libuutil.so.1 =>     /lib/libuutil.so.1
/platform/SUNW,Ultra-80/lib/libc_psr.so.1
/platform/SUNW,Ultra-80/lib/libmd5_psr.so.1




On 12/7/05, Mir Islam [EMAIL PROTECTED] wrote:

 hmm it appears that the module got compiled fine. Try doing:
 ldd /export/home/root/.cpan/build/DBD-mysql-3.0002
 /blib/arch/auto/DBD/mysql/mysql.so

 and see if all of the referenced objects are loading fine

 On 12/7/05, Jake Peavy [EMAIL PROTECTED] wrote:
 
  Since I didn't get a response from the perl list, and since Daryn
  received
  some responses to his similar question I thought I might see if you guys
  had
  any ideas on why I can't make test for DBD::mysql on Solaris.
 
  TIA,
  F
 
  -- Forwarded message --
  From: Jake Peavy [EMAIL PROTECTED]
  Date: Nov 29, 2005 11:37 PM
  Subject: DBD::mysql will make but won't test on Solaris10...
  To: [EMAIL PROTECTED]
 
  Hey yall, I'm having all kinds of problems trying to install DBD::mysql
  on
  my newly built Solaris 10 box.  I have installed the Sun CC compiler,
  and
  DBI installed fine along with a number of other CPAN modules.  I have
  installed mysql via the pkgadd package.
 
  Here's the output (this is only the results from the first test.  none
  of
  the tests passed 100% - there were TONS of failures.)
 
  note that there are some warnings thrown by make - duplicate functions?
  xarch=v9 not recognized?
 
  also, libmtmalloc.so.1 is located in /usr/lib.
 
  Thanks in advance,
  -jp
 
 
    CPAN.pm: Going to build C/CA/CAPTTOFU/DBD-mysql-3.0002.tar.gz
 
  I will use the following settings for compiling and testing:
 
cflags(mysql_config) = -I/opt/mysql/mysql/include -xO3 -mt
  -D_FORTEC_ -xarch=v9 -xc99=none
embedded  (mysql_config) =
libs  (mysql_config) = -xarch=v9 -L/opt/mysql/mysql/lib
  -lmysqlclient -lz -lposix4 -lcrypt -lgen -lsocket -lnsl -lm -lmtmalloc
mysql_config  (guessed ) = mysql_config
nocatchstderr (default ) = 0
nofoundrows   (default ) = 0
ssl   (guessed ) = 0
testdb(default ) = test
testhost  (default ) =
testpassword  (default ) =
testsocket(default ) =
testuser  (default ) =
 
  To change these settings, see 'perl Makefile.PL --help' and
  'perldoc INSTALL'.
 
  Checking if your kit is complete...
  Looks good
  Unrecognized argument in LIBS ignored: '-xarch=v9'
  Using DBI 1.48 (for perl 5.008004 on sun4-solaris-64int) installed in
  /usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI/
  Writing Makefile for DBD::mysql
  cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
  cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
  cp lib/Mysql.pm blib/lib/Mysql.pm
  cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
  cp lib/Mysql/Statement.pm blib/lib/Mysql/Statement.pm
  cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
  cc -c  -I/usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI/
  -I/opt/mysql/mysql/include -xO3 -mt -D_FORTEC_ -xarch=v9 -xc99=none
  -DDBD_MYSQL_INSERT_ID_IS_G
  OOD -g  -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -xarch=v8 -D_TS_ERRNO
  -xO3 -xspace -xildoff   -DVERSION=\"3.0002\" -DXS_VERSION=\"3.0002\"
  -KPIC
  -I/usr/per
  l5/5.8.4/lib/sun4-solaris-64int/CORE   dbdimp.c
  /usr/bin/perl -p -e s/~DRIVER~/mysql/g
  /usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI//Driver.xst 
  mysql.xsi
  /usr/bin/perl /usr/perl5/5.8.4/lib/ExtUtils/xsubpp  -typemap
  /usr/perl5/5.8.4/lib/ExtUtils/typemap  mysql.xs  mysql.xsc  mv
  mysql.xsc
  mysql.c
  Warning: duplicate function definition 'do' detected in mysql.xs, line
  224
  Warning: duplicate function definition 'rows' detected in mysql.xs, line
  559
  cc -c  -I/usr/perl5/site_perl/5.8.4/sun4-solaris-64int/auto/DBI/
  -I/opt/mysql/mysql/include -xO3 -mt -D_FORTEC_ -xarch=v9 -xc99=none
  -DDBD_MYSQL_INSERT_ID_IS_G
  OOD -g  -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -xarch=v8 -D_TS_ERRNO
  -xO3 -xspace -xildoff   -DVERSION=\"3.0002\" -DXS_VERSION=\"3.0002\"
  -KPIC
  l5/5.8.4/lib/sun4-solaris-64int/CORE   mysql.c
  Running Mkbootstrap for DBD::mysql ()
  chmod 644 mysql.bs
  rm -f blib/arch/auto/DBD/mysql/mysql.so
  LD_RUN_PATH=/opt/mysql/mysql/lib:/lib

Re: encrypt data

2005-11-26 Thread Jake Peavy
Also, do you know if there is any way at all to do the tiniest amount of
research on my own to find out even the most basic information about MySQL
without bothering an entire worldwide mailing list with my trivial
questions?


Re: Named Pipe for General Query Log

2005-11-14 Thread Jake Peavy
Hey, I sent this a while ago, but never received a response.

This still seems to exist under 5.0.15-standard (at least under
mysql-standard-5.0.15-linux-i686-glibc23)

Can anyone from MySQL comment on this or should I open it as a bug?

Thanks,
JP

On 6/11/05, Jake Peavy [EMAIL PROTECTED] wrote:

 Has anyone been able to use a named pipe for their general query log
 (or any of the other logfiles for that matter)?

 I tried the following as user mysql:

 rm /var/lib/mysql/myhost.log
 mkfifo -m 0660 /var/lib/mysql/myhost.log

 but the mysql server would not start.

 I think it would be very useful to be able to use a FIFO for this so I
 can use the log for debugging/info without having to create a log
 rotation script.

 I am running 5.0.2-alpha-standard on linux on i386.

 Thanks.
 F



Read Only Data Directory

2005-09-21 Thread Jake Peavy
Hey yall,
 I'm trying to use a DVD (containing a number of packed MyISAM tables in a
directory) as the datadir in my my.cnf but I can't start the server: mysqld
is trying to write to the datadir - error log, etc - and when it can't the
server craters.
 Is there anyway of using a readonly directory as the datadir? Are there
other options I can specify in my.cnf where I can tell mysqld to write the
error files/etc?
 TIA,
Jake


Re: Read Only Data Directory

2005-09-21 Thread Jake Peavy
Hey, sorry everyone - I figured it out.
 I had to add the following to my.cnf
 read-only
 skip-innodb
 I guess I shoulda kept googling before asking... mybad.

/JP

 On 9/21/05, Jake Peavy [EMAIL PROTECTED] wrote:

 Hey yall,
  I'm trying to use a DVD (containing a number of packed MyISAM tables in a
 directory) as the datadir in my my.cnf but I can't start the server:
 mysqld is trying to write to the datadir - error log, etc - and when it
 can't the server craters.
  Is there anyway of using a readonly directory as the datadir? Are there
 other options I can specify in my.cnf where I can tell mysqld to write the
 error files/etc?
  TIA,
 Jake



Re: Wrf files: how can I read them?

2005-06-17 Thread Jake Peavy
On 6/17/05, asteddy [EMAIL PROTECTED] wrote:
 Hello,
 I have found Mysql Performance Tuning Seminar available for download, but I 
 don't know how to see it. 

Use the WebEx player at http://www.meetingcenter.net/record_play.htm

-JP




Re: alter only an enum label

2005-06-17 Thread Jake Peavy
On 6/16/05, Gabriel B. [EMAIL PROTECTED] wrote:
 If i have a table with about 800M records. and one of the fields is a
 enum(a, b, c) and i want to change it to enum(a,b,x) 

My understanding (such as it is) is that the best way to do this is to add a 
second column with enum('a','b','x') and set the value using the integer 
value of the first column as follows. (adding 0 casts enum as integer)

ALTER TABLE table ADD new_column enum('a','b','x');
UPDATE table SET new_column = old_column + 0;
ALTER TABLE table DROP old_column;

Execution time is dependent on the speed of the update, but it would need to 
read every row.
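An illustration (in Python, not MySQL itself; label lists are from the post) of why the `old_column + 0` cast preserves values across the relabel: an ENUM is stored as a 1-based index into its label list, so copying the integer index re-resolves against the new list.

```python
# ENUM('a','b','c') stores 'c' as index 3; assigning that integer to the
# new ENUM('a','b','x') column resolves index 3 to 'x' -- exactly the
# relabeling the ALTER/UPDATE/DROP sequence above achieves.
old_labels = ["a", "b", "c"]
new_labels = ["a", "b", "x"]

def relabel(value: str) -> str:
    idx = old_labels.index(value) + 1  # enum index is 1-based
    return new_labels[idx - 1]         # same index, new label list

assert relabel("c") == "x"
assert relabel("a") == "a"
```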


Named Pipe for General Query Log

2005-06-11 Thread Jake Peavy
Has anyone been able to use a named pipe for their general query log
(or any of the other logfiles for that matter)?

I tried the following as user mysql:  

rm /var/lib/mysql/myhost.log
mkfifo -m 0660 /var/lib/mysql/myhost.log

but the mysql server would not start.

I think it would be very useful to be able to use a FIFO for this so I
can use the log for debugging/info without having to create a log
rotation script.

I am running 5.0.2-alpha-standard on linux on i386.

Thanks.
F
