cross-compiling mysql
I am trying to compile MySQL 5.4 for Windows on Fedora using the cross compiler, and I am stuck on this error: checking for termcap functions library... configure: error: No curses/termcap library found. I looked around and found that I should install ncurses-devel (ncurses-devel-5.7-2.20090207.fc11.x86_64.rpm), but even after installing it, configure still fails with the same error. Thanks, Seth
Need a Brief Overview - SSL Connections
Howdy, I've read through all of the MySQL docs on SSL, but since I'm not very familiar with SSL, I just need a brief overview of a few things that aren't crystal clear to me. 1) Which SSL options are *required*? It seems that only ssl-key is _always_ required for the client to connect to the server. If REQUIRE X509 is set, then ssl-cert is required as well in order to authenticate who the actual client is, right? 2) The options I don't understand are ssl-ca/ssl-capath. Why would the client specify a certificate authority? Is this the authority (or authorities) used to authenticate the server's certificate? Is there a platform default for this value? I'm not used to having to specify a list of authorities for other programs to validate certificates (such as with email). 3) How do I know if the server/client authentication (validating the certificate against the given authorities) failed? Do I just get a vague SSL connection error back from MySQL and that's it? I think that's mostly it. Thanks, -- Seth Willits -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Upgrading databases?
Hello, I'm having trouble copying a database from MySQL 4.1.22 to 5.1.19-beta. Both are FreeBSD i386-based machines. I have run the following commands:

mysqlcheck --check-upgrade --all-databases --auto-repair
mysql_fix_privilege_tables

Both executed with no problems. (mysqlcheck reported OK for all tables.) When I try to access any of the tables, I get this:

mysql> SELECT * FROM users;
ERROR 1034 (HY000): Incorrect key file for table 'users'; try to repair it

So I tried to repair it:

mysql> REPAIR TABLE users;
+----------------------------+--------+----------+--------------------------------------------------------+
| Table                      | Op     | Msg_type | Msg_text                                               |
+----------------------------+--------+----------+--------------------------------------------------------+
| seth_icsx_mands_live.users | repair | error    | Incorrect key file for table 'users'; try to repair it |
+----------------------------+--------+----------+--------------------------------------------------------+
1 row in set, 1 warning (0.10 sec)

Running REPAIR TABLE users doesn't seem to have any effect, because the problem persists. I have also tried to run mysql_upgrade, with no success:

# mysql_upgrade --basedir=/usr/local --datadir=/var/db --verbose
Looking for 'mysql' in: mysql
FATAL ERROR: Can't find 'mysql'

I have tried all different combinations for the two directory options with no luck. All tables are MyISAM. Can anyone shed some light on what I'm supposed to do? Thanks, Seth
Re: Upgrading databases?
On Jun 21, 2007, at 12:21 PM, Gerald L. Clark wrote, in reply to my earlier message about the 'Incorrect key file' error on seth_icsx_mands_live.users after copying from MySQL 4.1.22 to 5.1.19-beta:

> Shut the server down and run myisamchk on users.MYI

Sadly, no success. I tried running it two different ways:

# myisamchk -e -r users
- recovering (with sort) MyISAM-table 'users'
Data records: 1283
- Fixing index 1
Found block with too small length at 101420; Skipped

# myisamchk -c -r users
- recovering (with sort) MyISAM-table 'users'
Data records: 1283
- Fixing index 1

I still get the same error when I try to access the table. Thanks, Seth
Limiting the number of fulltext indexes
I have a table (example 1):

tblRegulations
  fldCountryID (int)
  fldRegulationType (int)
  fldRegulationUpdated (date)
  fldRegulation (text)

Each country can have up to 7 different regulation types. I have designed the table like this so I can have one fulltext index on fldRegulation, and thus search all the regulations for all countries in one hit. If I designed the table like this (example 2):

tblRegulations
  fldCountryID (int)
  fldRegulation_1 (text)
  fldRegulation_1_Updated (date)
  fldRegulation_2 (text)
  fldRegulation_2_Updated (date)
  fldRegulation_3 (text)
  fldRegulation_3_Updated (date)
  …
  fldRegulation_7 (text)
  fldRegulation_7_Updated (date)

I would need 7 fulltext indexes, one for each fldRegulation_X. And if I wanted to search any combination of regulation types, I would need 2^7 fulltext indexes! So I have decided my table design will be the first example. Now for my problem: I want to return a result from the first example table like this:

Results
  fldCountryID (int)
  fldRegulation_1_Updated (date)
  fldRegulation_2_Updated (date)
  fldRegulation_3_Updated (date)
  fldRegulation_4_Updated (date)
  fldRegulation_5_Updated (date)
  fldRegulation_6_Updated (date)
  fldRegulation_7_Updated (date)

How would I write such a query? Am I trying to do the impossible? I want only one fulltext index, but it seems like I can only get the summary results if I design my table as in example 2. I will greatly appreciate any advice. Best regards, Seth
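[Ed. note] The pivot asked about above can usually be done in one query with a MAX(CASE ...) expression per regulation type, grouped by country. A minimal sketch, illustrated with SQLite via Python's sqlite3 rather than MySQL (MySQL accepts the same construct), using hypothetical simplified table and column names; the CASE columns would extend up to type 7 in the real table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE regulations (
    country_id INTEGER,
    reg_type   INTEGER,   -- 1..7 in the post's schema
    updated    TEXT,
    body       TEXT)""")
con.executemany(
    "INSERT INTO regulations VALUES (?, ?, ?, ?)",
    [(1, 1, "2009-01-05", "text a"),
     (1, 2, "2009-02-10", "text b"),
     (2, 1, "2009-03-01", "text c")])

# One row per country, one "updated" column per regulation type.
# CASE without ELSE yields NULL, and MAX() ignores NULLs, so types
# with no regulation come back as NULL. Shown for types 1-3 only.
rows = con.execute("""
    SELECT country_id,
           MAX(CASE WHEN reg_type = 1 THEN updated END) AS reg1_updated,
           MAX(CASE WHEN reg_type = 2 THEN updated END) AS reg2_updated,
           MAX(CASE WHEN reg_type = 3 THEN updated END) AS reg3_updated
    FROM regulations
    GROUP BY country_id
    ORDER BY country_id""").fetchall()
```

This keeps the single-fulltext-index design of example 1 while still returning the example-2-shaped summary row.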
unique by field issue
I constantly bump up against this issue and am looking for a good solution. I want a query that will return multiple rows, but only one row per unique identifier in one field. For instance, if I have the following table:

Name | Food    | Eat_Date
Bob  | Hot Dog | 2005-06-25
Jan  | Pizza   | 2005-06-27
Jim  | Salad   | 2005-05-25
Bob  | Soup    | 2005-06-03
Ann  | Toast   | 2005-06-13
Jim  | Pizza   | 2005-06-28

I want to be able to pull the most recent food by each name without running multiple queries. I need a query that would give the results:

Bob | Hot Dog | 2005-06-25
Jan | Pizza   | 2005-06-27
Ann | Toast   | 2005-06-13
Jim | Pizza   | 2005-06-28

Anyone have ideas of how to do this in one query? Thanks, Seth
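[Ed. note] One common answer to this groupwise-maximum question is to join the table against a derived table holding each name's latest date. A minimal sketch, illustrated with SQLite via Python's sqlite3 (in MySQL, derived tables in FROM require 4.1 or later; hypothetical table name "meals"):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE meals (name TEXT, food TEXT, eat_date TEXT)")
con.executemany("INSERT INTO meals VALUES (?, ?, ?)", [
    ("Bob", "Hot Dog", "2005-06-25"),
    ("Jan", "Pizza",   "2005-06-27"),
    ("Jim", "Salad",   "2005-05-25"),
    ("Bob", "Soup",    "2005-06-03"),
    ("Ann", "Toast",   "2005-06-13"),
    ("Jim", "Pizza",   "2005-06-28"),
])

# The derived table t holds each name's latest date; joining back on
# (name, date) recovers the full row for that latest meal.
rows = con.execute("""
    SELECT m.name, m.food, m.eat_date
    FROM meals m
    JOIN (SELECT name, MAX(eat_date) AS latest
          FROM meals GROUP BY name) t
      ON m.name = t.name AND m.eat_date = t.latest
    ORDER BY m.name""").fetchall()
```

One caveat: if a name has two rows sharing its maximum date, both rows come back, so a tiebreaker column may be needed in practice.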
mysqlbinlog troubles.
Hello, I'm trying to restore a table from a full backup and then a binlog. This is a test table set up specifically for this. I have 34 rows in the full backup and another 5 in the binlog. I find the date of the last insert and use this as the --start-datetime for mysqlbinlog. The problem is that I always get the same error: table foo already exists. Shouldn't it just be updating from the start-datetime and not trying to create/drop the already-existing table? Sorry if this is a simple one; I've been searching on this for a while now.

# /usr/local/mysql/bin/mysqlbinlog --database=backuptest --start-datetime="2005-03-30 15:00:00" /data/mysql/logs/dbne1-bin.173 | /usr/local/mysql/bin/mysql -u seth -p backuptest
Enter password:
ERROR 1050 at line 10046: Table 'foo' already exists

(Note the quotes added around the datetime; without them the shell splits the value at the space, which would likely make the replay start far earlier than intended.) thanks. -seth -- Seth Itschner SparkNotes [EMAIL PROTECTED] V: (212) 633-3555 F: (212) 727-4827
problem with distinct not solved by group by
I have three tables: reviews, users, movies. I am trying to select the latest 4 reviews for DIFFERENT movies. I can use the following query:

SELECT reviews.movies_id, movies.movie_title, users.name, reviews.rating, reviews.post_d
FROM reviews, users, movies
WHERE reviews.user_id = users.user_id AND reviews.movies_id = movies.movie_id
ORDER BY post_d DESC LIMIT 4

However, this can return the same movie twice if any of the last 4 reviews are of the same movie. DISTINCT is no help, because I only want distinct movies_id values, not distinct whole rows. GROUP BY movies_id is no help, because it takes the oldest review for each unique movies_id. Does anyone have an idea how I can take the most recent 4 rows that each have a different movies_id, without doing extra processing work in PHP? Thanks, Seth
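[Ed. note] One possible approach is a correlated subquery that keeps only each movie's newest review, and then applies ORDER BY ... LIMIT 4 to those survivors. A minimal sketch, illustrated with SQLite via Python's sqlite3 and a hypothetical single-table simplification of the reviews table (MySQL needs 4.1+ for subqueries; the joins to users and movies would be added back around this):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE reviews (movies_id INTEGER, rating INTEGER, post_d TEXT)")
con.executemany("INSERT INTO reviews VALUES (?, ?, ?)", [
    (1, 5, "2005-01-01"), (1, 4, "2005-01-05"),
    (2, 3, "2005-01-03"), (3, 2, "2005-01-04"),
    (4, 5, "2005-01-02"), (5, 1, "2005-01-06"),
])

# The correlated subquery keeps a row only if it is its movie's newest
# review; the outer ORDER BY / LIMIT then picks the 4 newest of those,
# guaranteeing 4 distinct movies_id values.
rows = con.execute("""
    SELECT movies_id, rating, post_d
    FROM reviews r
    WHERE post_d = (SELECT MAX(post_d) FROM reviews
                    WHERE movies_id = r.movies_id)
    ORDER BY post_d DESC
    LIMIT 4""").fetchall()
```

Movie 4's review (2005-01-02) is its movie's newest but only the fifth-newest overall, so it drops out at the LIMIT step.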
Re: auth and ssh tunnel
I have successfully used this technique on a RedHat server. (I have also seen it fail on Mac OS X running VirtualPC.) What are you using to access MySQL? The command line? (Make sure it's configured to use port 3306 and not looking for a local socket file.) What are the errors given? Can you telnet to localhost:3306? Seth On Sun, 10 Oct 2004 17:33:18 -0500, Carl Karsten [EMAIL PROTECTED] wrote: I am trying to connect to MySQL with an SSH tunnel that terminates on the same box mysqld is running on. I would expect that I do L3306:localhost:3306 and grant access to [EMAIL PROTECTED] This doesn't seem to work. I got this to work on a test box: L3306:my.sql.IP:3306 with access granted to [EMAIL PROTECTED], but I don't really want to expose the box like that. I realize that a firewall could block any real exposure, but I don't have that kind of authority over the box. Is there a howto for this kind of setup? Carl Karsten http://www.personnelware.com/carl/resume.html
Re: php pages not inserting data into table
Could this be a 'register_globals' issue? (Check your php.ini file.) Are you sure that the queries you're sending are correct? Is mysql_query() returning any errors? Some more information, please! Seth On Fri, 1 Oct 2004 23:35:09 -0500, tom miller [EMAIL PROTECTED] wrote: I'm not sure if this is related to SuSE 9.1 or what; I'm beginning with MySQL and PHP. I'm running: MySQL 4.0.21-standard, SuSE 9.1 kernel version 2.6.5-7.10b-default, PHP version 4.3.4 (default install), Apache 2.0.49 (prefork). I created an address book using PHP, and when I go through and fill in all the required fields and click on submit, it just resets the form. I was previously using this on Mandrake 9.2 and it worked flawlessly; however, I was using Apache 1.3. I figured it was something wrong in my PHP scripting, but I took a sample calculator from php.net and it too was not passing data to MySQL. I have searched Google many times over, and different forums, and never found my answer, or I'm going in the wrong direction. I was looking into the MySQL modules that Apache uses, but that led to dead ends. I looked into how PHP handles the data, and that too led me to a dead end. I wiped my machine and reinstalled the OS, figuring I had missed something or there was a corrupt file, but that apparently was not the case. If someone could make some helpful suggestions as to what's causing this, I would be much appreciative. thanks
Re: the table is read only
Hi, I just encountered a similar problem on one of my production servers this morning. (I'm still investigating the cause.) After a quick bit of Google-searching, this solved my problem:

mysqladmin -u username -p flush-tables

By the way: all directories in /var/lib/mysql should have 700 permissions (owned by the mysql user), and everything within those directories should be 660 (owned by the mysql user and mysql group). (This was on a FreeBSD 4.8 server running MySQL Server 3.23.58.) Hope this helps, Seth On Wed, 22 Sep 2004 08:36:21 +0800, [EMAIL PROTECTED] wrote: Hi, I have installed MySQL on AIX 5.2. The edition is MySQL-3.23.58-2.aix5.1.ppc, but when I used phpMyAdmin to manage MySQL, it told me: #1036 - Table 'gbook' is read only, even though I have granted the data dir (/var/lib/mysql) 777 permission. How do I resolve it? Help me please. Thanks a lot. Yours, LiRui [EMAIL PROTECTED] 2004-09-20
Update select fields in all rows
I am fairly new to MySQL, and I have been trying to work out a way to set selected numeric fields back to 0, sort of like a reset, across all rows of a table. The table is ladder_1, and I am trying to have the fields rank, lrank, wins, loss, games, percent, streak, points, and kills updated back to 0 on every row. If anyone could email me at [EMAIL PROTECTED] and assist me (or ask for a better explanation if this isn't clear), it would be greatly appreciated. -Seth www.SocomZone.com
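[Ed. note] For what it's worth, resetting those columns on every row is a single UPDATE with no WHERE clause. A minimal sketch, illustrated with SQLite via Python's sqlite3; the column names come from the post, while the player column and sample values are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ladder_1 (
    player  TEXT,
    rank    INTEGER, lrank  INTEGER, wins   INTEGER, loss  INTEGER,
    games   INTEGER, percent REAL,   streak INTEGER,
    points  INTEGER, kills  INTEGER)""")
con.execute(
    "INSERT INTO ladder_1 VALUES ('a', 3, 2, 10, 4, 14, 71.4, 2, 120, 30)")

# No WHERE clause, so every row in the table gets the reset values.
con.execute("""UPDATE ladder_1
    SET rank = 0, lrank = 0, wins = 0, loss = 0, games = 0,
        percent = 0, streak = 0, points = 0, kills = 0""")

row = con.execute("SELECT rank, kills FROM ladder_1").fetchone()
```

The same UPDATE statement should work unchanged against a MySQL table of that era; adding a WHERE clause would restrict the reset to selected rows instead.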
wait_timeout
The manual describes wait_timeout thusly: The number of seconds the server waits for activity on a non-interactive connection before closing it. On thread startup, the session wait_timeout value is initialized from the global wait_timeout value or from the global interactive_timeout value, depending on the type of client (as defined by the CLIENT_INTERACTIVE connect option to mysql_real_connect()). See also interactive_timeout. What is defined as 'activity'? For example, if the client connection issues a query which requires 6 minutes for the server to process, will it timeout with a wait_timeout of 5m, or does wait_timeout understand that the query has not yet been completed?
Problems maintaining large high-availability search
I have been maintaining a mod_perl/MySQL 4 web search engine with about 3M records, which gains and drops about 100k records daily. It runs on a dual Opteron 242 system with 8GB RAM, a 15k SCSI drive, and SUSE Professional for AMD64. It recently grew to this size (from about 1M records), and I am encountering some problems scaling.

1. The timeout is set to 5 min. Because of the number of queries, there are a lot of unused HTTP processes that linger with connections, and seemingly the only way to keep MySQL connections available is to keep timeouts short. However, I have updates that take over an hour at a time, and the 5 min timeout will kill the update process. If I change the timeout to 2 hours, Apache will eat up all the connections. I'm a little confused by the behaviour. Any suggestions appreciated. It seems to me the only answer is to maintain a completely separate MySQL server with a 2 hour timeout on the same box, hotcopy the database, do the updates, and hotcopy it back, which I would *really* prefer not to do. There has got to be an easier way - any suggestions?

2. The other problem I have is that because I am rotating so many records daily, and the queries are so complex and the tables/indexes so large, I want to keep the db OPTIMIZEd, but running an OPTIMIZE TABLE on this database also takes over an hour and creates timeouts on queries. At the same time, I want to get the optimize done as soon as possible. Again, a separate server would solve this problem, but I am really struggling for a way to not do that - as much as I love MySQL, I really hate installing, configuring, and maintaining parallel permissions on 2 servers. Especially since MySQL needs maintenance so infrequently, every time I need to configure it, it is like learning the entire process all over again from scratch, and I always goof on some little detail that doesn't raise its ugly head right away. If there is another option (or maybe I shouldn't be OPTIMIZING every day?), suggestions appreciated.
PS I do not have any problems with the query performance.
Windows bug with ALTER TABLE and LOCK
I have been experiencing a small problem with altering a table once it has been locked. If you alter a table (adding a column) while that table is locked, subsequent calls to that table result in MySQL erroring, saying the table was not locked. I have reproduced this under Windows 2000, running the following versions of MySQL Server: 3.23.58-nt, 4.0.3-beta-nt, and 4.0.18-nt. (The bug does NOT show itself under any Linux versions.) Here is some code to demonstrate the problem:

CREATE DATABASE IF NOT EXISTS locktest;
USE locktest;
DROP TABLE IF EXISTS locktest;
CREATE TABLE locktest (a TINYINT, b TINYINT);
LOCK TABLE locktest WRITE;
INSERT INTO locktest SET a=1, b=2;
ALTER TABLE locktest ADD c TINYINT;
# This next line fails under Windows
INSERT INTO locktest SET a=2, b=3, c=4;
UNLOCK TABLES;

All runs well until the second INSERT line, which returns this message:

ERROR 1100: Table 'locktest' was not locked with LOCK TABLES

Has anyone else experienced this problem? (I can work around it by faking a locking mechanism of my own, but I would really rather not.) Thank you, Seth
Re: 64-bit Linux MySQL and ramdisks
If you are using MySQL 4.x, using a RAM disk isn't going to speed up similar SELECTs very much, because it caches them. If you are doing a lot of different SELECTs, then it will help. It's a web search engine with 10-15 search options, so yes, the SELECTs will vary, and very often every row will need to be examined. Also, as I mentioned, the ramdisk will be used to speed up a very long insert/optimize batch. Thanks for the tip on the tempfile! There are hardware-based RAM cards that don't steal memory from the OS; they would solve the problem but are still fairly expensive. Yeah, they also defeat the purpose of using Opteron chips with on-chip memory controllers for high memory bandwidth. The only advantage of those cards that I know of is that they have a separate power supply, so they are less risky, but we will be backing each update up to disk anyway. I understand Linux's ramdisk implementation is stable, and we are also using ECC RAM.
64-bit Linux MySQL and ramdisks
After some research, I chose a dual Opteron MySQL server for my new search box. As for storage: I only update the tables once a day, so for maximum performance I plan on putting my data directory on a Linux ramdrive and backing it up to the hard disk after each update. If anyone has any recommendations for dual-Opteron 64-bit Linux distributions, or any experience or tips on running MySQL on a ramdrive, I'd like to hear them. Thanks, Seth
Fw: 64-bit Linux MySQL and ramdisks
- Original Message - From: Seth Brundle [EMAIL PROTECTED] To: Eric [EMAIL PROTECTED] Sent: Wednesday, January 28, 2004 5:50 PM Subject: Re: 64-bit Linux MySQL and ramdisks I am curious, is there really that big of a benefit to using a ramdisk this way? It depends on your usage. We need to do a great deal of unindexable %wildcard% text searches on every row (no, we can't use FULLTEXT, as non-word substrings are a requirement), a huge daily insert batch which we want to complete ASAP, followed by a very long OPTIMIZE TABLE which is also ASAP. We also have plenty of RAM to host the table on ramdisk and not worry about disk swapping. So yes, we expect to save a good deal of time by eliminating hard disk latency on db operations. We have a daily batch process which needs to be finished in 10 wall-clock hours and takes about 90 parallel-process-hours to complete (and is expected to grow significantly next month), so we are big on optimizing every link in the chain. This portion is a single-process operation and may save us up to 2 wall-clock hours during the update, plus the query speedup. If your tables can be effectively indexed, your query times are acceptable, and you can save yourself some RAM by only tweaking MySQL, that's preferable. Also, if you are doing updates throughout the day, you wouldn't want to use ramdisks, as you would need to back up the table after every update since RAM is volatile. Neither is the case for us. This is new for us, though, and all theory, based on some recommendations we received from other people who have told us MySQL performs very well on ramdisks, and on benchmarks of memory throughput on Opteron chips. Since our tables are only 2GB in size, there is no risk in our trying it out. I will post results.
re: mysqlhotcopy
I'm running MySQL 3.23 on OS X 10.2 and I've reached a bit of an impasse with mysqlhotcopy. After having to re-install DBD and DBI, I keep getting the error: DBD::mysql::db do failed: You have an error in your SQL syntax near '` READ, `trypanofan`.``construct`` READ, `trypanofan`.``dapi`` READ, `trypanofan' at line 1 at /usr/bin/mysqlhotcopy line 438. There appears to be a workaround in the manual (http://www.mysql.com/doc/en/mysqlhotcopy.html) which indicates it's the method of quotation and suggests modifying 'quote_names'; unfortunately I have no idea where 'quote_names' is or where it's supposed to be. Any suggestions would be lovely... thanks -s -- __ Seth Redmond DNA resource and Database Curator Wellcome Trust Laboratories for Molecular Parasitology Department of Biological Sciences Imperial College London SW7 2AZ [EMAIL PROTECTED] __
Re: load_file
Sorry, should have said: I get the record created, but with a NULL value in the blob, as you would for a file larger than the max allowed. But I've checked numerous times (and have gone up to LONGBLOB) and still can't load above around 1MB. I should also mention that it's running on the same machine, so I shouldn't be having any problems from the server. -s On Friday, June 27, 2003, at 02:06 PM, [EMAIL PROTECTED] wrote: Seth Redmond [EMAIL PROTECTED] wrote: I've been having a few problems loading blobs via the LOAD_FILE function. I've tried things along the lines of: UPDATE table SET binaryRecord=LOAD_FILE('/Users/shared/filename.mov') WHERE refNo=1; It only seems to happen when loading something over 1-1.5MB, as I've been able to load smaller files no problem but can't do anything with files of around 4MB. I'm using MEDIUMBLOB (and have tried LONGBLOB); max_allowed_packet is set at 16MB. The files are readable by all, and I have file permissions. Did you get any error message or what?
re: load_file
I've been having a few problems loading blobs via the LOAD_FILE function. I've tried things along the lines of: UPDATE table SET binaryRecord=LOAD_FILE('/Users/shared/filename.mov') WHERE refNo=1; It only seems to happen when loading something over 1-1.5MB, as I've been able to load smaller files no problem but can't do anything with files of around 4MB. I'm using MEDIUMBLOB (and have tried LONGBLOB); max_allowed_packet is set at 16MB. The files are readable by all, and I have file permissions. I'm basically out of ideas... -s
Re: Could we make this a web discussion forum?
OK, I'm closing this thread, as Dan has been the only one who has provided a link to exactly what I was looking for. I still can't post through Google Groups, but at least I don't have to download 40 email messages a day now. http://www.gmane.org Thanks Dan! Dan Nelson [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] In the last episode (Mar 31), Seth Brundle said: First people, please don't get all angry about my suggestion... please hear me out... I really hate on-line forums. They're difficult to track because I must remember to visit them daily. OK, this is the first issue: 95% of people who need MySQL info do not need it daily. Mailing lists are a poor tool for them. And I can't use them while disconnected. H...seems like some Yahoo! employee has never used Yahoo! Groups ;) There is plenty of web discussion software (like Yahoo! Groups, although I'm not necessarily recommending that one) where you can still have all of the features of a mailing list, yet also have all the features of web discussion, so there really is no reason to have a mailing list only, except to be old-school. Yahoo groups are really mailing lists, though, so you wouldn't buy anything by moving to them. Their web interface is also one of the worst, imho; I preferred the original Findmail interface (circa '98, before they got bought by eGroups, before /they/ got bought by Yahoo :) I believe there is already a read-only NNTP mirror of this group. The NNTP mirror is actually a double-edged sword: yes, it takes care of archiving and searching (via Google Groups), I will agree, but it misleads infrequent users into thinking posting actually does something meaningful. gmane (http://www.gmane.org or nntp://news.gmane.org) mirrors 10 MySQL mailing lists, and is a 2-way system, so you can post from it. The preferred access method is with a newsreader, but there's also a web interface.
-- Dan Nelson [EMAIL PROTECTED]
Could we make this a web discussion forum?
I really hate mailing lists; they are dinosaurs for this type of thing. A web discussion board is a much more powerful and flexible tool for this type of community. Thread tracking, archiving, searching, not having to download 50 emails a day just to monitor a couple of threads... c'mon, it's 2003! Can't we convert this into a discussion board, or better yet just make the usenet group mirror postable from Google Groups? There are 4 lists like this one where I have to continuously subscribe and unsubscribe throughout the year when I want to participate. Also, emails get mirrored on Google Groups for convenient spam harvesting; it's really a pain in the butt.
Re: Could we make this a web discussion forum?
First people, please don't get all angry about my suggestion... please hear me out... I really hate on-line forums. They're difficult to track because I must remember to visit them daily. OK, this is the first issue: 95% of people who need MySQL info do not need it daily. Mailing lists are a poor tool for them. And I can't use them while disconnected. H...seems like some Yahoo! employee has never used Yahoo! Groups ;) There is plenty of web discussion software (like Yahoo! Groups, although I'm not necessarily recommending that one) where you can still have all of the features of a mailing list, yet also have all the features of web discussion, so there really is no reason to have a mailing list only, except to be old-school. A web discussion board is a much more powerful and flexible tool for this type of community. Hmm. E-mail has been around for a long, long time. I'm skeptical of this claim. Just because it's been around longer, does that mean it's better? Or that it should never be augmented? I'm just saying that usenet groups and discussion forums solved the problem of belonging to too many mailing lists. If every package I used involved a mailing list for discussion, I would need to track about 40 of them. Right now the only mailing lists I really am stuck using are MySQL and ImageMagick. ImageMagick is the worst, as there is no archive (well, there are monthly gz's - big help - gah) and it requires subscription. If you want to ask a single question, you are obliged to subscribe and suck down all the daily traffic (more traffic than MySQL) until your thread is dead. I guess my hatred for mailing groups has more to do with that list than this one. Every time I want to get involved, I have to go figure out: OK, hmm, trying to remember, do I need to subscribe? What's that process? Thread tracking, archiving, searching E-mail gives you all of those.
Unless you have the entire history of the mailing list downloaded to your mail spool and have a very powerful, feature-rich, and most importantly very fast email search tool, you can't both search archives and post messages with the same piece of software. Email is OK for thread tracking if subject integrity is maintained, but most email clients can't reliably collapse and expand threads. Can't we convert this into a discussion board, or better yet just make the usenet group mirror postable from Google Groups? I believe there is already a read-only NNTP mirror of this group. The NNTP mirror is actually a double-edged sword: yes, it takes care of archiving and searching (via Google Groups), I will agree, but it misleads infrequent users into thinking posting actually does something meaningful. There are 4 lists like this one where I have to continuously subscribe and unsubscribe throughout the year when I want to participate. Why must you subscribe and unsubscribe to participate? There's no membership requirement to post here. And you can always read the messages on-line. There are at least 2 web archives of this group. You've seen those, right? I know that Google has. Sorry, I didn't realize that I didn't need to subscribe to post, but again, this is one of those rules for each mailing list that you must remember, which is a PIA. Yes, if you live here, it is not a problem of remembering, but 95% of people who use MySQL don't require that level of involvement. Also, emails get mirrored on Google Groups for convenient spam harvesting; it's really a pain in the butt. How would using a web forum help that? Because most modern web forum software has an option to hide your email address.
Also, a web forum has a lot of other advantages:
- Easily break discussion into sub-categories (especially for MySQL, where SQL questions are combined with everything else; it would be nice to break that out)
- Sticky posts with general, important, and first-timer information
- Better threading, allowing subject changes and collapsing/expanding threads
- Web forums have the search box and a link menu to other resources at the top (source downloads, documentation, related sites), rather than needing to switch between web and email applications to integrate resources
- Don't break long URLs
- Heavily quoted emails are easier to read
- Thread listings can show number of replies, views, age, even have ratings and show the OP
- Facilitate private messages without using email
Also, a lot of people are averse to web collaboration because most free discussion software is crap. It is either not very powerful, poorly designed, or difficult to install and/or maintain. There is very powerful stuff out there that is very well designed; you just need to kick down for it. FuseTalk seems like an excellent package. FatWallet.com uses it. I'm sure there are more relevant implementation examples, as theirs is kinda hokey and fun, which isn't specific to the software package itself, but the features are astounding and very useful. I love its ability for users to rate thread quality. SourceForge
Re: Could we make this a web discussion forum?
I really hate on-line forums. They're difficult to track because I must remember to visit them daily. Agreed. I don't need yet another web page to keep track of. I prefer to let my mail reader thread up the messages on this list, then I browse through every now and then and delete everything with a subject that doesn't look interesting. It's much more convenient than a web forum, and much faster too, because Email lists are essentially push technology. It assumes that because you are interested in something, you are interested in it every day. There is absolutely no information that I need pushed to me every day. If I want it, I will go there and participate. Think of it this way - let's say you use 1,000 pieces of software over the course of the year - this is easy to imagine if you consider ls(1) to be a piece of software. It's pretty easy to imagine how impossible it would be to maintain subscriptions to 1,000 mailing lists (I get annoyed with 10!). So what makes MySQL so special? I don't have to wait for some remote, overloaded server to respond. I don't understand this one at all. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Could we make this a web discussion forum?
It's much more convenient than a web forum, and much faster too, because I don't have to wait for some remote, overloaded server to respond. This makes no sense, as a message I post to a Yahoo! Group gets emailed to opt-in members and appears on the group within seconds, while I may not receive something I posted to the MySQL list for minutes or sometimes hours.
Re: Could we make this a web discussion forum?
Most ppl don't like forums. I disagree with that, but then again I probably don't have any more idea than you do. Do you know even one serious project which uses a forum? I know Slackware did and it really was a PAIN. All of the projects on SourceForge.net? Also, by forum I also mean usenet forum, so I would name about 100 pieces of software that are supported there. Mail to this list gets mirrored in more places than you can imagine; same goes for the PHP lists. You can't blame this on the MySQL ppl. There would be less mail if ppl would care to do their homework. There would be less mail if searching the archives was done with the same piece of software as posting. Email is convenient, it just gets into your mailbox and you read it.. A forum would require ppl to open a browser or newsreader. Seriously, are you telling me that there isn't a browser open on your desktop, or that that is a special request? I have a browser open more often than an email client. I would argue that opening my email client is more of a special request for me than opening a browser - also consider that an enormous number of people are using webmail for anonymous subscription to stuff like mailing lists (if not all email) for basic privacy and to curb spam. If you aren't a friend of mine, you sure as hell don't know my POP email address. If you're busy then it's not fun. I'm busy - how about an 87-hour work week last week? And I don't like email forums because they aren't fun. I need to do much more scanning and management of a mailing list (whether I choose to participate daily or once a year) than I do with, say, Perl, where I log in to Google Groups and search the entire history of the group and/or post within about 30 secs - and I get more and better answers overall. I don't have to remember what that group's posting email address is either. Also I would say that USENET groups get more traffic because the threshold to access is much lower. 
A lot of people argue that that means more noise, but I find that regardless of the noise, more access=more questions=more answers=more solved problems=bigger community=better software. I rarely ever have to post to Perl because I can always find the answer in the USENET archive, but I can rarely find an answer to my MySQL questions from the mailing list archive. I know at least part of that has to do with aversion to mailing lists by Joe Schmoe, if not JZ.
Re: Could we make this a web discussion forum?
No but those in need of help do. And those ppl who are capable of helping them are usually busy. Do they keep a browser open and refresh the forum frequently? I'll answer that question: no. Many forum packages have the ability to distribute the discussion as a mailing list as well (Yahoo! Groups being one). You don't need to lose your mailing list functionality. The NNTP mirror is actually a double-edged sword - yes it takes care of archiving and searching (via Google Groups), I will agree, but it misleads infrequent users into thinking posting actually does something meaningful. Why don't you build a better one? Why not just make it RW? IIRC the sendmail list has both posting via NNTP and a mailing list. Helpful for only those who know about it (few). -Easily break discussion into sub-categories (especially for MySQL, where SQL questions are combined with everything else; would be nice to break that out) PHP has several different lists and most questions end up on the general list. Usually cross-posted to other lists. Why do I have to go to different places for different features of the same forum? SourceForge is a great idea, but a poorly designed system. Obviously a package designed by an engineer; it's a mess. You seem to be a brilliant coder so..perhaps you can re-design SourceForge and the underlying code. Anyway, I expect an announcement from you real soon that you created a mail to NNTP gateway for this list.. You are going to build that.. right? If DataKonsult would bless it and offer to host it, I would write an SMTP-NNTP gateway, sure. I have written a good deal of USENET software, including a newsreader, so you probably baited the wrong programmer. That's not exactly a rocket science project for a decent perl programmer. I would rather just have the group made read-write and write a proxy for mailing list fans, because I know if I did this the mailing list would fall out of favor anyway (although it would still be there for diehards). 
As far as SourceForge goes - I'm not a bad coder, but I'm definitely good enough to know when to employ the help of a human interface designer before I spend a few months writing code. A lot of engineers, lacking a project manager, feel that they can handle design just because they are good coders. This is rarely true. A review of the range of quality of X-based application interfaces, regardless of the power of the application, is indicative of this. Again, no reason to be combative here; all I am suggesting is making a web or usenet home to make MySQL help more accessible. I know you want to argue that anyone who uses MySQL should know how the mailing list and its various mirrors work, but in fact it is confusing and bothersome to many people, if not you personally. I've worked with MySQL for about 7 years and have never found it convenient.
MySQL database on a Linux ramdisk partition?
Is it possible to place MySQL data directories on a Linux ramdisk mount? - Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
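It can be done, in the sense that the server doesn't care what filesystem backs its data directory. A minimal sketch, assuming a tmpfs ramdisk is already mounted at /mnt/ramdisk (that path, the size, and the copy-back step are hypothetical choices, not a tested recipe) - point datadir at the ramdisk in my.cnf, and remember that everything there vanishes on reboot, so keep an on-disk copy to restore from:

```ini
# my.cnf sketch - datadir on a ramdisk (hypothetical path).
# Data on tmpfs is volatile: copy the databases back to real disk
# before shutdown/reboot, or they are gone.
[mysqld]
datadir = /mnt/ramdisk/mysql
```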
some gamma-gatcha sqlbench stats
I just built a P4 3.06 1GB DDR 333 15k U160 SCSI MySQL Linux server. I was disappointed because I couldn't get the dual-channel DDR 400 MB I tried first to run stable :( Dual-channel memory controllers definitely made a slight difference on the benchmarks with the same RAM.

alter-table: 2
ATIS: 13
big-tables: 11
insert: 736
select: 424
wisconsin: 3

I haven't really tuned the server yet. BTW I have a 1.4GB table with a 648MB fulltext index. If anyone has server tuning settings for me, I'd like to hear 'em. I have the key_buffer at 500MB, 'bout all I've done so far. Next I am going to try compiling MySQL and DBI/DBD with the Intel 7.0 compiler. This compiler produced a 21% faster Perl for me than gcc.
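For anyone after a starting point beyond key_buffer, here is a hedged my.cnf sketch - the values are guesses for a 1GB-RAM MyISAM box of this vintage, not measured recommendations, so benchmark before and after changing them:

```ini
# Hypothetical starting values for a 1GB-RAM MyISAM server;
# verify each against your own sql-bench runs.
[mysqld]
key_buffer = 500M      # index block cache (already set above)
sort_buffer = 16M      # per-connection buffer for ORDER BY / GROUP BY
record_buffer = 4M     # per-connection buffer for sequential scans
table_cache = 256      # number of open table handles kept around
```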
How to tell when mysql has frozen
I attempted to optimize a table last night, which usually takes about 5 minutes but I expected to take longer as I had done a slew of deletes. Well, it took over an hour, and from the last-modified dates on the table and temp files, it looked like they hadn't been updated in 10-15 minutes. Eventually I gave up and disconnected, as I was on a laptop and had to disconnect from ssh. Today I found the tables corrupted, and am attempting a repair table, but I am in the same situation - the command is still running after 20 minutes and the table files have not been modified for a while (the index file's modified date is being updated but the filesize has not changed). mysqld is at 85% CPU in top. How can you tell when mysql is just taking a lot of time or has croaked/frozen/is stuck in a bad loop?
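One way to tell "slow" apart from "stuck" without killing the server is to watch whether its internal counters still move. A sketch using standard statements (this assumes the server still answers new connections; if even these hang, that itself is diagnostic):

```sql
-- Run these a few minutes apart from a second connection. If the repair
-- is alive, the State column in the process list changes and the
-- Handler_*/Key_* counters keep climbing between runs; if nothing moves
-- while mysqld burns CPU, it is likely stuck in a loop.
SHOW PROCESSLIST;
SHOW STATUS LIKE 'Handler%';
SHOW STATUS LIKE 'Key_%';
```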
Troubles with joining tables
Hi list, I am sure that there is an efficient way to do this in SQL, I just can't figure out what it is. I am dealing with two tables. One, I'll call table1, has about 90 columns and 20k rows; each number in column xy is unique. The other has about 90 columns and about 200k rows, and there will be around 10 duplicate entries in xy for each value of xy. So, if I SELECT a given number in table1.xy, one row is returned; then if I SELECT the same number in table2.xy, about 10 rows will be returned. There is always at least one entry in table2.xy for each entry in table1.xy. I am looking for the best way to join them so that each row in the result is a row of table1 lined up with one (and only one) row from table2. A SELECT of everything would return 20k rows, in this case. In the WHERE statement there will be limitations put on values in other columns in both table1 and table2. The best I can think of is a LEFT JOIN, but it returns a table where every row in table1 has been duplicated about 10 times. (And I am hoping for it to return a table of every row in table1 joined with only one row of table2 where table1.xy=table2.xy.) I am aware that I could simply GROUP BY xy, but I am also trying to use this with a few AVG(some other column) clauses to get the average of all rows returned. GROUP BY xy only returns the average of each grouping. I am also guessing that GROUP BY in this instance would be rather inefficient because it would first have to build a large table (200k rows) and then SELECT from it. I am using static MyISAM tables and everything in the table is either a float or an int. Is the best way for me to do this just to go ahead and do the GROUP BY, return a table of 20k rows, and then calculate the AVG (and COUNT, STD, MIN, and MAX) in my script? I was hoping to do this in a less time-consuming fashion. I can calculate all of this in a timely manner when I am only using values from table1. 
~Seth
Troubles with joining tables (cont)
Since writing this message, I have discovered another possible way of doing this with two successive SQL statements. It would look something like this: 1) CREATE TEMPORARY TABLE temp SELECT DISTINCT xy FROM table2 WHERE all of table2 conditions; (going by data below, maximum of 20k rows because it is DISTINCT) 2) SELECT AVG(column1),all other group calculations FROM table1 LEFT JOIN temp ON temp.xy=table1.xy WHERE all of table1 conditions AND temp.xy IS NOT NULL; (maximum of 20k rows, less after both WHERE statements tho) Does this sound like the best way to accomplish this? Any suggestions? Thx, Seth On Sunday, March 2, 2003, at 03:41 PM, Seth Price wrote: [...]
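Laid out as runnable statements, the two-step approach looks like this (the column name `column1` is from the message; the `col_a`/`col_b` conditions are placeholders for the real WHERE clauses, and since temp already holds distinct xy values, a plain inner join makes the IS NOT NULL test unnecessary):

```sql
-- Step 1: one row per distinct xy satisfying the table2 conditions.
CREATE TEMPORARY TABLE temp
  SELECT DISTINCT xy FROM table2 WHERE col_a > 0;    -- placeholder condition

-- Step 2: aggregate over matching table1 rows; at most one joined row
-- per table1 row, because temp.xy is unique.
SELECT AVG(t1.column1), COUNT(*), MIN(t1.column1), MAX(t1.column1)
FROM table1 t1, temp
WHERE t1.xy = temp.xy
  AND t1.col_b < 100;                                -- placeholder condition
```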
Re: Troubles with joining tables
I'd better ask the obvious question... *which* one? Is there one in particular that you're after, or would you be equally happy with any of them? The latter would be a bit odd. Each row in table1 has several corresponding rows in table2. Any of them that match the query would be good. So the bulk of the selection criteria are on table2, not table1? It shouldn't matter either way (and I cannot tell which will be the bulk ahead of time). Wouldn't you get the same result from your query by using an INNER JOIN and dropping the temp.xy IS NOT NULL clause? I am not sure; that is why I am asking y'all. My only problem with that is: does INNER JOIN create a 200k-row table with 180 columns and do the selection from that? Or does it SELECT from one table, JOIN the results to the other table, then SELECT again? (which is what I want) I have played around with it a bit, and if I am doing my guesstimating correctly, these two SQL statements will normally take less than a second to do the selection from the two tables and return the statistics. This compares very well with the .92 sec required to pull up the stats on all of table1. (The query in question will pull up some subset of table1.) Another advantage is that with other queries of this set I will have half of the query cached for me (depending on the query, of course) in the table temp. In case anyone is wondering about these massive tables, I am building a dynamic web site that will let scientists view statistics on various environments/landscapes and how they relate to each other. Currently the largest table is basically the entire state of Wisconsin split up into 6km x 6km chunks at 30m resolution. This yields 7k chunks to keep track of, and on average 10 different environments per chunk. I am trying to design it well enough though that we could in the future use all of the Landsat imagery for the entire U.S. (also 6km x 6km chunks at 30m resolution). There are about 90 different possible statistics per landscape. 
Thx, Seth On Sunday, March 2, 2003, at 06:31 PM, Bruce Feist wrote: Seth Price wrote: I am dealing with two tables. One, I'll call table1 has about 90 columns and 20k rows, each number in column xy is unique. The other has about 90 columns and about 200k rows, and there will be around 10 duplicate entries in xy for each value of xy. So, if I SELECT a given number in table1.xy, one row is returned, then if I SELECT the same number in table2.xy, about 10 rows will be returned. There is always at least one entry in table2.xy for each entry in table1.xy I am looking for the best way to join them so for each row in the result is a row of table1 lined up with one (and only one) row from table2. I'd better ask the obvious question... *which* one? Is there one in particular that you're after, or would you be equally happy with any of them? The latter would be a bit odd. I am aware that I could simply GROUP BY xy, but I am also trying to use this with a few AVG(some other column) clauses to get the average of all rows returned. GROUP BY xy only returns the average of each grouping. I am also guessing that GROUP By in this instance would be rather inefficient because It would first have to build a large table (200k rows) and then SELECT from it. This is complex enough so that I'll need a more detailed example of your query to explain what you're trying to accomplish. But, on the surface, it sounds as though you're already using GROUP BY in the same SELECT, in which case you won't be taking on a significant amount of additional overhead -- you're already scanning all the table2 rows because of your GROUP BY. Or, are you talking about multiple SELECT statements? Bruce Feist On Sunday, March 2, 2003, at 06:32 PM, Bruce Feist wrote: Seth Price wrote: Since writing this message, I have discovered another possible way for doing this with two successive SQL statements. 
It would look something like this: 1) CREATE TEMPORARY TABLE temp SELECT DISTINCT xy FROM table2 WHERE all of table2 conditions; (going by data below, maximum of 20k rows because it is DISTINCT) So the bulk of the selection criteria are on table2, not table1? 2) SELECT AVG(column1),all other group calculations FROM table1 LEFT JOIN temp ON temp.xy=table1.xy WHERE all of table1 conditions AND temp.xy IS NOT NULL; (maximum of 20k rows, less after both WHERE statements tho) Wouldn't you get the same result from your query by using an INNER JOIN and dropping the temp.xy IS NOT NULL clause? Bruce Feist
copying databases to avoid insert holdups
I have a MySQL database I will be launching soon which has a search engine database accessible over DBI/mod_perl/Apache. It has about 2M records, and is a 4GB table with 1GB index. Every day I insert 76k records all at once and delete about another 76k. While these inserts are happening, the select performance is obviously impacted. What I want to do is to have two copies of the database, copying the updated version to the publicly-accessible one, i.e.:

mysqladmin shutdown
rm -rf /mysql/data/publicdbdir
cp -r /mysql/data/insertdbdir /mysql/data/publicdbdir
safe_mysqld &

My question is this: Is this enough to prevent me from experiencing slow inserts and queries during insert, or should I be firing up two separate servers? Also - if I create the MyISAM tables on 3.23, can I copy them to a 4.0.10 server's data directory? (the reason I ask is that I have a 3.23 server available on that box) Thanks q
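If both copies can live under the same server, an alternative worth considering is MySQL's atomic RENAME TABLE, which swaps the tables in one statement so readers never see a missing or half-copied table and no restart is needed. A sketch with hypothetical database/table names:

```sql
-- Build the fresh copy under a staging name, then swap it live
-- atomically; the previous copy is kept under an _old name in case
-- a rollback is needed.
RENAME TABLE live.records TO live.records_old,
             staging.records TO live.records;
```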
Re: Maximum of 16 indexes per table
Has the maximum number of keys in a MyISAM table been tested more since this post? Is it possible to get above 64 keys? (I'm going for 90, btw) ~Seth to get past spam filter: MySQL sql query Subject: Re: Maximum of 16 indexes per table From: Michael Widenius Date: Fri, 30 Jul 1999 23:57:19 +0300 (EEST) Troy == Troy Grady [EMAIL PROTECTED] writes: (The maximum number of keys is 32 because MyISAM uses a bitmap in a 'long' to check which keys are in use) Regards, Monty Troy Section 10.17 of the manual, Table Types, states the following about Troy MYISAM: Troy Maximum number of keys/table enlarged to 32 as default. This can be Troy enlarged to 64 without having to recompile myisamchk. Troy Are 64 indexes per table possible? If so, how? Troy Regards, Troy Troy Hi! To do this you have to change some constants in myisam.h and unireg.h and also change the key_map typedef from long to longlong. The main problem is that we haven't tested the code with more than 32 keys. The main problem is that I may have missed some key_map variable and it is still declared as 'ulong' instead of type 'key_map'. In theory it wouldn't be that hard to change 3.23.2 to use 64 keys... Regards, Monty
Re: Maximum of 16 indexes per table
I have a table of between 90-100 columns and currently 70k rows. I don't care how long it takes to enter the data, but I would like to select it as fast as possible. Each column should be theoretically queried as much as any other. Right now I am using static MyISAM tables, which seem good enough for now, but in the future I may have approx. 3,500k rows, so I am a little more worried about my SELECT performance then. ~Seth On Tuesday, February 11, 2003, at 12:54 PM, Peter Grigor wrote: - Original Message - From: Seth Price [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, February 11, 2003 1:43 PM Subject: Re: Maximum of 16 indexes per table Has the maximum number of keys in a MyISAM table been tested more since this post? Is it possible to get above 64 keys? (I'm going for 90, btw) ~Seth Dood!!! 90??? on one table??? Peter ^_^ - Peter Grigor Hoobly Classified Ads http://www.hoobly.com sql
NaNs
Greetings. If I have a key on a float and attempt to insert a NaN I get ERROR 1034: Incorrect key file for table: 'foo'. Try to repair it if I attempt to update/delete that row. If I remove the key from the float it seems to behave when I try to update/delete the row. Is this a bug / known limitation in mysql-3.23.52? Thanks for any insight.
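Whether or not it is a known bug, one common workaround sketch is to keep NaN out of the indexed column entirely by storing NULL in its place (the table name `foo` is from the error message; the column name is hypothetical, and the NaN check has to happen client-side since SQL has no NaN literal):

```sql
-- Hypothetical schema: allow NULL on the keyed float, and have the
-- application insert NULL whenever it computes a NaN. NULL is legal
-- in a non-unique MyISAM key, so update/delete behaves normally.
CREATE TABLE foo (val FLOAT NULL, KEY (val));
INSERT INTO foo (val) VALUES (NULL);   -- instead of an IEEE NaN
```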
RE: Data Entry for a Newbie
I would buy a book called PHP Essentials. www.thickbook.com I used it to learn how to write an html/php interface to my databases and I use it all the time now. It takes a bit of work to get the interfaces built but the book is really easy to follow and understand. seth -Original Message- From: William Bradley [mailto:[EMAIL PROTECTED]] Sent: Thursday, July 25, 2002 9:13 AM To: Mysql Subject: Data Entry for a Newbie At the moment I have Mysql installed and understand command line entry of data, or entry via a text file. Either way, it is difficult, especially if you have been used to data entry screens. Is there a utility somewhere that would allow me to do this? Failing that, is it possible to write a html file to communicate with the server on my own computer? Any help is appreciated, Bill.
RE: AW: mysql wont start on SUSE
In SuSE you should be able to start the MySQL server by: rcmysql start or rcmysql restart rcmysql is a symlink into the /etc/init.d directory -Original Message- From: John Macloy [mailto:[EMAIL PROTECTED]] Sent: Tuesday, July 02, 2002 2:26 AM To: [EMAIL PROTECTED] Subject: Re: Re: mysql wont start on SUSE I noticed that my server works with [mysqld] user = root and then starting with mysqld, but when I tried to start with the runlevel editor it doesn't work. Perhaps another person on this list knows the answer? mysql_install_db creates two folders in the /var/lib/mysql path. The folders are mysql and test. In mysql are the table definitions for passwords etc. If I solve the problem I will write you an email. John -Original Message- From: Jacques Steyn [mailto:[EMAIL PROTECTED]] Sent: Monday, 1 July 2002 18:54 To: John Macloy Subject: Re: Re: mysql wont start on SUSE Thanks John I've added that line, but no luck. Yes, I've done the mysql_install_db, which worked OK. I am baffled. Stupid question: mysql_install_db is run only once to set up default DBs??? Jacques John Macloy wrote: Yes, I have gone to my.cnf and written [mysqld] user=root then started the server with mysqld. This works on my computer, but I'm not sure if it's the right way, because the server shouldn't run as the user root, but for a test it's OK. Have you already run the script mysql_install_db? It's necessary. Macloy -Original Message- From: Jacques Steyn [mailto:[EMAIL PROTECTED]] Sent: Monday, 1 July 2002 15:28 To: Alexander Barkov Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: mysql wont start on SUSE Has anybody had a problem with MySQL that does not start on SUSE 8 with a default installation? If yes, what was the problem? 
Thanks Jacques ___ Jacques Steyn +27 (0)11 478-1259 http://www.knowsystems.com
Massive Research Data
Some background... We have:

Objects r1, r2 ... r1600
Which each have (~1600) regions d1, d2 ... d500
Which each have (~500) datapoints

So, roughly we have around 786,000 datapoints on a given object. Typically, we might take 15 or so regions and take data from each of their datapoints (~7500) at ~1300 steps or environmental variations. So, each test will generate about 9.75M floats. A curve can be extracted from the 1300 steps (floats) for each datapoint. We then run these tests hundreds of times a day - sometimes on different objects, sometimes on the same objects (so you can analyse the curves through time). We have to date stored this raw data within compressed text files indexed by MySQL. However, as you can imagine, querying this data is a pain. We have to ask MySQL for all of the tests of a given object, then analyse the files to extract the appropriate curves - often opening up 100s of datafiles. The question is whether anyone has any more intelligent ways of storing this data within MySQL without busting MySQL's file size limits, or reasonable CPU/RAM constraints. Storing the data raw in rows doesn't seem like the most sane technique. Nor does throwing it into a char for each datapoint. What I'm looking for is a _lossless_ technique to store the curves within the database for each datapoint (we need to be able to extract and analyse data for each datapoint). My guess is that someone out there is doing scientific analysis similar to this using MySQL as their backend. The environment is R&D so queries will be fairly limited. We obviously don't want queries to take down the DB server, but, at the same time this isn't the backend data to Yahoo! serving millions of requests every minute. Thanks! Seth
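One approach worth sketching (all names and sizes below are assumptions, not a tested design): store each datapoint's curve as a single packed-binary BLOB row, so a query pulls exactly the curves it needs without opening external files, and the raw float bytes round-trip losslessly. At ~1300 4-byte floats a curve is ~5.2KB, so a full test of ~7500 curves is only ~39MB of rows:

```sql
-- Curve-per-row sketch; `data` holds raw IEEE float bytes packed
-- client-side (e.g. Perl's pack 'f*'), which is lossless by construction.
CREATE TABLE curve (
  test_id   INT NOT NULL,
  region_id INT NOT NULL,
  point_id  INT NOT NULL,
  steps     SMALLINT NOT NULL,    -- number of floats packed in data
  data      MEDIUMBLOB NOT NULL,  -- ~5.2KB for 1300 4-byte floats
  PRIMARY KEY (test_id, region_id, point_id)
) TYPE=MyISAM;
```

Selecting all curves for a test (or one datapoint across many tests, for the through-time analysis) is then a plain indexed range query rather than a pass over hundreds of files.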
Storing Raster Data
I'm curious if anyone has any recommendations for storing large multi-layered raster data within MySQL beyond simply using blobs or text fields. Although not a GIS application, the idea is similar. We are simply modeling 3D surfaces. Queries on the actual mapping structure stored within the raster won't be frequent - reading the data (or large segments of it) is more important. But, I figured I would check to ensure there isn't some level of scholarship out there for mapping raster data to the relational model beyond just storing the rasters in full within blobs/text. Thanks in advance for any insight.
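Short of real spatial support, one common middle ground between "one giant blob" and "one row per cell" is tiling: cut each layer into fixed-size chunks keyed by grid position, so large sequential reads become a range of rows and a spatial subset touches only the tiles it covers. A sketch where every name and the tile size are assumptions:

```sql
-- One row per fixed-size tile of a layer; reading a rectangle means
-- selecting the covered (tile_row, tile_col) ranges rather than
-- decoding one huge blob.
CREATE TABLE raster_tile (
  layer_id INT NOT NULL,
  tile_row INT NOT NULL,
  tile_col INT NOT NULL,
  data     MEDIUMBLOB NOT NULL,   -- raw cell values for this tile
  PRIMARY KEY (layer_id, tile_row, tile_col)
) TYPE=MyISAM;
```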
Re: columns no more
is there a way to delete a column from a table, without dropping the whole table? Yep! ALTER TABLE tablename DROP [COLUMN] col_name http://www.mysql.com/doc/A/L/ALTER_TABLE.html If not, can I get SQL to replicate the command to re-create the table? Yep! mysqldump -qd -u user database tablename http://www.mysql.com/doc/m/y/mysqldump.html can I transfer information to a dummy table while I create the table I want? Yep! Just do a SELECT ... INTO OUTFILE http://www.mysql.com/doc/S/E/SELECT.html then a LOAD DATA INFILE http://www.mysql.com/doc/L/O/LOAD_DATA.html Take care, seth
Representing Object Oriented Data in a RDBMS
Greetings! I've been using MySQL for years now and love its flexibility, scalability, and general usability. However, I think I might be beginning to bump into a wall in cleanly and efficiently representing some data structures within a merely relational system. My hope is that someone on the list has had to deal with similar data and might be able to offer some insight into how to effectively and efficiently represent it within a relational, as compared to an object oriented, DB structure so I don't have to start looking around for an alternative DB option.

I need to draft a mechanism for storing _user defined_ Objects (users define characteristics, components of that object, events that occur to those objects, etc.). So, a very simplified example might be that someone is defining what cut-up pieces of sandwiches are and how they came about from their original Raw Materials (RM):

    RM1 --\
    RM2 ---- ObjectTop ---\                  /-- ObjectWidget1
    RM3 --/                \                /
                            ObjectSandwich ----- ObjectWidget2
    RM1 --\                /                \
    RM4 ---- ObjectBottom /                  \-- ObjectWidget3
    RM5 --/

Where (not programmatically or by an arbitrary table) a user would define, say, ObjectTop as requiring RMs 1-3, define it as having some name, as being of some size, etc., and so on in a similar fashion for ObjectBottom. The user might create an object ObjectSandwich which is composed of 1 ObjectBottom and 1 ObjectTop and has a name characteristic along with n other characteristics. Finally, s/he might create multiple ObjectWidgets which descend from ObjectSandwich (think chopping up the sandwich into multiple pieces). This would be a two step process obviously. The user would DEFINE a generic ObjectSandwich for example (think, creating a class in code), and then would create specific instances of that generic object (think, creating instances of your class and constructing the specific characteristics of that instance). 
The database would thus have no idea it was being told to store data about a sandwich, or, more specifically, pieces of that sandwich UNTIL the user DEFINES those objects for it. Each level would have to inherit the characteristics, events (you might heat the sandwich while it's still a sandwich; you might not), composition, etc. of its parents (all the way up the chain). So, you should be able to ask ObjectWidget what kind of, and which specific, ObjectTop he had. However, I don't have the option of having a Bottoms table, a Sandwiches table, or a Widgets table, since I won't know as a programmer what objects might be created by users. There would just have to be tables describing the compositions of any specified objects, and the values of those specifications for each particular instance of that object (think creating instances of classes in an OO language).

To add further complexity to the project, I need to be able to reference external data from specific objects. For example, I might want to take pictures of an ObjectTop and associate that picture with that specific ObjectTop. I may want to add multiple notes (not just a 1:1 relationship) about specific ObjectSandwiches, and so on. This seems fairly intuitive within an OO model, but it's more difficult to visualize how a strictly relational model would effectively and efficiently handle it (particularly traversing up inheritances defined by users without overrunning your database). Any ideas?
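The usual relational answer to user-defined objects like these is an entity-attribute-value (EAV) layout plus a parent link for composition: one table of class definitions, one of instances, one of attribute values, with ancestry recovered by walking parent links. A minimal sketch in Python, using sqlite3 purely as a stand-in for MySQL; all table and column names here are hypothetical, not from the post:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.executescript("""
    -- one row per user-defined class (e.g. "Sandwich", "Widget")
    CREATE TABLE object_defs (id INTEGER PRIMARY KEY, name TEXT);
    -- one row per instance; parent_id records composition/descent
    CREATE TABLE objects (id INTEGER PRIMARY KEY,
                          def_id INTEGER, parent_id INTEGER);
    -- attribute values per instance (the V of EAV)
    CREATE TABLE attrs (object_id INTEGER, name TEXT, value TEXT);
""")

def ancestors(object_id):
    """Walk parent links so a Widget can answer questions about its Top.

    Returns the ids from the object up to its root ancestor.
    """
    chain = []
    row = db.execute('SELECT id, parent_id FROM objects WHERE id=?',
                     (object_id,)).fetchone()
    while row:
        chain.append(row[0])
        row = db.execute('SELECT id, parent_id FROM objects WHERE id=?',
                         (row[1],)).fetchone() if row[1] else None
    return chain
```

The per-query loop is the cost of this design: without recursive SQL, each inheritance hop is one query, so deep hierarchies mean either iterative walks like this or a precomputed closure table.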
Project + MyODBC
Has anyone successfully saved Microsoft Project files within MySQL? When I attempt to save a project I get a MyODBC error on the syntax, on what I presume is one of their table creation SQL statements. Anyone have any insight?
Re: Uploading files using PHP to database.
Could anyone shed some light on how to go about uploading a text file with a web interface using PHP into a MySQL database? Wow, that was a mouthful. I would much appreciate it.

Well, that depends. Do you want to store the native text file fully intact, say like you might store an image file or a Word document, or are you only interested in just the text? Secondly, what do you wish the input UI to look like? Do you just want them to enter in a filename, or is pasting into a text box sufficient?

If your objective is merely for them to paste into a textbox the task is simple enough.. just create a table with a blob/text column and use your form to populate it (standard PHP usage of forms/variables at work here, which I won't go into detail on). If your goal is to accept filenames and go from there, might I suggest taking a look here: http://www.php.net/manual/en/features.file-upload.php

Once you have your data from the user the question returns to whether you want to store the native file format (applies more if you are intending to handle formatted text such as Word, RTF, or some other format), or just the text. Depending on how large your text/data files are you might consider using your database as merely an indexing agent. This could be as simple as creating a database-managed directory structure on one of your servers, placing your text files in there, and then having the database point to them either by ID or by stored location (the former is probably ideal). Or, you could do that and add in some full-text word indexing such as with a B-tree index. The point here is that your hard disk makes for a fairly good blob storage device.. no use not making use of it, compared to loading up your database with tons of data which doesn't add to the queryability of the data while limiting your database's ability to scale. Regards, seth --- Seth Northrop Manager of Information Technology Reflectivity, Inc. 
3910 Freedom Circle, Suite 103 Santa Clara, CA 95054 voice: 408-970-8881 x147 fax: 408-970-8840 http://www.reflectivity.com/
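The "database as indexing agent" approach above is easy to sketch. In Python rather than the PHP of the thread, and with a hypothetical hashed directory fan-out (the two-level digest split is an assumption for illustration, chosen to keep any one directory small):

```python
import hashlib
import os

def store_upload(root, file_id, data):
    """Write an uploaded file under a two-level fan-out directory.

    The database then stores only (file_id, relative path); the
    filesystem holds the bytes, as the post suggests.
    """
    digest = hashlib.sha1(data).hexdigest()
    rel = os.path.join(digest[:2], digest[2:4], str(file_id))
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, 'wb') as fh:
        fh.write(data)
    return rel  # store this string in the indexing table
```

The returned relative path is what goes into the pointer column; serving the file later is a filesystem read keyed off one small indexed row instead of a blob fetch.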
sql/table structure question.
, Seth
Re: Got timeout reading communication packets
Hi!

> How do you know that row is truncated??

Filesizes for one. With just this single query the database is ca. 65K; the update log with just this query in it is ~600k.

> If you have binary data in the blob field, when you retrieve a result set, you will get it back in the binary, unescaped form.

I can't imagine there is any binary data in there. See the php script in the previous email.. it just increments a counter and stores it in a string. It also reacts the same way with a text field. Also, it's truncating right in the middle of one of the incremented numbers... 12770 12771 12772 12773 1 |

> Query the lengths returned to make sure that data is truncated.

mysql> select length(data) from measurement_extended_data;
+--------------+
| length(data) |
+--------------+
|        65535 |
+--------------+
1 row in set (0.05 sec)

[internal mysql]# wc /tmp/update.002
4 100046 589227 /tmp/update.002

> If you are sure, send me CREATE TABLE statement and INSERT instead of REPLACE statement, so that I can test it.

# here's the create statement.. the only thing really abnormal
# is the size of the primary key (compound)
CREATE TABLE measurement_extended_data (
  content_typeID tinyint(3) unsigned NOT NULL default '0',
  contentID int(10) unsigned NOT NULL default '0',
  arrayID smallint(5) unsigned NOT NULL default '0',
  positionID mediumint(8) unsigned NOT NULL default '0',
  testID smallint(5) unsigned NOT NULL default '0',
  iteration smallint(5) unsigned NOT NULL default '0',
  test_locationID smallint(5) unsigned NOT NULL default '0',
  date_added int(10) unsigned NOT NULL default '0',
  added_by mediumint(8) unsigned NOT NULL default '0',
  data text NOT NULL,
  PRIMARY KEY (content_typeID,contentID,arrayID,positionID,testID,iteration),
  KEY test_locationID (test_locationID),
  KEY testID (testID)
) TYPE=MyISAM;

Update log attached in separate email. Take care, seth
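It may be worth noting that the 65535 reported by length(data) above is exactly 2^16 - 1, the maximum number of bytes a MySQL TEXT or BLOB column can hold, which would explain a ~600 KB value truncating at that exact point regardless of max_allowed_packet (a larger value needs MEDIUMTEXT/MEDIUMBLOB or LONGTEXT/LONGBLOB). A quick sketch of the size arithmetic; the capacity figures are standard MySQL type limits, not something stated in the thread:

```python
# MySQL string-type capacities are length-prefix driven: an N-byte
# length prefix caps the stored value at 2**(8*N) - 1 bytes.
capacities = {
    'TINYTEXT/TINYBLOB':     2**8  - 1,   #        255 bytes
    'TEXT/BLOB':             2**16 - 1,   #     65,535 bytes
    'MEDIUMTEXT/MEDIUMBLOB': 2**24 - 1,   # 16,777,215 bytes
    'LONGTEXT/LONGBLOB':     2**32 - 1,   #      ~4 GB
}

def smallest_type(n_bytes):
    """Pick the smallest column type holding n_bytes without truncation.

    Relies on dict insertion order (smallest capacity first).
    """
    for name, cap in capacities.items():
        if n_bytes <= cap:
            return name
    raise ValueError('value exceeds LONGBLOB capacity')
```

For the ~589,227-byte update shown by wc above, this arithmetic points at MEDIUMTEXT/MEDIUMBLOB rather than the text/blob column in the CREATE TABLE.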
Re: Got timeout reading communication packets
Re: large replaces truncating... I'm still trying to figure out what's going on here.. I've now tried it in PHP and am getting the same response... Ie, given this simple script:

<?php
// set no time limit (just in case)
set_time_limit(0);

// build a data string
$data = '';
for ($i = 0; $i < 10; $i++)
    $data .= $i . "\t";

// build the sql
$sql = "REPLACE into reflectivity.measurement_extended_data set " .
       "content_typeID = 1, contentID = 1, arrayID = 1, positionID = 1, " .
       "testID = 1, iteration = 1, test_locationID = 1, date_added = 1, " .
       "added_by = 18, data = '" . $data . "'";

// connect to db and execute query
mysql_connect('localhost', 'root');
mysql_query($sql);

// return an error if there is one
echo mysql_errno() . ': ' . mysql_error() . "\n";

// close the database
mysql_close();
?>

The data field in the table truncates like this:

| snip 12750 12751 12752 12753 12754 12755 12756 12757 12758 12759 12760 12761 12762 12763 12764 12765 12766 12767 12768 12769 12770 12771 12772 12773 1 |

It always truncates in the exact same location. I can drop the table and recreate it, but it still truncates in the exact same location. The query, when echoed out, is sane, ending indeed on 9. Here's the table again:

mysql> desc measurement_extended_data;
+-----------------+-----------------------+------+-----+---------+-------+
| Field           | Type                  | Null | Key | Default | Extra |
+-----------------+-----------------------+------+-----+---------+-------+
| content_typeID  | tinyint(3) unsigned   |      | PRI | 0       |       |
| contentID       | int(10) unsigned      |      | PRI | 0       |       |
| arrayID         | smallint(5) unsigned  |      | PRI | 0       |       |
| positionID      | mediumint(8) unsigned |      | PRI | 0       |       |
| testID          | smallint(5) unsigned  |      | PRI | 0       |       |
| iteration       | smallint(5) unsigned  |      | PRI | 0       |       |
| test_locationID | smallint(5) unsigned  |      | MUL | 0       |       |
| date_added      | int(10) unsigned      |      |     | 0       |       |
| added_by        | mediumint(8) unsigned |      |     | 0       |       |
| data            | blob                  |      |     |         |       |
+-----------------+-----------------------+------+-----+---------+-------+

(Note.. I've even tried changing data to text NOT NULL.. exact same truncation location.) The php script above reports no error. ** The UPDATE log (when turned on) shows the ENTIRE query (ie.. 
the query is at least making it to the database long enough to be logged). Again.. the my.cnf:

# MySQL Configuration File
#
# One can use all long options that the program supports.
# Run the program with --help to get a list of available options

# This will be passed to all mysql clients
[client]
port = 3306
socket = /tmp/mysql.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
set-variable = key_buffer=256M
set-variable = max_allowed_packet=7M    # INCREASED FOR MORE DATA
set-variable = thread_stack=128K
set-variable = back_log=50
set-variable = max_connections=2250
set-variable = tmp_table_size=15M
set-variable = table_cache=6000
set-variable = sort_buffer=3584K
set-variable = join_buffer=512K
set-variable = connect_timeout=2
set-variable = record_buffer=1M
set-variable = flush_time=900
set-variable = wait_timeout=300

[mysqldump]
quick
set-variable = max_allowed_packet=16M

[mysql]
no-auto-rehash

[myisamchk]
set-variable = key_buffer=32M
set-variable = sort_buffer_size=48M

Any new ideas? Seth

Seth Northrop writes:
> 010624 19:44:23 Aborted connection 231 to db: 'unconnected' user: 'root' host: `localhost' (Got timeout reading communication packets)

The above only implies that your program exited without calling mysql_close(). Nothing to do with your problem. But, as I told you, you did not quote string constants. And shoot out your resulting string to some file for inspection. That might help you find your error.
Re: Got timeout reading communication packets
Slight correction in the query... The query just looks like: REPLACE into reflectivity.measurement_extended_data set content_typeID = 7, contentID = 121529, arrayID = 0, positionID = 1, testID = 1, iteration = 0, test_locationID = 1, date_added = 993165848, added_by = 18, data = '396 601 12011801,0 191 191 191 191 190 189 188 188 188 188 188 187 191 191 191 191 192 snip/snip'
Re: Got timeout reading communication packets
> The above only implies that your program exited without calling mysql_close(). Nothing to do with your problem.

But.. why would the console be reporting this? Ie, I'm getting the same result (same error in the error log, same truncation) when I manually pipe the query to the mysql console. (Note that the errors are coming from localhost; the actual script is run from a client.) Shouldn't a direct console query terminate the connection to the database after proper completion?

> But, as I told you, you did not quote string constants. And shoot out your resulting string to some file for inspection. That might help you find your error.

See previous emails.. I've been doing this.. the query looks sane; no \0 characters, no odd binary data, nothing out of whack where it truncates (ie, no renegade ' etc.). Because running the query through the console with no intervention of the C API causes the same result, I'm finding it difficult to believe it's an issue with the code itself, unless it has something to do with the string itself (ie.. there is some unknown character in there). But that seems unlikely since the code to generate the query is fairly straightforward.. just dump a bunch of tab-delimited ints, groups of which are comma-delimited, into a string. The query just looks like: REPLACE into reflectivity.measurement_extended_data set content_typeID = 7, contentID = 121529, arrayID = 0, positionID = 1, testID = 1, iteration = 0, test_locationID = 1, date_added = 993165848, added_by = 18, data = '396 601 12011801,0 191 191 191 191 190 189 188 188 188 188 188 187 191 191 191 191 192 snip/snip' It just happens to be several MB large. Or... are we just having a huge disconnect here? Thanks! Seth
Re: how to prevent inserting duplicate rows?
how do I OVERWRITE the previous entry in the table? ie. is there a SQL command like INSERT, but if a duplicate is found, overwrite with the new value? See REPLACE INTO tablename http://www.mysql.com/doc/R/E/REPLACE.html Take care, seth
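The semantics of MySQL's REPLACE INTO are easy to demonstrate: a row whose unique key conflicts is deleted, then the new row is inserted. A small sketch in Python using sqlite3, whose INSERT OR REPLACE behaves analogously for this case (SQLite standing in here for MySQL; the kv table is hypothetical):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)')

def put(k, v):
    # SQLite's INSERT OR REPLACE mirrors MySQL's REPLACE INTO:
    # a row with a conflicting unique key is removed, then the new
    # row is inserted -- so a second put() on the same key overwrites.
    db.execute('INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)', (k, v))
```

Note that because the conflicting row is deleted and re-inserted, any columns not listed in the statement fall back to their defaults rather than keeping their old values.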
Got timeout reading communication packets
Hi! I posted this last week, but haven't seen any replies yet, and, generally, there seems to be a lack of replies in the archives on the "Got timeout reading communication packets" related issues. Here is the issue again:

We have started doing some rather large REPLACE INTOs and INSERTs (~3MB) using a client written in C. However, these queries are truncating at around 100k - meaning, it inserts about 100k or so of the blob field.. but stops and truncates out the rest of the query. There is no error returned by the connection, but the error log does have errors such as these:

010323 11:15:19 Aborted connection 45 to db: 'unconnected' user: 'root' host: `localhost' (Got timeout reading communication packets)

The same occurs if I pipe the query directly to mysql from the command line. No error, but the data field only has about 100k of the data there before it just stops. The table looks like this:

# MySQL dump 8.13
#
# Host: localhost    Database: reflectivity
#
# Server version    3.23.35
#
# Table structure for table 'measurement_extended_data'
#
CREATE TABLE measurement_extended_data (
  content_typeID tinyint(3) unsigned NOT NULL default '0',
  contentID int(10) unsigned NOT NULL default '0',
  arrayID smallint(5) unsigned NOT NULL default '0',
  positionID mediumint(8) unsigned NOT NULL default '0',
  testID smallint(5) unsigned NOT NULL default '0',
  iteration smallint(5) unsigned NOT NULL default '0',
  test_locationID smallint(5) unsigned NOT NULL default '0',
  date_added int(10) unsigned NOT NULL default '0',
  added_by mediumint(8) unsigned NOT NULL default '0',
  data blob NOT NULL,
  PRIMARY KEY (content_typeID,contentID,arrayID,positionID,testID,iteration),
  KEY test_locationID (test_locationID),
  KEY testID (testID)
) TYPE=MyISAM;

Smaller queries work fine. It's just when they seem to be over 1 or 2MB. /etc/my.cnf looks like this:

# MySQL Configuration File
#
# One can use all long options that the program supports. 
# Run the program with --help to get a list of available options

# This will be passed to all mysql clients
[client]
port = 3306
socket = /tmp/mysql.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
set-variable = key_buffer=256M
set-variable = max_allowed_packet=7096128
set-variable = thread_stack=128K
set-variable = back_log=50
set-variable = max_connections=512
set-variable = tmp_table_size=15M
set-variable = table_cache=6000
set-variable = sort_buffer=3584K
set-variable = join_buffer=512K
set-variable = connect_timeout=2
set-variable = record_buffer=1M
set-variable = flush_time=900
set-variable = wait_timeout=300

[mysqldump]
quick
set-variable = max_allowed_packet=16M

[mysql]
no-auto-rehash

[myisamchk]
set-variable = key_buffer=32M
set-variable = sort_buffer_size=48M

As you can see, we bumped up max_allowed_packet because we were getting "server has gone away" errors.

- Box is a 1GHz Xeon w/ Red Hat 7.2 and 1GB RAM
- MySQL version is: 3.23.35
- Query is a fairly straightforward INSERT or REPLACE INTO and is verified as clean.

Am I missing a configuration directive? Is something else up? Thanks! Seth
Re: Got timeout reading communication packets
> You are possibly not escaping binary fields. You should use mysql_escape_string() function on binary objects prior to inserting it in the INSERT command, or use load_file() function.

Thanks Sinisa for the reply! There really isn't any binary data.. and we are calling mysql_escape_string.. here is the snippet of code:

// START SNIP
int state;
unsigned int encoded_str_length = 0;
MYSQL_RES *result;
MYSQL_ROW row;
char *sql, *escaped_data;

escaped_data = new char[2*strlen(data)+1];   // required by mysql
sql = new char[2*strlen(data)+600];

encoded_str_length = mysql_escape_string(escaped_data, data, strlen(data));

sprintf(sql, "%s%d%s%d%s%d%s%d%s%d%s%d%s%d%s%d%s%d%s%s%s",
        "REPLACE into reflectivity.measurement_extended_data set content_typeID = ", content_typeID,
        ", contentID = ", contentID,
        ", arrayID = ", arrayID,
        ", positionID = ", positionID,
        ", testID = ", testID,
        ", iteration = ", iteration,
        ", test_locationID = ", test_locationID,
        ", date_added = ", date_added,
        ", added_by = ", added_by,
        ", data = '", escaped_data, "'");

state = mysql_query(connection, sql);
delete [] sql;
delete [] escaped_data;

if (state != 0) {
    Application->MessageBox(mysql_error(connection), NULL, MB_OK);
    return 0;
}
// END SNIP

And, as I mentioned, if I do something like:

bin/mysql -u -p database < foo.sql

where foo.sql contains the query (checked for sanity), the exact same result occurs.
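The 2*strlen(data)+1 buffer in the snippet above comes from mysql_escape_string's documented worst case: every input byte may escape to two output bytes, plus a terminating NUL. A small Python model of that bound; this is illustrative, not the real C library, and the character set matches the bytes mysql_escape_string rewrites:

```python
# Bytes the MySQL C API rewrites to two-byte backslash sequences.
SPECIAL = {'\0': '\\0', '\n': '\\n', '\r': '\\r',
           '\\': '\\\\', "'": "\\'", '"': '\\"', '\x1a': '\\Z'}

def escape(s):
    """Model of MySQL C-API string escaping (illustrative only).

    Each special byte doubles in length, so len(out) <= 2*len(s);
    that is why the C code allocates 2*strlen(data)+1 (+1 for NUL).
    """
    return ''.join(SPECIAL.get(ch, ch) for ch in s)
```

For tab-delimited digits like the thread's payload, no byte is special, so escaping cannot be what pushes the query past any length limit; that is consistent with the author's observation that the truncation point contains nothing abnormal.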
Re: Got timeout reading communication packets
> At a small glance, your code looks OK. Try looking at the resulting string sql in gdb or just print it out to stdout. That might catch your bug.

Thanks again for the reply. The query looks sane.. As I mentioned, I spit it out to a file and piped it directly to mysql (after inspecting it). At least where it truncates there isn't anything abnormal (the blob is merely a tab-delimited string of ints) and the same truncation occurs.

> MySQL command lengths are limited only by max_allowed_packet, which can be extended up to 16 Mb in 3.23 and 4 Gb in 4.0.

Nodz.. that's what I thought. This is our setting:

set-variable = max_allowed_packet=7096128

Confirmed in variables:

| max_allowed_packet | 7095296 |

It's definitely not that big, and we aren't getting the standard expected errors when you exceed packet size (such as "mysql server has gone away"). The only clue we have is this:

010624 19:44:23 Aborted connection 231 to db: 'unconnected' user: 'root' host: `localhost' (Got timeout reading communication packets)
Re: Help me HOw to load Images or pictures into MYSQL database
> Please inform me how to load IMAGES INTO MYSQL TABLES.

Having created several similar applications in the past I would recommend NOT storing these images IN the database; your filesystem makes for a nice blob storage device. I would instead store pointers of some sort to the files to minimize DB I/O and storage requirements. But, if you must, just write the data to a blob field. Take care, seth
Re: Is there any file .frm, .MYD, .MID on mySQL in HPUX Plateform?
Yes. Though having never installed mysql on HP-UX, my presumption is that it acts much like all other unices and stores them in a ./data directory. For example, in the binary distribution this is typically something like /usr/local/mysql/data (or, unpacked directory/data). Take care, seth
Re: Startup Question
> in this directory is the ./mustang-bin.index and also ./mysql/most.frm however I am getting the error messages below when I try and start it. I have never had this problem before. Does anyone have any ideas?

Your mysqld doesn't know where your files are (though, that was probably obvious ;). Try setting the -h (--datadir) flag to force it to that location and see if that works. If that doesn't, you have a permissions issue, I would presume. As for the log, check the path in /etc/my.cnf and verify that it is correct - else, again, it could be a permissions issue.

Personally though, I would install the binary distro on linux. I believe (correct me if I'm wrong, others) it at least used to be noticeably faster than trying to compile it yourself, given you couldn't match Monty's compiler configuration - not to mention it's a whole lot less of a pain to manage than the source distro. Just untar and go.
Re: passing variables to/from flash to/from php
> Does anyone know how to pass a variable from a page embedded with a .swf (flash) object to a .php file for processing, and pass back the results to the same page containing the .swf file? Thanx in advance!

Perhaps not the most topical discussion for the mysql list, but, alas, I would recommend checking out the Ming swf functions in PHP. http://www.php.net/manual/en/ref.ming.php Take care, seth
Re: help with index
> whats the difference between: alter table tablename add index (name); alter table tablename add index (age); alter table tablename add index (birthdate); alter table tablename add index (city);

These are four INDIVIDUAL keys.. helpful if you want to search by name, age, birthdate, _OR_ city.

> compared to: alter table tablename add index (name,age,birthdate,city);

This is a compound key. Helpful usually if you want to span a UNIQUE entity across multiple columns. Ie, if you wanted to have a unique key (say a primary key) without having a distinct unique column (like a counter). The key you would be creating is essentially name+age+birthdate+city. Since you can query on the prefix of a key, a query on (name), (name, age), (name, age, birthdate), or (name, age, birthdate, city) would be optimized. HOWEVER, a query on age, birthdate, or city alone (or any combination which doesn't follow the prefix order of the key) would NOT.

> What I want is performance/speed in doing the following: select * from tablename where name = 'bill'; select * from tablename where age > 30; select * from tablename where city = 'new york';

You would thus want the former, individual keys. Take care, seth
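The leftmost-prefix rule described above is observable with any planner that supports compound indexes. A small sketch using Python's sqlite3, whose compound indexes follow the same prefix rule (SQLite standing in for MySQL; table, columns, and index name are made up for the demo, and the extra notes column just keeps the index from covering the whole SELECT):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE people '
           '(name TEXT, age INT, birthdate TEXT, city TEXT, notes TEXT)')
# One compound key: usable for WHERE clauses on a *leftmost prefix*
# of (name, age, birthdate, city) -- but not on city alone.
db.execute('CREATE INDEX idx_compound ON people (name, age, birthdate, city)')

def plan(sql):
    """Return the query plan text (SQLite's analogue of MySQL EXPLAIN)."""
    rows = db.execute('EXPLAIN QUERY PLAN ' + sql).fetchall()
    return ' '.join(str(r) for r in rows)
```

Querying by name (the first index column) shows the index in the plan; querying by city alone falls back to a table scan, which is exactly why the poster's three single-column WHERE clauses want three individual keys.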
Re: change table name?
how do i change a table's name without recreating it? See: http://www.mysql.com/doc/A/L/ALTER_TABLE.html ALTER TABLE tbl_name RENAME TO new_tbl_name Take care, seth
Re: Repairing Tables
> Is there a way around this error occurring after every outage?

Eeek! Are you running myisamchk -r? - obviously, there isn't really much of a way to avoid having to repair tables after a hard OS crash. I would highly recommend trying to get ahold of something like an APC battery supply which you can monitor on the serial port - then you can safely shut down mysql and shut down the server before you run out of battery power. This of course assumes you can get a hold of a good quality UPS there at a reasonable price.
Re: help with index
Just make sure you have a key on any field you do a query on. Given that there isn't a whole lot of correlation between your columns (logically speaking.. ie, age doesn't really match up with city) I wouldn't worry about compound keys.. they'll actually just slow you down since you need to maintain larger keys that don't offer much advantage. The only exception to this might be if you commonly have a query which takes the form you can represent with a compound key (ie.. you always grab a row by age and city).. but even then I don't think you will gain much over two individual keys. Take care, seth

On Fri, 22 Jun 2001, Jaime Teng wrote:
> whats the difference between: alter table tablename add index (name); alter table tablename add index (age); alter table tablename add index (birthdate); alter table tablename add index (city);
> These are four INDIVIDUAL keys.. helpful if you want to search by name, age, birthday, _OR_ city
> Will I get any performance if I were to do some complex query like: select * from tablename where name = 'BILL' and city = 'new york'; select * from tablename where age > 30 and city = 'seattle'; select * from tablename where age > 30 and name = 'JOHN'; or do you suggest that I add the following on top of the previous indexes? alter table tablename add index (name,city); alter table tablename add index (age,city); alter table tablename add index (age,name); etc... thanks Jaime
Re: Database MySQL: myisamchk and isamchk error message
I was running myisamchk and isamchk on my databases and I got the following error: warning: 1 clients is using or hasn't closed the table properly. You aren't running (my)isamchk while mysqld is running, are you? If so, you shouldn't be - or at least not on tables that are open/in use and haven't been flushed and locked.
Re: Knowing if a table exits
desc tablename does the trick from the console or via a query. Checking for the datafiles is another way if you are local to the server; though, that's not really a sane technique, since you usually have to have the permissions of the database to see the datafiles - privileges you probably don't want your script to have.
Re: what does this mean please?
ERROR 2002: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) - and how do you fix it? It means you have no local socket for mysql to connect through. Meaning, something/someone either deleted or mangled /tmp/mysql.sock, or mysqld isn't running to begin with. Try connecting with the --host flag (so the client won't use the socket), then shut down and restart mysqld - hopefully this will recreate mysql.sock. Take care, seth
Re: RAID advice : (fwd)
Sorry for the delayed reply - the list marked my reply as spam! ;) -- Forwarded message -- Date: Fri, 22 Jun 2001 02:48:14 -0700 (PDT) From: Seth Northrop [EMAIL PROTECTED] To: Wouter de Jong [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: RAID advice : Let's say your OS crashes (Linux... bad libraries, for example, that are not resolvable :), then you'll have to format all your disks, including your data, to replace the OS. I'm missing the link here between the OS crashing and having to reinitialize and rebuild the RAID array. Ultimately, you want to avoid single points of failure. Having the OS on a non-redundant disk seems like a pretty big one. If that disk goes bad (a much higher probability than linux crashing and destroying your disks in a flaming explosion), then your database is down. If it's on the RAID array, then you swap a new disk in and have zero downtime (assuming you can hot swap). You could certainly keep your / partition separate - this is generally a good idea anyway; but I see no advantage to keeping the OS off the RAID array.
Large Replace/Inserts Truncating.
Hi! We have started doing some rather large replace intos and inserts (~3MB) using a client written in C. However, these queries are truncating at around 100k. There is no error returend by the connection, but, the error log does have errors such as these: 010323 11:15:19 Aborted connection 45 to db: 'unconnected' user: 'root' host: `localhost' (Got timeout reading communication packets) The same occurs if I pipe the query directly to mysql from the command line. No error, but the data field only has about 100k of the data there before it just stops. The table looks like this: # MySQL dump 8.13 # # Host: localhostDatabase: reflectivity # # Server version3.23.35 # # Table structure for table 'measurement_extended_data' # CREATE TABLE measurement_extended_data ( content_typeID tinyint(3) unsigned NOT NULL default '0', contentID int(10) unsigned NOT NULL default '0', arrayID smallint(5) unsigned NOT NULL default '0', positionID mediumint(8) unsigned NOT NULL default '0', testID smallint(5) unsigned NOT NULL default '0', iteration smallint(5) unsigned NOT NULL default '0', test_locationID smallint(5) unsigned NOT NULL default '0', date_added int(10) unsigned NOT NULL default '0', added_by mediumint(8) unsigned NOT NULL default '0', data blob NOT NULL, PRIMARY KEY (content_typeID,contentID,arrayID,positionID,testID,iteration), KEY test_locationID (test_locationID), KEY testID (testID) ) TYPE=MyISAM; Smaller queries work fine. It's just when they seem to be over 1 or 2MB. /etc/my.cnf looks like this: # MySQL Configuration File # # One can use all long options that the program supports. 
# Run the program with --help to get a list of available options # This will be passed to all mysql clients [client] port= 3306 socket = /tmp/mysql.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # The MySQL server [mysqld] port= 3306 socket = /tmp/mysql.sock skip-locking set-variable= key_buffer=256M set-variable= max_allowed_packet=7096128 set-variable= thread_stack=128K set-variable= back_log=50 set-variable= max_connections=512 set-variable= tmp_table_size=15M set-variable= table_cache=6000 set-variable= sort_buffer=3584K set-variable= join_buffer=512K set-variable= connect_timeout=2 set-variable= record_buffer=1M set-variable= flush_time=900 set-variable= wait_timeout=300 [mysqldump] quick set-variable= max_allowed_packet=16M [mysql] no-auto-rehash [myisamchk] set-variable= key_buffer=32M set-variable= sort_buffer_size=48M As you can see we bumped up the max_allowed_packet because we were getting server has gone away errors. - Box is a 1Ghz XEON /w Redhat 7.2 and 1GB RAM - MySQL version is: 3.23.35 - Query is a fairly straight forward insert or replace into and is verified as clean. Am I missing a configuration directive? Is something else up? Thanks! Seth - Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
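One thing worth double-checking against the dump above: a plain blob column stores at most 65,535 bytes, so multi-megabyte values will be cut off no matter how large max_allowed_packet is; mediumblob raises the per-value limit to 16MB. A sketch of the change (this is a suggested diagnosis based on the column type shown, not a confirmed fix for this report):

```sql
-- blob caps at 64KB per value; mediumblob stores up to 16MB
ALTER TABLE measurement_extended_data MODIFY data mediumblob NOT NULL;
```

Separately, since the error log shows "Got timeout reading communication packets", it may also be worth raising the network timeouts in /etc/my.cnf for large transfers, e.g. set-variable = net_read_timeout=120 and set-variable = net_write_timeout=120 (values illustrative, not tuned recommendations).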
Limiting connections per database
Just wondering if there is a way to limit the number of connections to a particular database without starting up another server to run that database specifically.
Borland C++ Builder (reading data).
Caveat: I'm not a seasoned Borland C++ Builder programmer, so this may very well be just plain stupidity on my part (I'm not even a really good C programmer, either!). However, given some code that looks similar to this:

int state;
MYSQL_RES *result;
MYSQL_ROW row;
char sql[250];

sprintf(sql, "%s%s%s%s",
        "SELECT positionID FROM reflectivity.positions where position_row = ",
        position_row,
        " and position_column = ",
        position_column);
state = mysql_query(connection, sql);
if (state != 0) {
    printf("%s", mysql_error(connection));
    return 0;
}
result = mysql_store_result(connection);
// while ((row = mysql_fetch_row(result)) != NULL)
row = mysql_fetch_row(result);
if (mysql_num_rows(result) > 0) {
    // free some memory
    mysql_free_result(result);
    return (int) atoi(row[0]);
} else {
    // free some memory
    mysql_free_result(result);
    return 0;
}

in C using gcc on linux it works. I get back a valid int which corresponds to the database entry - no warnings, no errors that I can see. In Borland C++ Builder, although I connect to the database (i.e., I can WRITE fine to the database, and there are no complaints from my connection), and if (mysql_num_rows(result) > 0) returns true (meaning it sees the row(s)), I get: raised exception class EAccessViolation with message "Access Violation at address 3256EEFF in module cc3250mt.dll read of address FEEFEE". Process stopped. row[0], however, does NOT match against NULL or '\0' (though if I print it out it shows nothing) when I attempt to reference or work with row[0]. This has been the case whenever I try to read an element of row under Borland (various other functions as well), whereas the exact same code (in all instances) works fine when compiled with gcc. Again, though, any WRITES to the database work great under Borland (so I don't think it's a connection issue). Am I just totally missing some boat here? Thanks for any help!
Seth
Re: DBD and locking
What if the queries were updates instead of inserts? Would I need, or be able, to lock the tables then? Can I go without locking the BDB tables and not worry about data integrity on those tables during updates and inserts? -Seth Hall Student Programmer Ohio State University Main Library On 24 May 2001 16:54:42 -0700, Jeremy Zawodny wrote: On Thu, May 17, 2001 at 12:35:09PM -0400, Seth Hall wrote: Hi, could someone point me to a tutorial on when to do table locking with the BDB tables in MySQL? Why? I'm running version 3.23.37 on RH7.0 (with BDB tables :) and if I attempt to do locking within a transaction, the transaction is automatically committed, leaving me with non-working transaction code. So, what I'm wondering is: do I even need to lock the tables (I'm doing inserts and updates)? If I do, have there been any bugs with locking inside transactions? BTW, if I don't do the table write locking, the code works fine. this works: begin work; insert into Table (field1, field2) VALUES (2,'hi'); rollback; this doesn't - it's committed anyway: begin work; lock tables Table write; insert into Table (field1, field2) VALUES (2,'hi'); unlock tables; rollback; Why is the lock necessary at all? Jeremy -- Jeremy D. Zawodny, [EMAIL PROTECTED] Technical Yahoo - Yahoo Finance Desk: (408) 349-7878 Fax: (408) 349-5454 Cell: (408) 439-9951 MySQL 3.23.29: up 5 days, processed 35,049,645 queries (79/sec. avg)
Re: DBD and locking
Is locking not necessary on BDB tables? Does transaction support get rid of the need for table locking? thanks, -Seth On Thu, May 17, 2001 at 12:35:09PM -0400, Seth Hall wrote: Hi, could someone point me to a tutorial on when to do table locking with the BDB tables in MySQL? Why? I'm running version 3.23.37 on RH7.0 (with BDB tables :) and if I attempt to do locking within a transaction, the transaction is automatically committed, leaving me with non-working transaction code. So, what I'm wondering is: do I even need to lock the tables (I'm doing inserts and updates)? If I do, have there been any bugs with locking inside transactions? BTW, if I don't do the table write locking, the code works fine. this works: begin work; insert into Table (field1, field2) VALUES (2,'hi'); rollback; this doesn't - it's committed anyway: begin work; lock tables Table write; insert into Table (field1, field2) VALUES (2,'hi'); unlock tables; rollback; Why is the lock necessary at all? Jeremy -- Jeremy D. Zawodny, [EMAIL PROTECTED] Technical Yahoo - Yahoo Finance Desk: (408) 349-7878 Fax: (408) 349-5454 Cell: (408) 439-9951 MySQL 3.23.29: up 5 days, processed 35,049,645 queries (79/sec. avg)
C API; queries within functions.
Hi! This is probably more of a C inadequacy than a MySQL problem. I'm playing around with C with MySQL (to date I have only interfaced with it from PHP) and I was curious if anyone had any example code which illustrates how you would write functions which return mysql data. For example, in PHP I might write a simple function like this:

function get_specific_db_column($dbname, $tblname, $pk, $pk_val, $colname)
{
    $sql = "SELECT $colname as val FROM $dbname.$tblname WHERE $pk = '$pk_val'";
    $r = mysql_query($sql);
    while ($row = mysql_fetch_array($r))
        $array[] = $row;
    return $array;
}

But in C I haven't figured out how to do things like pass the MYSQL * pointer from mysql_init into a function for mysql_query etc. I can, however, get a program WITHOUT functions (i.e., with the query hardcoded) compiled and working:

// Simple test application for the C MySQL API
#include <sys/time.h>
#include <stdio.h>
#include <mysql.h>

#define def_host_name NULL
#define def_user_name "root"
#define def_password  NULL
#define def_db_name   "reflectivity"

int main(int argc, char **argv)
{
    MYSQL_RES *result;
    MYSQL_ROW row;
    MYSQL *connection, mysql;
    int state;

    // connect to the mysql database on internal
    mysql_init(&mysql);
    connection = mysql_real_connect(&mysql,
                                    def_host_name,
                                    def_user_name,
                                    def_password,
                                    def_db_name,
                                    0,    /* port default   */
                                    NULL, /* socket default */
                                    0);   /* flags          */
    if (connection == NULL) {  // check for a connection error
        // print the error message
        printf("%s\n", mysql_error(&mysql));
        return 1;
    }
    state = mysql_query(connection, "SELECT * from reflectivity.accounts");
    if (state != 0) {
        printf("%s\n", mysql_error(connection));
        return 1;
    }
    // you must call mysql_store_result before issuing anything else
    result = mysql_store_result(connection);
    printf("Rows: %lu\n", (unsigned long) mysql_num_rows(result));
    // process each row in the result set
    while ((row = mysql_fetch_row(result)) != NULL)
        printf("%s - %s - %s - %s - %s\n",
               row[0], row[1], row[2], row[3], row[4]);
    // free some memory
    mysql_free_result(result);
    // close the mysql connection
    mysql_close(connection);
    printf("Done.\n");
    return 0;
}

My guess is that if I just see a couple of examples which mimic some of the functionality of the above PHP function in C, I'll be able to write them without a problem. Any pointers on where to look? Thanks! Seth
DBD and locking
Hi, could someone point me to a tutorial on when to do table locking with the BDB tables in MySQL? I'm running version 3.23.37 on RH7.0 (with BDB tables :) and if I attempt to do locking within a transaction, the transaction is automatically committed, leaving me with non-working transaction code. So, what I'm wondering is: do I even need to lock the tables (I'm doing inserts and updates)? If I do, have there been any bugs with locking inside transactions? BTW, if I don't do the table write locking, the code works fine. this works: begin work; insert into Table (field1, field2) VALUES (2,'hi'); rollback; this doesn't - it's committed anyway: begin work; lock tables Table write; insert into Table (field1, field2) VALUES (2,'hi'); unlock tables; rollback; Thanks, Seth Hall Student Programmer Ohio State University Main Library