RE: Blob data
Rick is dead-on correct; I call it chunking blob data. There is an article here on a simple implementation: http://www.dreamwerx.net/phpforum/?id=1 I've had hundreds of thousands of files in this type of storage before with no issues.

On Tue, 3 Jul 2007, Rick James wrote:

I gave up on putting large blobs in MySQL -- too many limits around 16MB. Instead I broke blobs into pieces, inserting them with a sequence number. Added benefit: it does not clog up replication while a huge single insert is being copied over the network and re-executed on slaves.

-----Original Message----- From: Paul McCullagh [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 27, 2007 2:57 AM To: Ann W. Harrison Cc: MySQL List; MySQL Internals Subject: Re: Blob data

Hi Ann, Currently the thoughts on how to make the BLOB references secure go like this: the BLOB reference consists of 2 components. The first component is basically an index used to find the BLOB on the server. The second component is a random number generated when the BLOB is created. The random number acts as an authorization code and is checked when the BLOB is requested. So if the authorization code supplied in the BLOB reference does not match the code stored by the server for that BLOB, then the BLOB is not returned.

If the authorization code is a 4-byte number, then the chance of getting the correct code for any particular BLOB is 1 in 4 billion. This makes it practically impossible to discover a BLOB by generating BLOB references and requesting them from the server. However, it does mean that once you have a valid BLOB reference it remains valid until the BLOB is deleted. So you can pass it around to your friends, or post it on the internet if you like. In order to prevent this (whether this is required will depend on the site), it would be possible to add a dynamic component to the BLOB reference which has a certain lifetime (for example, it expires after a certain amount of time, or when a database session is closed).
Such a component would have to be added to the BLOB reference URL by the storage engine on the fly. So, as the SELECT result is being generated, the dynamic component is added to the BLOB references returned in the rowset. Security of the BLOB streaming stuff is one of the major issues, so further comments, questions and ideas are welcome! Best regards, Paul

On Jun 26, 2007, at 4:36 PM, Ann W. Harrison wrote:

Paul McCullagh wrote: It will also be possible to store the BLOBs out-of-row. In this case, only a BLOB reference is stored in the row. The reference is basically a URL which can be used to retrieve the data. So when you do an SQL SELECT which includes a BLOB column, the resulting rowset does not contain the data, just the BLOB reference (URL).

How does this work with access privileges? Can you just send random numbers in the URL until you start seeing blob data? Best regards, Ann

-- MySQL Internals Mailing List For list archives: http://lists.mysql.com/internals To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
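A minimal sketch of the chunking scheme Rick and the poster describe: split the blob into fixed-size pieces, store each piece as its own row keyed by a sequence number, and reassemble by ordering on that number. (Python here for illustration; the table/column names in the comments are hypothetical, not taken from the linked article.)

```python
import io

CHUNK_SIZE = 64 * 1024  # one row per 64 KB piece; keep well under max_allowed_packet

def split_blob(data, chunk_size=CHUNK_SIZE):
    """Yield (sequence_number, piece) pairs, one per row to insert."""
    buf = io.BytesIO(data)
    seq = 0
    while True:
        piece = buf.read(chunk_size)
        if not piece:
            break
        yield seq, piece
        seq += 1

def join_blob(rows):
    """Reassemble by ordering on the sequence number."""
    # e.g. rows fetched with: SELECT seq, data FROM filedata WHERE file_id = ...
    return b"".join(piece for _, piece in sorted(rows))
```

Each yielded pair would become one small INSERT instead of a single huge statement, which is what keeps replication and the packet limit happy.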
RE: Blob data
Interesting, never tried compressing the data; sounds like that might be a nice add-on. Do you have any performance numbers you can share? I posted some performance numbers on one of my implementations some time ago. I found the thread here: http://lists.mysql.com/mysql/206337

On Tue, 3 Jul 2007, Rick James wrote:

And while you are at it, you may as well compress the chunks. Your machine can probably compress/uncompress faster than it can write/read disk. I use Perl's Compress::Zlib or PHP's equivalent instead of MySQL's function for 2 reasons: * The network traffic is compressed. * MySQL puts an unnecessary extra byte on the end of the string (ok, this is totally insignificant).

And definitely compress each chunk separately. It seems that those library routines slow down (excessive memory realloc??) after about 50K. That is, you can probably compress 20 50K chunks faster than one 1M chunk. My implementation did File - Database -- the huge blob was never instantiated completely in RAM, only one chunk at a time. (Imagine trying to store a 50GB file.)

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
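Rick's per-chunk compression can be sketched like this, with Python's zlib standing in for Perl's Compress::Zlib or PHP's equivalent. The ~50K chunk size is his observation, not a hard limit; the key property is that each chunk compresses and decompresses independently.

```python
import zlib

CHUNK_SIZE = 50 * 1024  # stay below the ~50K point where Rick saw the libraries slow down

def compress_chunks(data, chunk_size=CHUNK_SIZE):
    """Compress each chunk independently, so any one chunk can be restored alone."""
    return [zlib.compress(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def decompress_chunks(chunks):
    """Inflate each stored chunk and concatenate in order."""
    return b"".join(zlib.decompress(c) for c in chunks)
```

Compressing on the client side, as Rick notes, also means the bytes crossing the network to the server are already compressed.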
Re: advice for blob tables?
I would love to see an implementation with 1 row for large data that works well. The main issues I had were that MySQL has a default max packet size limit (I think it was around 16MB in MySQL 3.23 and 1GB in MySQL 4; not sure about v5). A lot of people don't have control over those settings in their environment. If you try to do a single insert/query larger than that, the query will fail. Also it was causing the webserver to buffer a large amount of data.

Here's some numbers to tickle your fancy. I downloaded a file from the ftp gateway to a storage implementation I've done (production):

FTP CLIENT -- NETWORK FRONTEND VLAN (100FDX) -- FTP SERVER (JAVA) FRONTEND -- NETWORK BACKEND VLAN (100FDX) -- MYSQL STORAGE NODE

I grabbed an approximately 230MB file from a different box (on the same frontend vlan) in 40 seconds, at a speed of about 5800K/sec. Also I logged into the storage node to see how mysql was doing:

5179 mysql 15 0 12528 4644 2108 S 10.8 0.5 0:04 mysqld

It was bouncing between 9% and 19% during the transfer, and the machine is a 2 x P3/933. There's very little chance a real internet user can download at that type of speed, so a single mysql storage node can handle a large amount of concurrent users/activity.

[EMAIL PROTECTED]:~$ ftp ftp.dbfs
Connected to ftp.dbfs
220 DBFS.
Name (ftp.dbfs): XX
331 User name okay, need password for XX
Password:
230 User logged in, proceed
Remote system type is UNIX.
ftp> bin
200 Command TYPE okay
ftp> cd XX/filez/apps
200 Directory changed to /XXx/filez/apps
ftp> get Visio2K3.exe
local: Visio2K3.exe remote: Visio2K3.exe
200 Command PORT okay
150 File status okay; about to open data connection
226 Closing data connection
235075910 bytes received in 39.9 secs (5.8e+03 Kbytes/sec)
ftp>

Rules for testing:
- it must be a double jump (as in a real-world implementation)
- you must push the data between the webserver and mysql server over a tcp/ip network
- you must download the data from the webserver from another box via tcp/ip (no loopback/local socket connections - that does not clearly represent a real scalable solution)

Looking forward to some numbers.

On Wed, 25 Apr 2007, Kevin Waterson wrote: This one time, at band camp, [EMAIL PROTECTED] wrote: I don't feel the implementation direction this article takes is good. It uses single-row binary storage, which anyone who has had to deal with large files knows is a definite issue.

According to your method of storing binary data in BLOB fields rather than LONGBLOBs, you would have over 11,000 rows for a single 700MB iso. I am not sure how MySQL handles blobs internally, but I would assume it is like most other databases and adds some sort of delimiter to the data and has to seek to it to distinguish columns. This means a 700MB file stored in 64k chunks would need over 11,000 file seeks per row read. LONGBLOBs are the way forward here, I feel. A single table containing the relevant metadata along with the binary data. There is no need for any normalization as this is a one-to-one relationship. Having over 11,000 rows for a single file... I dunno, I would like to see some benchmarks on this. Kevin -- Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote.
-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
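The "webserver buffering a large amount of data" problem above is exactly what chunked storage avoids: the file is walked one piece at a time, so memory use stays at one chunk no matter how big the file is. A sketch of that walk (the INSERT in the comment is pseudo-SQL with hypothetical table/column names):

```python
import io

def iter_file_chunks(fileobj, chunk_size=64 * 1024):
    """Walk a file handle one chunk at a time; RAM use stays at one chunk."""
    seq = 0
    while True:
        piece = fileobj.read(chunk_size)
        if not piece:
            return
        yield seq, piece
        seq += 1
        # each pair would become one statement, e.g.:
        #   INSERT INTO filedata (file_id, seq, data) VALUES (%s, %s, %s)

# Demo with an in-memory "file"; a real open(path, "rb") handle works the same:
demo = list(iter_file_chunks(io.BytesIO(b"a" * 200_000), chunk_size=65536))
```

Because the generator never holds more than one chunk, the same loop works for a 230MB download or Rick's hypothetical 50GB file.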
Re: advice for blob tables?
I don't feel the implementation direction this article takes is good. It uses single-row binary storage, which anyone who has had to deal with large files knows is a definite issue.

On Sat, 21 Apr 2007, Kevin Waterson wrote: This one time, at band camp, Michael Higgins [EMAIL PROTECTED] wrote: So before I just jump in and go like I usually do, does anyone have any quick advice, things to consider, links, must-RTFMs or the like to help a newbie out?

This tutorial deals with images and BLOBs. It should get you on the right path. http://www.phpro.org/tutorials/Storing-Images-in-MySQL-with-PHP.html Kevin -- Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: advice for blob tables?
Here's a good PHP implementation; you can implement the concept in any language you like: http://www.dreamwerx.net/phpforum/?id=1

On Fri, 20 Apr 2007, Michael Higgins wrote: Hello, all -- I want to set up a database for document storage. I've never worked with binary files stored in tables. So before I just jump in and go like I usually do, does anyone have any quick advice, things to consider, links, must-RTFMs or the like to help a newbie out? I feel like I need to become a bit more expert in database design, but I simply don't really know where to start. Like, what 'engine' should I use in this case? I haven't a clue. Thanks! -- Michael Higgins

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: binary into blob
Here's a great article on how to store PDF (or any other binary data) as blob chunks: http://www.dreamwerx.net/phpforum/?id=1

On Wed, 7 Mar 2007, Jay Pipes wrote: Ed wrote: Hi All, I'm trying to figure out how to put a pdf file into a blob field. I guess a pdf file is a binary file and it will contain characters that will mess things up, so my question is: can it be done? Or better, how can it be done? ;) Any pointers to documentation are a bonus!

Is there a specific reason you want to store this in a database? Why not use the local (or networked) file system and simply store the metadata about the PDF in the database? Cheers, Jay

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: binary into blob
I have to disagree with most; I would store the entire file in the database, metadata and all. Better security: if you have a backend database, it's much harder to get at the data than PDFs sitting in a directory on the webserver. Plus if you ever want to scale to a multi-webserver environment, the db storage works great.

On Wed, 7 Mar 2007, Ed wrote: On Wednesday 07 March 2007 21:26, Alexander Lind wrote: I would put the pdf as a regular file on the hd, and store the path to it in the db. Meta data could be things like the size of the pdf, author, owner, number of pages etc. Storing binary data from PDFs or images or any other common binary format is generally not a good idea. Alec

Thank you all for your help. I get the gist of things... don't store the binary, store the path to it and details of it. Thank you all for your quick response. -Ed

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: binary into blob
I've built systems that stream tons of data via this method, at times into some impressive requests per second. Also I've exposed files stored in this manner via an ftp interface, with servers able to deliver near wire-speed data in and out of the db storage. When you're in a load-balanced environment, you have a private network/vlan between the webservers' backend and the database servers: a 100mbit link/segment dedicated to db traffic. No bottleneck there.

You are right about streaming the same file again and again, but is this really a real-world scenario in all apps? If your app leans in this direction, you can do some temporary caching on the webserver end (maybe an in-memory cache, or if you're running a java container, just throw the objects into memory for a timespan before deleting and re-fetching).

If someone gains access to a webserver, they can walk around and steal all source code/files on the webserver (including your PDFs). If you were running compiled java code, or encoded PHP code (with connection strings in the PHP), it would be a lot more difficult for them to access the data.

I had a very large filesystem storage project in which the file management/scalability/expandability just got out of hand. Migrating to db storage solved that totally and I've never looked back.

On Wed, 7 Mar 2007, Alexander Lind wrote: imagine a large system where pdf-files are accessed by clients a lot. say 1 pdf file is accessed per second on average. also say that your database is on a machine separate from the webserver(s) (as is common). do you really think its a good idea to pump the pdf data from the db each time it needs to be accessed? it may seem easier to do this as you then would not have to replicate the physical files across the webserver machines, but i would not say that the slight increase of ease is worth the added load to the db and the network. following your reasoning, you'd also store all images in the db so that you don't have to replicate these either?
what is unsafe about having a pdf file on a webserver (not in a place where the httpd can access it of course)? alec

[EMAIL PROTECTED] wrote: I have to disagree with most; I would store the entire file in the database, metadata and all. Better security: if you have a backend database, it's much harder to get at the data than PDFs sitting in a directory on the webserver. Plus if you ever want to scale to a multi-webserver environment, the db storage works great.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: max_allowed_packet in my.ini
Don't store binary data in large blobs -- you should instead chunk your data for better performance and no packet-limitation issues. Good implementation article at: http://www.dreamwerx.net/phpforum/?id=1

On Mon, 5 Feb 2007, abhishek jain wrote: On 2/3/07, abhishek jain [EMAIL PROTECTED] wrote: Hi friends, I am using MySQL 5.0.23-nt on Windows. I have to store large binary data in the database, and I have used a setting like max_allowed_packet=16M in the mysqld section of my.ini. Now I have a problem: I want to create a setup so that the entry gets added by itself into the my.ini, the reason being I need to create many setups on different machines and I do not want to search the mysqld section, write the line and restart MySQL. I would however like to do it via mysqladmin or so if that is possible, as I can invoke that via my .NET program. Would appreciate your comments on this. Thanks, Abhishek Jain

Hi Friends, please reply. Thanks, Abhishek Jain

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
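One reason chunking sidesteps the packet problem: the entire INSERT statement, escaping included, must fit inside max_allowed_packet, and in the worst case MySQL string escaping turns every binary byte into two characters. A rough sizing sketch (the overhead figure is an assumed allowance for the rest of the statement, not a MySQL constant):

```python
MAX_ALLOWED_PACKET = 16 * 1024 * 1024  # the 16M value from the my.ini above

def safe_chunk_size(max_packet=MAX_ALLOWED_PACKET,
                    escape_factor=2,    # worst case: every byte escaped to two chars
                    overhead=4096):     # assumed headroom for the SQL around the data
    """Largest raw chunk whose escaped INSERT still fits in one packet."""
    return (max_packet - overhead) // escape_factor
```

With the default shown here, any chunk of about 8MB or less is safe under a 16M packet limit, which is why 64K-per-row schemes never have to care about the server setting at all.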
Re: trouble starting mysqld daemon
Check for .err text log files; they are probably in /opt/mysql/mysql/data/ and called servername.err, where servername is the hostname of your box.

On Thu, 28 Dec 2006, Jeff Jones wrote: Hi! I'm a rookie, so bear with me... Keep getting:

Starting mysqld daemon with databases from /opt/mysql/mysql/data
Stopping server from pid file /opt/mysql/mysql/data/unknown.pid
061228 17:16:04 mysqld ended

after this command: bin/safe_mysqld --user=mysql Very open to suggestions. Thanks

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: mysql on a NAS
Sounds more like it's set up on a SAN; a NAS is a different type of unit, like a NetApp filer. I'd have to agree with the other poster: I'm not sure your current config is valid. A more typical setup would be for both boxes to have their own unique SAN partitions and a high-speed network connection between them. Set up some replication and you're good to go.

On Tue, 28 Nov 2006 [EMAIL PROTECTED] wrote: Hello, We recently moved to a new cluster platform, set up by an external IT company at present (early stage): 2 XEON computers with a fibre channel link to network storage. The mysql directories are located on the network storage and mounted into /var/lib/mysql on each machine. So every machine is running its own mysql server, but sharing the directory. The file system used is OCFS (Oracle Cluster File System). When I played with the two servers yesterday (we are not really happy with the performance) I noticed that the stats of each mysql server are the same. Even if I restart one server, the stats are not amended in any way - if I restart both servers, the stats are reset. Now my questions:

1) is this installation a common practice for sharing a mysql db among several servers (we might increase the number of servers)?
2) is there a better, more performant way to install mysql in a cluster?
3) just your general idea about this setup.

Thanks Stefan Stonki Onken

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Contents storing in mysql
If you're storing files in mysql, it's best to chunk/shard your data if you're not doing so already. Example article/code at: http://www.dreamwerx.net/phpforum/?id=1

On Thu, 16 Nov 2006, Shain Lee wrote: Hi, I wanted to store images, music, videos, etc. in a mysql database, storing contents as content type BLOB; I don't have any issues with that setup. But now I need to store the contents in folders and keep the contents' path with a reference to the content's file name. For example, if I have an image called sky.jpg, that image is stored in /usr/local/wallpapers/sky.jpg. In my database I have defined fields like filename, filesize, imagepath, contenttype. How can I do that? Can you give me any helpful links? Thanks in advance, Shaine.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: files stored in fields
Sure, check out this article: http://php.dreamwerx.net/forums/viewtopic.php?t=6 It's a very fast mysql storage implementation in PHP; port the design to whatever language suits you.

On Wed, 21 Apr 2004, adrian Greeman wrote: Please excuse a very simple inquiry from a near beginner. If I wish to store a complete Word file or similar (Open Office perhaps) or an archive file - is that possible, and what kind of field do I need for it? How do you insert a file into a table, if so? I know that you can store pictures - what field do I need for those, and again how to insert? Regards Adrian

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Saving file into database
On Fri, 12 Mar 2004, Jigal van Hemert wrote: I've been reading this thread, but I can't see the advantages of storing files in a database. I've always had the impression that a *file* system was the appropriate place to store files.

Scalability, searching, security... just a few.

Replication seems more silly, since one also copies the stored files to the slave. This not only adds to the traffic between slave and master, but also wastes diskspace.

Disk is relatively cheap. And if you're using mysql file storage on a large site, you'll probably need replication to feed those data-hungry frontend webservers.

Furthermore large files may cause you to hit the max_allowed_packet size (PDFs of 5MB or larger are no exception in real life).

Anyone implementing mysql file storage using large blobs, I feel, needs to re-address their storage implementation.

I've built a system which includes the possibility of downloading one's own bills in PDF format. This system runs on load balanced webservers with one mysql server. The files are stored on a different machine that cannot be accessed directly from the web. Scripts validate access to the PDF and serve it to the client. Works like a charm ;-)

Sweet.

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Saving file into database
I store any kind of file: PDF, Word, etc. I just like not having lots of directories with thousands of files each in them... Seems more organized to me.

On Thu, 11 Mar 2004, Erich Beyrent wrote: Use the BLOB, Luke! See your local MySQL manual for details. We're using BLOBs to store PDFs in our database, and through the use of HTTP headers (content-disposition), we're able to let users download the PDFs directly from the database without having to store a local copy on disk.

Hi Kurt, I have been using MySQL to store links to PDFs which live in other directories. Is there an advantage to storing the PDFs directly in the database? -Erich-

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: Saving file into database
It does make the database larger. As far as overhead: you can't just store the file as a blob; you'll need some referencing data in order to find it and restore it back out of the database. I just checked out my database (hundreds of files), which has:

Total file size: 1765.34MB

MySQL files are:

-rw-r- 1 mysql mysql      23216 Feb 27 03:49 file.MYD
-rw-r- 1 mysql mysql      10240 Feb 28 03:50 file.MYI
-rw-r- 1 mysql mysql       8756 Feb 23  2003 file.frm
-rw-r- 1 mysql mysql 1808037152 Feb 27 03:53 filedata.MYD
-rw-r- 1 mysql mysql     400384 Feb 28 03:50 filedata.MYI
-rw-r- 1 mysql mysql       8614 Feb 23  2003 filedata.frm

So it's not too bad as far as overhead.

On Thu, 11 Mar 2004, Eve Atley wrote: Is there an advantage to storing the PDFs directly into the database? I'm also curious how large this would make a database. Is there any space saved through this method, or would they still be the same size as the original PDF? - Eve

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: BLOB, SUBSTRING and 65536 characters limit
http://php.dreamwerx.net/forums/viewtopic.php?t=6 is a storage implementation that is not affected by max_allowed_packet.

On Thu, 11 Mar 2004, Tomas Zvala wrote: Hello, I ran into a problem where I need to get the contents of a BLOB into my php script. I found out that I'm limited by max_allowed_packet (which I can't change because of my ISP), and I tried to work around it by using SELECT SUBSTRING(column,x,1024), where x is a number increasing in steps of 1024. But once x reaches 65536 I get an empty result. What am I doing wrong, or how can I work around this? I'm using MySQL 4.0.14 running on FreeBSD 4.9. Thanks in advance for your help, Tomas Zvala

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
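For reference, the paging loop Tomas describes looks like the sketch below. SQL's SUBSTRING is simulated locally here (note its position argument is 1-based); in the real thing each loop iteration would issue one SELECT. This only illustrates the loop itself, not the 65536 cutoff he hit on his server.

```python
def sql_substring(data, pos, length):
    """Local stand-in for SQL SUBSTRING(col, pos, len); pos is 1-based."""
    return data[pos - 1:pos - 1 + length]

def fetch_piecewise(data, piece=1024):
    """Advance pos in steps of `piece` until an empty result comes back."""
    out, pos = bytearray(), 1
    while True:
        part = sql_substring(data, pos, piece)  # would be one SELECT per iteration
        if not part:
            break
        out += part
        pos += piece
    return bytes(out)
```

A chunked schema makes this loop unnecessary: each row is already small enough to fetch whole, so there is no server-side substringing at all.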
Re: Saving file into database
Check this article: http://php.dreamwerx.net/forums/viewtopic.php?t=6 Port the code/design to perl or whatever client language you want; mysql couldn't care less once it's got the data (correctly).

On Tue, 9 Mar 2004, Isa Wolt wrote: Hi, I would like to save a binary file into a mysql database, for later being able to use the file. I am using a perl interface. Is this at all possible? And would it then be possible to read that file from a C++ interface? I would be grateful for any help/advice! Isa

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: ORDER BY RAND() performance
If you're in fact (it sounds like) storing both the pictures' meta-data (name, size, owner, etc.) and the data (a blob of some kind), I would definitely break the design up into 2 tables. That way, when dealing with the meta-data table (your RAND() query), there is much less data that needs to be traversed to get your answer, which should result in a faster query.

On Mon, 8 Mar 2004, Neil Gunton wrote: Hi all, I am using MySQL 4.0.x to run a community website which has (among other things) over 19,000 pictures. There is a page that selects 30 random thumbnails. I have noticed that ORDER BY RAND() on this table has a significant impact on performance. I have all the relevant indexes defined, and I have researched this issue on the Web. It seems that other people have also encountered a performance hit while using ORDER BY RAND(). The reason appears to be that when you do EXPLAIN on a query using this, MySQL reports Using temporary; Using filesort, which is the worst possible result. Also, the number of rows reported is pretty much the entire set. So, presumably, the current implementation of ORDER BY RAND() means that MySQL has to traverse the entire table, regardless of other indexes.

There are, of course, other ways to get around this, but they are all more complex than simply using ORDER BY RAND(). I think that selecting a random number of records from a table is something that a lot of websites would like to be able to do, and so as datasets get larger it would be nice to see this function scale well. For anyone who has a website with a large archive of data, the ability to present a random selection of this data is very useful. I would like to know if anyone knows whether the MySQL team is aware of this problem, and if so whether they are planning on improving it at any point. I ask mainly because if I am told that yes, it'll be much better in version X, then I can live with the couple of seconds that it takes currently, knowing that this will be better down the line. However, if I am advised that this is a fundamentally hard problem for whatever reason, then I will put the effort into reworking my tables to use an alternative solution.

The only real solution that I can see which is fast is to make another table which contains just the unique IDs of the pictures that are visible (there are others which are not publicly visible, and which shouldn't be included in the random query, so making a separate table with the appropriate subset makes sense for performance). This new table will have a primary key which is a numeric sequence field. Every record will have its own sequence number, going from 1 up to the number of records. Then, instead of doing one query with ORDER BY RAND() LIMIT 30, I can instead do 30 queries, each looking up a different random sequence number (generated from Perl). Since this is a primary key, it will be very fast, so doing 30 queries will not have a big performance impact. However, this scheme requires that the sequences in the new table be kept very consistent - for example, if a picture is removed from the sequence then the sequence numbers above that record have to be updated. This introduces potential for error, but it is a possible solution. I don't want to implement it, obviously, if ORDER BY RAND() is slated for improvement. Thanks for any ideas or insights... -Neil Gunton

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
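The client side of Neil's proposed workaround amounts to this: generate the 30 random sequence numbers in the application, then issue one primary-key lookup per number. It assumes the side table's sequence numbers are dense (1..N with no gaps), which is exactly the consistency burden he mentions.

```python
import random

def random_picture_ids(max_seq, n=30):
    """n distinct sequence numbers from 1..max_seq; each becomes a fast
    primary-key SELECT instead of one full-table ORDER BY RAND() filesort."""
    return random.sample(range(1, max_seq + 1), n)
```

Each returned id would drive a query like SELECT ... WHERE seq = %s against the side table, so the cost is 30 index lookups rather than a filesort over all 19,000 rows.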
RE: ORDER BY RAND() performance
Donny, what do you do? Throw all the values into an array or something on the client side, and use a random number generator to pull out the array elements? I suppose (depending on result set size) pulling that many rows from server to client and handling them on the client side could be faster... On Mon, 8 Mar 2004, Donny Simonton wrote: ORDER BY RAND() just sucks, in my opinion. We have created our own internal randomization system, because pretty much every time you use it, it will show up in the slow query log because of the using temporary, using filesort it does. Splitting your data into a hundred tables will still leave it using temporary, using filesort. I just did a little test where I only had 5 entries in a table, and it was still using temporary, using filesort. Will it ever be improved? Probably around the same time ORDER BY ... DESC is improved. Donny
Re: Dream MySQL Server?
I'd go with RAID 1+0... It would be a shame to have that much CPU power and become I/O bound. This way you've got 4 disks feeding the CPUs instead of 2 - better performance than RAID 5, and only 2 more disks than your current config. On Tue, 2 Mar 2004 [EMAIL PROTECTED] wrote: I have a requirement for a system that is on the order of 8-10 times the size of my current system. Unfortunately (a) I don't know how many times larger it actually is, and (b) my current system, while very happy, even relaxed, on its current hardware, has not yet been subjected to the full rigour of the target number of users. So it is very difficult to estimate what hardware I need to specify for the new system. Fortunately, the budget is fairly generous. Obviously, we don't want to gold-plate the system - but if a bit of overspend gives faster-than-specified performance, that will be a gain rather than wasted money. So I can get a lot of hardware - if I can confidently state that it will improve MySQL performance. So what should I be planning to use? The database is quite small - 2-4 GB, but high churn: maybe 25% of it replaced every day. Reads dominate writes, but not overwhelmingly: at a guess, 10:1. The current hardware is dual Xeon 2.0, 2 GB, single SCSI disk. The one fixed factor is that the OS is Windows 2000 (I know the arguments for Linux/BSD, but that is not feasible). Scanning a PC manufacturer's website, it seems easy to get 4x2.5GHz Xeon, 1 MB L3, 8 GB RAM, dual 15000 rpm SCSI with RAID 1 (for performance as well as reliability). Does this sound balanced for a MySQL engine? Or what would other people advise? Thanks for any advice, Alec
Re: Dream MySQL Server?
Never think enough is enough... Current operation levels can easily be pushed to many times their current level/ratio in a short amount of time, and databases can grow rapidly (even though that's not indicated here). I have spec'd boxes before based on someone's recommendations for load, and then found 2 months later that the box was choking; time to upgrade = potential downtime. 2 extra disks is a minimal cost for the I/O boost and the amount of extra capacity you'll get. I agree with the write cache and battery backup... any decent RAID card has that option for a few bucks... A 4 GB database today can easily turn into a 20 GB database tomorrow. Never think small. On Tue, 2 Mar 2004, Jochem van Dieten wrote: [EMAIL PROTECTED] wrote: I'd go with RAID 1+0... It would be a shame to have that much CPU power and become I/O bound. This way you've got 4 disks feeding the CPUs instead of 2 - better performance than RAID 5, and only 2 more disks than your current config. If you have 8 GB of RAM and 4 GB of database, you would only become I/O bound if you write a few hundred blocking commits per second to the disk*. In that case, having a battery-backed RAID adapter with write cache enabled is a much better way of improving performance than going from RAID 1 to RAID 1+0. *Presuming the OS has a sane disk cache or is 64-bit. Jochem -- I don't get it immigrants don't work and steal our jobs - Loesje
Re: copying blob data to remote box
Are you just copying the files? I'd suggest using mysqldump if you are not already.. On Tue, 10 Feb 2004, Scott Purcell wrote: Hello, I am running a DB on a machine on which I am developing. Then I have been copying the contents of ~mysql/data/databasename to another box where I am running my code. All the data I modify copies well, and I can run on my production box with no problems - except for binary blob data. When I go to my production box, the blob data does not seem to work properly. Are there other files or directories that I need to copy over in order to get this to work? The machines are not networked together; I have to move files via CD. Thanks, Scott
RE: copying blob data to remote box
By default mysqldump just dumps to stdout, so you need to redirect (pipe) it to a text file. As for the correct syntax, you need to put the database name after the username/password (after the mysql options). Give that a shot - most likely all the garbage/output whacked out your session. On Tue, 10 Feb 2004, Scott Purcell wrote: Well, I gave it a shot, but for some reason the dump caused my command window to die, while screaming. The database is owned by a user with a password, so I typed the following: mysqldump databasename --user=xx --password= It started doing something, but like I mentioned, the system began complaining. I am looking at mysql --help, but I do not see anything about a password, or about where to put the files. Thanks,
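Putting the advice above together, the invocation would look something like this (a command-line sketch; the user name, password and database name are placeholders from the thread):

```shell
# Dump the database to a file instead of letting it scroll to the terminal:
mysqldump --user=xx --password=xx databasename > databasename.sql

# Then, on the production box, load it back in:
mysql --user=xx --password=xx databasename < databasename.sql
```

This avoids copying the raw ~mysql/data files around, and the dump file travels fine on a CD.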
Re: images from MySQL backend used with MS-Access 2000 frontend
Read this article for its design for database storage.. I have several big implementations of this design which are fast, reliable and scalable (one of them in the medical field as well): http://php.dreamwerx.net/forums/viewtopic.php?t=6 On Sun, 1 Feb 2004, Yuri Oleynikov wrote: Hi everyone, I am new to MySQL. I am starting a database project for a medium-sized medical office. I chose MS-Access (on a 10-terminal pre-existing Win2K intranet) for rapid front-end dev and familiarity to the existing users. I plan to use MySQL on a Linux server for the backend for its speed, great support and ease of use. The database will consist of doctor's reports, schematic drawings and photos (500K JPGs and 50K PICTs). I am planning to implement that with BLOBs or, alternatively, with the file system while storing links to files. Later we may add web access through MySQL webserving or by using Access webserving options. Has anybody done anything similar? What is the best way to implement image storage (and, in future, maybe movies)? Thanks a lot. yuri.
Mysql listed in spamcop?
This may have been mentioned.. I have not been receiving messages for 12+ hours now.. And it appears: Jan 13 01:23:49 cyclone tcplog: smtp connection attempt from 213.136.52.31 Jan 13 01:23:50 cyclone sendmail[10674]: ruleset=check_relay, arg1=lists2.mysql.com, arg2=213.136.52.31, relay=lists2.mysql.com [213.136.52.31], reject=553 5.3.0 Spam blocked see: http://spamcop.net/bl.shtml?213.136.52.31 Somehow the mysql mailserver got listed in spamcop?
Re: MySQL as document storage?
This article discusses it briefly: http://php.dreamwerx.net/forums/viewtopic.php?t=6 I am using this type of design/technology for quite a few clients, some storing gigs and gigs of data (images, documents, PDFs, anything) over multiple servers. The scalability and performance of a well designed system are, I think, very close to standard filesystem storage. I have currently written HTTP and FTP gateways to access data stored in MySQL data servers (databases). Quite easy and fast... I have yet to do much with searching and indexing of files (not required), but I would imagine you could have very fast searching features. On Wed, 7 Jan 2004, Steve Folly wrote: Hi, (disclaimer - this thread could easily go off topic; I'm interested only in the MySQL aspects of what follows...) At work we are currently investigating ways of filing all our electronic documents. There is commercial software that will do this, I know, but I was wondering whether MySQL would be suitable for this type of thing. The 'documents' could be literally any binary file. My idea would be to create a table with a blob column for the document itself, plus document title, reference number, keywords and other meta-data, and a web-based front-end to search and serve documents. Although the documents could be any file, the majority would be textual documents (Word documents, PDF, etc). How would one go about indexing such data, since full text searches operate on textual columns? How to cope with columns exceeding the max packet length? Why is there a max_allowed_packet setting; surely this is low-level stuff that shouldn't affect query and result sizes? Is storing the actual documents in the database such a good idea anyway? Perhaps store the file in a file system somewhere and just store the filename? If anyone has experience in doing (or has been dissuaded from doing) this kind of application, your thoughts and comments would be appreciated. (If only to tell me don't be so stupid, it'll never work :) Thanks.
-- Regards, Steve.
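The chunked-storage design referenced above (and in the dreamwerx article) can be sketched roughly like this - a hypothetical Python version using sqlite3 as a stand-in for MySQL, with made-up table and column names; the real article uses PHP against MySQL:

```python
import sqlite3

CHUNK = 64 * 1024  # fixed chunk size per row, one sequence-numbered row per chunk

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE filedata (
    file_id INTEGER, seq INTEGER, data BLOB,
    PRIMARY KEY (file_id, seq))""")

def store(conn, file_id, blob):
    # Split the document into fixed-size chunks so no single row (and no
    # single query) has to carry the whole file at once.
    for seq, off in enumerate(range(0, len(blob), CHUNK)):
        conn.execute("INSERT INTO filedata VALUES (?, ?, ?)",
                     (file_id, seq, blob[off:off + CHUNK]))

def fetch(conn, file_id):
    # Reassemble by reading chunks back in sequence order.
    rows = conn.execute("SELECT data FROM filedata WHERE file_id = ? ORDER BY seq",
                        (file_id,))
    return b"".join(r[0] for r in rows)

store(conn, 1, b"x" * 200_000)
```

Because each INSERT stays small, this also sidesteps the per-query packet-size limit discussed elsewhere in the thread.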
Re: extending a database over several servers
Yes and no.. MySQL itself cannot do this. If you need to keep growing in size (on 1 server) you may want to look at some kind of LVM disk array/SAN: you can keep plugging in disks and extending the volume. I do kind of what you are looking for with 1 application, but it is all software controlled; MySQL has no knowledge it's happening. There is 1 controller SQL server that controls all the storage; it knows of multiple other SQL servers and decides where to store data and where all the data is for retrieval. You need more storage? Plug in another server, tell the controller it exists, and it will start filling up with data and serving requests for that data. On Mon, 29 Dec 2003, Daniel Page wrote: Hi, Is it possible to extend a MySQL database over several servers in such a way that when space is exhausted on one server, a second machine can be added and used? Cordially, Daniel Page
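The controller scheme described above might look something like this in outline - a hypothetical Python sketch (class and method names are invented; the real system routes between actual SQL servers rather than strings):

```python
import itertools

class StorageController:
    """Directory node: decides where new objects go and remembers where
    everything is, so storage servers can be added at any time."""

    def __init__(self):
        self.servers = []       # registered storage server names
        self.directory = {}     # object_id -> server holding it
        self._rr = None

    def add_server(self, name):
        # Registering a new server makes it eligible for future placements.
        self.servers.append(name)
        self._rr = itertools.cycle(self.servers)

    def place(self, object_id):
        # Simple round-robin placement; the controller records the location.
        server = next(self._rr)
        self.directory[object_id] = server
        return server

    def locate(self, object_id):
        # Retrieval: ask the controller which server holds the object.
        return self.directory[object_id]
```

A usage sketch: register "s1" and "s2", place a few objects, and later locate() tells you which backend to query - exactly the "mysql has no knowledge it's happening" layering the post describes.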
RE: storing .tar files in mysql
This page has a sample article/code showing how to store any type/size of file in mysql.. Depending on the application it could be a good idea (such as revision control or something): http://php.dreamwerx.net/forums/viewtopic.php?t=6 On Fri, 12 Dec 2003 [EMAIL PROTECTED] wrote: I am working with a project on sourceforge http://leopard.sourceforge.net and this is one of the package management strategies we are thinking about trying. As I said, I have almost no experience with mysql, so I am open to any and all suggestions. Very good points being made about the size of the rows. Thanks for the quick responses :-) Jake Walters Can I ask why? Why not define a char(50) (or whatever size) with the relative or complete path to the .tar file? Storing it in your database would create huge row sizes. Joshua Thomas Network Operations Engineer PowerOne Media, Inc. tel: 518-687-6143 [EMAIL PROTECTED] --- In theory there is no difference between theory and practice. In practice there is. - Yogi Berra --- -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Friday, December 12, 2003 3:55 PM To: [EMAIL PROTECTED] Subject: storing .tar files in mysql Hi all, I am new to mysql and I was wondering if someone could point me in the right direction on how to store .tar and .tar.gz (bzip2) files inside a mysql database. I have googled to try and find some help there, but most of the hits come back with binary image files. I have gone through the mysql tutorial and I can create the database and tables, but I can't seem to insert the .tar file properly... Any pointers would be appreciated...
Thanks, Jake
Re: storing .tar files in mysql
I'd agree with Chris. I've had a ton of data/files in mysql for years now with no problems... The throughput in/out is incredible if you implement the storage handler correctly. Plus it gives you certain advantages such as security/scalability/etc... With storing the files on disk, the files need to be on, or be accessible by, the webserver directly via filesystems... This way you need to connect to a mysql server (ideally a different box) using a username/password.. It's a debate that will go on forever ;) On Sat, 13 Dec 2003, Sime wrote: Chris Nolan wrote: Personally, I would have to disagree. I have just completed a year-long uni project whereby we built some software that stored massive images in InnoDB tables. We found that the performance was (at very worst) comparable to the alternative method. Additionally, this method is better on many levels. You can have a sealed server, and you protect yourself from users who like playing with things (like filenames). On Sat, 2003-12-13 at 08:13, Neil Watson wrote: I believe the data type you are looking for is blob. However, you would be better off using your row to point to a file located on the hard drive instead of actually in the database. I can't recall the technical details but your performance will be much better that way. I'd agree with Neil. I've never believed in that; surely it's just causing overhead for MySQL. Surely it's far better to have the files managed by the file system? That's what it's there for.
RE: storing .tar files in mysql
16MB? You mean the max packet per query limit? If you're storing data in one huge/large blob then you are making a big mistake, in my opinion, and taking a huge performance hit... I've got files over 1GB in size in mysql now.. they went in and out at almost filesystem speed... On Sun, 14 Dec 2003, Chris Nolan wrote: Forgot something in my other reply. With the NAS - what's to say that MySQL's retrieval and network protocol is not more efficient than whatever is running on your NAS boxes? Conversely, MySQL's current 16 MB per transfer limitation may very well not allow it to act in this role at all. Ah, the wonders of open discussion! Best regards, Chris On Sat, 2003-12-13 at 23:33, Joshua Thomas wrote: you could very well do that, and frankly that is how a lot of websites work. Yep, including one I run. That site has to generate img and a href links for visitors, and it seems far easier to return /pics/imagefoo.jpg than the image itself and decide how to embed that into the page. But storing the actual binary file, whether it is a .jpg or .tar, allows your application/website/whatever to be independent of your file's location on the server. Is this really such a problem? When I migrated the above-mentioned website, I had some minor path issues which I resolved with symbolic links. If your path for images changes, make a symlink to simulate where it used to be, or use an UPDATE statement to change the path prefix on the images in the database. If you really want to be clever, use a single-row table for the prefix (/home/foo/myimages) and another table for the actual names (foo.jpg), and just update the prefix when the location changes. This way, when you select a specific row, you will have the binary... regardless of path locations. It also makes data organization cleaner as well. If you decide you don't need a binary anymore and delete it from the database, it will be gone.
A cron job which does a SELECT imagename FROM imagetable, then compares to the directory and removes the non-existing images, would also work. Or you could write a database trigger to call out and delete the image when the row was deleted. I do admit it's less nice than being able to just drop the image from the table, but you also lose the ability to manipulate the content at the filesystem level. For example, in your .tar example, how are you going to view the contents of one of those .tar files? You'll have to SELECT it out of the database, write it somewhere, then view it; if you want to update it, you have to SELECT it out, edit it, and UPDATE the table... whereas just storing the path will not require that, which can get pretty ugly. I hope that helps. Works like a charm for us. The downsides I see to storing in the database are: * More overhead reading and writing from the database * Much, much larger database sizes. For me it's easy to back up and restore my database. If my 2GB+ of images were IN the database, well, that would be a different story. * I can schedule backups of the database and filesystem content at different times; this reduces the impact if something goes wrong during a backup. Of course, I do see the advantages of embedding the images, tar files, etc. into the database; I just think it has enough issues of its own that I would caution against doing it. Here's another idea to chew on: My full-time employer runs over a thousand sites for newspapers across the country. We have image names embedded in the database, not the data. Why? Well, we store images on NAS (Network Attached Storage) devices. We have several of them, and use custom scripts to keep the content on them the same. When we have a failure on one, we simply update a piece of code which defines which server to read from. Now, if we put those images into the database, we'd have a few issues: * Our database size would grow far, far beyond the current size, which is over 60GB.
* Restores to the database would be much slower, and in our case, if we have a failure, we'd rather get everything up fast and then fix images than wait longer and get both. Time == money to our customers. * Instead of returning the filename to the application software (ASP and CF in our case), you'd return the whole image, and that's going to be a huge drain on server CPU. We'd need something like 2x the CPU power we have now, with many more high-speed disks. (And 15K RPM isn't cheap!) * Replication from the publisher to the subscribers would be much more intensive and require more bandwidth and CPU power, again driving our hardware needs up. You could argue that we'd lose the need for the NASes. I haven't done a comparison on how the pricing would work. That's my $.02 and then some. Joshua Thomas
RE: storing .tar files in mysql
True initially... What I've done is use a Java appserver frontend (Orion) that's a caching server.. It gets the request, checks if it has the image in its memory cache, and if so serves it; otherwise it goes to the backend and gets it, stores it in the memory cache, and serves it.. Very fast, and it alleviates a lot of redundant queries.. You can also set an expire time, etc. on the content.. I've seen a PHP implementation of this as well, storing the files in /tmp and checking their timestamps to see when to refresh... You could perhaps use something like squid to cache/proxy images as well.. never tried it though.. good luck.. On Sat, 13 Dec 2003, Joshua Thomas wrote: If we put everything into the database, then I have each call for an image go to the database (unless there is a better method?), so I have several database calls and several HTTP calls for each primary page.
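The check-cache-then-backend-with-expiry flow described above can be sketched like this - a minimal, hypothetical Python version (the class and parameter names are invented; the real setup is a Java appserver in front of MySQL):

```python
import time

class ImageCache:
    """Serve from an in-memory cache when possible; otherwise fetch from
    the database backend, store the result with an expiry time, serve it."""

    def __init__(self, fetch_from_db, ttl=300):
        self.fetch_from_db = fetch_from_db  # placeholder for the backend query
        self.ttl = ttl                      # expire time in seconds
        self.cache = {}                     # name -> (expires_at, data)

    def get(self, name, now=None):
        now = time.monotonic() if now is None else now
        hit = self.cache.get(name)
        if hit and hit[0] > now:
            return hit[1]                   # served from memory: no DB query
        data = self.fetch_from_db(name)     # cache miss or expired: hit backend
        self.cache[name] = (now + self.ttl, data)
        return data
```

With a page that references the same thumbnails over and over, most image requests never reach the database, which is exactly the redundant-query savings the post claims.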
Re: CONCAT on BLOB
This is the article for you: http://php.dreamwerx.net/forums/viewtopic.php?t=6 It shows how to store large files in the database... I've currently got gigs and gigs of files in mysql using this method.. On Tue, 2 Dec 2003, Jim Kutter wrote: Hi folks. I'm storing files in a BLOB table for a number of reasons for my web app, and I'm trying to break up uploads so I insert smaller chunks (my app imposes restrictions on memory usage, so reading the whole file at once and doing an insert doesn't work). What I do is insert the record, then call a series of UPDATE queries on the record, concatenating the new chunk with the existing blob column. The problem I'm having is that the size of the blob column is WAY off for larger files (2MB or so). My app reports that the entire file was read properly, but not all of it was inserted properly into the DB. Any ideas on why this is happening? Any suggestions for alternatives? Storing the files on the filesystem is not an option for me. Thanks. I'm using 3.23.36 (Linux Red Hat) with MyISAM tables. -jim kutter
Re: notification service
Might just create a common table that stores messages back and forth.. it stores sender id, recipient, message, etc.. each server polls the table every so often (cron job) for messages for it and processes them, removing them from the queue.. it's like a simple message broker.. On Tue, 2 Dec 2003, Jinsong Zhu wrote: I'm looking at MySQL for a project where multiple applications sharing the same database need to collaborate. For example, if one application made a change to a table, another application would be notified via an event (the applications may run on different host machines). However, I have not been able to find this kind of feature (e.g. a notification service) in MySQL. Can MySQL do that at all? Thanks.
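The shared-table broker suggested above might look like this - a hypothetical sketch using sqlite3 as a stand-in for the shared MySQL database (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    sender TEXT, recipient TEXT, body TEXT)""")

def send(conn, sender, recipient, body):
    # Any application drops a row into the common table.
    conn.execute("INSERT INTO messages (sender, recipient, body) VALUES (?, ?, ?)",
                 (sender, recipient, body))

def poll(conn, me):
    # Called from cron on each host: fetch my pending messages in order,
    # then remove them from the queue once picked up.
    rows = conn.execute(
        "SELECT id, sender, body FROM messages WHERE recipient = ? ORDER BY id",
        (me,)).fetchall()
    conn.executemany("DELETE FROM messages WHERE id = ?", [(r[0],) for r in rows])
    return [(r[1], r[2]) for r in rows]
```

Against a real multi-host MySQL setup, the SELECT and DELETE would want to run in one transaction (or use SELECT ... FOR UPDATE) so two pollers can't grab the same message; that detail is omitted here.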
Re: Images in a table
Be warned about hitting the default max_allowed_packet limitation of mysql, which will cause large files to not insert. This link shows another way to overcome that limitation: http://php.dreamwerx.net/forums/viewtopic.php?t=6 On Mon, 1 Dec 2003, Mickael Bailly wrote: Here is a sample code in PHP: $image = file_get_contents('/tmp/my_image.png'); $res = mysql_query('insert into img_table ( data ) values (\''.addslashes($image).'\')'); Works with a table like: create table img_table (id bigint not null auto_increment primary key, data LONGBLOB not null); Hope that helps. On Saturday 29 November 2003 19:41, Zenzo wrote: How can I insert images in a table if I can do it with MySQL? -- Mickael Bailly
Re: Table in Memory
Maybe look at using a HEAP table? Load it on startup from a datasource.. On Wed, 5 Nov 2003, Arnoldus Th.J. Koeleman wrote: I have a large table which I would like to store in memory. The table looks like Spid_1__0 (recordname varchar(20) primary key, data blob not null). What is the best way to do this in mysql?
Re: Pictures In Table.
http://php.dreamwerx.net/forums/viewtopic.php?t=6 This code has served millions of binary objects (pics, files, etc) for me with no problems.. good luck. On Tue, 1 Jan 2002, Braulio wrote: What is the best method to use to include pictures in tables? I am using PHP. I have several customers whose real estate sites I maintain, and the maintenance is sometimes hectic. I am thinking that having the photos in a mysql table will make it easier to maintain these sites. Thanks in advance, Braulio
Re: blobs in mysql
I usually use ps.setBytes() and pass it a byte[] array .. On Wed, 22 Oct 2003, Scott Purcell wrote: Hello, I have opted to insert some small jpg files into the mysql database using Java. Code below. When I do a select from the table using the mysql command line, it generates pages of --- --- which I guess is a visual of the blobbed data. Anyway, I am used to Oracle, I guess, where it still shows the data but shows a blob in the column and not all the lines. Am I doing this wrong, or is this the way it is handled? Here is my code; the API showed using a prepared statement. It works, but the selecting does not look proper. Thanks, Scott ### (code below) Connection db = null; PreparedStatement stmt = null; ResultSet rs = null; try { db = DriverManager.getConnection("jdbc:mysql://127.0.0.1:3306/menagerie", "root", "x"); InputStream in = new FileInputStream(f); stmt = db.prepareStatement("insert into preview (username, theblob) values (?, ?)"); stmt.setString(1, "louie-louis"); stmt.setBinaryStream(2, in, (int)f_length); stmt.executeUpdate();
Re: Multiple Image inserts into a MySQL BLOB column
Check out http://php.dreamwerx.net/forums/viewtopic.php?t=6 for a PHP example you could easily convert to PERL, or just install the PHP standalone binary on the box. On Fri, 3 Oct 2003, Zafar wrote: Hello Having trouble inserting images into a BLOB column. No problems doing this 'one at a time' via a third party MySQL GUI Manager, but I need to create an indexed table containing some 1K+ images. Using the recommended method from the MySQL manual, i.e. UPDATE tbl_name SET blob_column=LOAD_FILE('/tmp/picture') WHERE id=1; only sets the blob field to NULL (0 bytes)! - even where an image is stored perfectly well. There has to be a SAFE METHOD of doing these image inserts from within a 'while', 'for' or some other loop statement in a shell script or perl routine. Appreciate enlightenment on this topic from someone who has been down this road and figured it out. Rgds, ZCH -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: slow performance with large or list in where
Might instead want to look at where fooid in (xx, xx, xx, xx) On Sat, 4 Oct 2003, Marc Slemko wrote: If I do a query such as: SELECT * from foo where fooid = 10 or fooid = 20 or fooid = 03 ... with a total of around 1900 "or fooid =" parts on a given table with 500k rows, it takes about four times longer than doing 1900 separate queries in the form: SELECT * from foo where fooid = 10 fooid is the primary key. My real goal is to do updates in a similar way; they have the same issue. And my real goal is actually doing the same thing where instead of fooid=10 I have a multicolumn primary key and use (keycol1=10 and keycol2=2 and ...) My examples here are the simplest case I can find that exhibit the issue. Explain plan on the select shows: table: adsummary, type: range, possible_keys: PRIMARY, key: PRIMARY, key_len: 31, ref: NULL, rows: 1915, Extra: Using where -- so it is doing a range scan in some manner. Given that the rows I'm updating will normally be cached in memory, is there anything I can do to force it to just do repeated index lookups like the individual queries are doing? The reason I don't want to use individual queries is because my performance is then bottlenecked by latency, and when trying to update thousands of rows a second that doesn't work out. Just opening multiple connections and doing them in parallel is a problem because of where the updates are coming from and the fact that they have to be done inside a transaction so other connections would just end up blocked anyway. Currently running 4.0.15a. I'm trying to avoid going to 4.1 at this point, but it does look like the INSERT ... ON DUPLICATE KEY UPDATE ... syntax may do individual index lookups the way I want... I haven't verified that yet though. Any suggestions are greatly appreciated.
-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
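The rewrite suggested in the reply — one IN list instead of ~1900 OR terms — can be built safely with placeholders rather than string-pasted values. A small sketch (Python's stdlib sqlite3 standing in for MySQL; data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (fooid INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO foo VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(100)])

wanted = [10, 20, 3]  # in the thread this list held ~1900 ids
# Generate one "?" per id so the values are bound, not interpolated.
placeholders = ", ".join("?" for _ in wanted)
rows = conn.execute(
    f"SELECT fooid, val FROM foo WHERE fooid IN ({placeholders})", wanted
).fetchall()
assert sorted(r[0] for r in rows) == [3, 10, 20]
```

One statement with an IN list also keeps the whole lookup in a single round trip, which is the latency concern the original poster raised.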
Re: Blob fields
Any mysql encryption functions would be done server side, of course, before putting it into the database.. I'd just incorporate a de/encryption scheme into your client app, and insert as a standard BLOB string to the remote server. On Sat, 4 Oct 2003, sian_choon wrote: Hi, I have a question related to this topic; hopefully you could help me on this. Is it possible to insert an encrypted image into blob fields where the encryption is done by using mysql's existing function (AES_ENCRYPT) from the client side? If yes, what is the procedure? Thanks in advance. Jeremy Zawodny [EMAIL PROTECTED] wrote: On Wed, Oct 01, 2003 at 05:51:18PM +0100, Angelo Carmo wrote: Hi people, Who knows how to insert an image file into blob fields? Lots of us know how. And we've discussed it on the list about 600 times already. I'm sure you'll find an answer in the list archives. Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! | http://jeremy.zawodny.com/ MySQL 4.0.15-Yahoo-SMP: up 17 days, processed 630,933,987 queries (412/sec. avg) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Usage Monitoring
Might try mytop (search google for it) .. jeremy z wrote it.. it works well for realtime monitoring.. On Tue, 23 Sep 2003, John May wrote: Is there any way to monitor which databases are being used the most heavily on a MySQL server? Thanks for any info! - John -- --- John May : President http://www.pointinspace.com Point In Space Internet Solutions [EMAIL PROTECTED] LPA Corporate Partner / FSA Associate / ACN Member Lasso 5 + 6 / PHP / MySQL / FileMaker Pro Hosting Now Available! -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Running a definition file from MS-SQL
Most likely you'd need to do some datatype mapping changes to the script... Everyone I know who's had to do this has typically used something like sqlyog (search google) and used the ODBC import capability to transfer data from MSSQL to MySQL.. On Thu, 18 Sep 2003, Tormod Halvorsen wrote: Hi all! (My apologies if this has been beaten to death already, but I *did* search the archives without much luck.) I'm moving from MS SQL to MySQL. Before jumping from the old server, I scripted out the definitions for tables and their fields, thinking I might just run them into MySQL. Guess I need to edit the script some, because it keeps giving me a Syntax error message. Anyone know of a good source for info on how to make MS SQL scripts work on MySQL? I also have a copy of the database exported in MS Access format to hold the data - don't suppose it's any easier that way, uh? Thanks! peace, Tormod in Stockholm -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Replication question
I'm pretty sure you need to sync the entire database (all tables) to all slaves before starting replication.. Your servers are technically already out of sync.. And no wonder it crashes; tables are missing in its view.. You need to hit the initial replication setup manual pages.. On Thu, 18 Sep 2003, Andrey Kotrekhov wrote: Hi, All! I have 2 servers. 1st is master, 2nd - slave. But the slave stores only some tables of the master. For example, master has tables A, B but slave has only table A. The problem: query: UPDATE A,B SET A.a=B.a WHERE A.c=B.c; the result: crashed replication on the slave; Is it right? Best regards. ___ Andrey Kotrekhov [EMAIL PROTECTED] ISP Alkar Teleport . +380 562 34-00-44 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Retrieve a binary file from a mysql database
From the mysql manual: If you want to create the resulting file on some other host than the server host, you can't use SELECT ... INTO OUTFILE. In this case you should instead use some client program like mysqldump --tab or mysql -e "SELECT ..." > outfile to generate the file. On Fri, 12 Sep 2003, karim bernardet wrote: Hello, Is it possible to retrieve a file stored in a blob field using a mysql client from a host which is not the server? Thanks for your help, Karim. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: ERROR 1030: Got error 127 from table handler
Are you running linux and is it SMP? Kernel version plz.. On Wed, 10 Sep 2003, Dathan Vance Pattishall wrote: /usr/local/mysql/bin/perror 127 Error code 127: Unknown error 127 127 = Record-file is crashed I've been getting this a lot lately from mysql-3.23.54-57. Things that are not the cause: - mySQL has not been improperly shut down - threads are not being killed off Pattern emerged: High Traffic DB up for a month or greater doing a constant 300 queries/sec Question: What is the cause of a Record-file crash? How can this be prevented? Jeremy Zawodny do you have any thoughts on a root cause? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Table is full error
Most likely it's the 4GB OS limitation... My suggestion is to create a new table using mysql's built-in raid option... span the table over multiple files to allow for much larger table growth... migrate all the rows over to the new spanned table.. On Thu, 4 Sep 2003, Keith C. Ivey wrote: On 4 Sep 2003 at 10:53, Keith Bussey wrote: Your suggestion seemed to wipe out my rows ! (s'ok I got a backup ;p) Yikes! Are you running into any file system limits? Have you dealt with files larger than 4 GB on that server before with no problems? If not, you may have run into a MySQL bug of some sort. An alternative way to get the table to have 5-byte pointers would be to create the new table (same CREATE TABLE query as for the old structure, except add MAX_ROWS=70 AVG_ROW_LENGTH=2 to the end) and then copy all the records into it: INSERT INTO email_body_NEW SELECT * FROM email_body; Actually, the exact figures for MAX_ROWS and AVG_ROW_LENGTH shouldn't matter, as long as their product is between 2**32 and 2**40 - 1. -- Keith C. Ivey [EMAIL PROTECTED] Tobacco Documents Online http://tobaccodocuments.org -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
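Keith's closing rule — the exact MAX_ROWS and AVG_ROW_LENGTH figures don't matter as long as their product lands between 2**32 and 2**40 - 1, which is what pushes the table to 5-byte row pointers — is easy to sanity-check. A quick sketch; the sample values below are hypothetical, not the thread's:

```python
def allows_big_table(max_rows, avg_row_length):
    """True when the product forces 5-byte (rather than 4-byte) row pointers,
    i.e. it falls in the range Keith describes: [2**32, 2**40 - 1]."""
    product = max_rows * avg_row_length
    return 2**32 <= product <= 2**40 - 1

# Too small a product: the table keeps 4-byte pointers and the ~4GB cap.
assert not allows_big_table(1_000, 1_000)
# Hypothetical values whose product (1e12) sits inside the required range.
assert allows_big_table(1_000_000_000, 1_000)
# Overshooting 2**40 - 1 also falls outside the stated range.
assert not allows_big_table(2_000_000_000, 1_000)
```

This also matches the SHOW TABLE STATUS output later in the thread: Max_data_length of 4294967295 is exactly 2**32 - 1, the 4-byte-pointer ceiling.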
Re: Table is full error
Hrm.. interesting.. I know I've personally hit the 4gb limit before and had to do a fix for it.. but never hit an internal tablesize limitation.. You didn't mention what version of mysql you're running... Also if the table has a lot of columns and one or more of those columns has a lot of data, perhaps consider normalizing the table into a few? Could potentially speed up operations.. Not the easy fix you're looking for, but a thought... Try running: SHOW TABLE STATUS FROM database LIKE 'table_name'; It will show the max index/data sizes mysql is configured/allocated for... On Thu, 4 Sep 2003, Keith Bussey wrote: I do have another Database on the same machine which has a table that is a lot bigger (about 9 GIGs) and it wasn't created with any of the special table options suggested by Keith C. Ivey below. The difference is this table has many more fields, while the email_body one (the one with the problem) has only 2: an ID autonumber field, and a text field. Perhaps there is some bug/limitation in Mysql whereby a field can only have so much size ?? -- Keith Bussey Wisol, Inc. Chief Technology Manager (514) 398-9994 ext.225 Quoting Colbey [EMAIL PROTECTED]: Most likely it's the 4GB OS limitation... My suggestion is to create a new table using mysql's built-in raid option... span the table over multiple files to allow for much larger table growth... migrate all the rows over to the new spanned table.. On Thu, 4 Sep 2003, Keith C. Ivey wrote: On 4 Sep 2003 at 10:53, Keith Bussey wrote: Your suggestion seemed to wipe out my rows ! (s'ok I got a backup ;p) Yikes! Are you running into any file system limits? Have you dealt with files larger than 4 GB on that server before with no problems? If not, you may have run into a MySQL bug of some sort.
An alternative way to get the table to have 5-byte pointers would be to create the new table (same CREATE TABLE query as for the old structure, except add MAX_ROWS=70 AVG_ROW_LENGTH=2 to the end) and then copy all the records into it: INSERT INTO email_body_NEW SELECT * FROM email_body; Actually, the exact figures for MAX_ROWS and AVG_ROW_LENGTH shouldn't matter, as long as their product is between 2**32 and 2**40 - 1. -- Keith C. Ivey [EMAIL PROTECTED] Tobacco Documents Online http://tobaccodocuments.org -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Table is full error
On Thu, 4 Sep 2003, Keith Bussey wrote: Running that shows me the following: mysql> SHOW TABLE STATUS FROM email_tracking LIKE 'email_body_old';
           Name: email_body_old
           Type: MyISAM
     Row_format: Dynamic
           Rows: 208853
 Avg_row_length: 20564
    Data_length: 4294967292
Max_data_length: 4294967295
   Index_length: 1820672
      Data_free: 0
 Auto_increment: 450438
    Create_time: 2002-10-21 11:58:24
    Update_time: 2003-08-31 01:01:41
     Check_time: 2002-11-30 01:41:00
1 row in set (0.00 sec)
There's your problem... you're 3 bytes off your max_data_length, which is giving the table full error. Check: http://www.mysql.com/doc/en/Full_table.html at the bottom for help to alter the table to keep growing -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Table is full error
I'm not too familiar with this.. someone else today used the value 50, when in fact their avg_row_length was reported as: Avg_row_length: 2257832 Your average row length is reported as: Avg_row_length = 20564 From: http://www.mysql.com/doc/en/CREATE_TABLE.html AVG_ROW_LENGTH An approximation of the average row length for your table. You only need to set this for large tables with variable size records. So if you are using fixed length records, you don't need this.. otherwise, my best guess is to use, say, 22000 ?? If someone else has more experience with this issue, please chime in and make a suggestion based on the above values... On Thu, 4 Sep 2003, Keith Bussey wrote: Thanks I will make new tables and transfer the data over =) Just wondering though, any advice on how to tell what to set AVG_ROW_LENGTH to ? -- Keith Bussey Wisol, Inc. Chief Technology Manager (514) 398-9994 ext.225 Quoting [EMAIL PROTECTED]: On Thu, 4 Sep 2003, Keith Bussey wrote: Running that shows me the following: [SHOW TABLE STATUS output for email_body_old, as in the previous message] There's your problem... you're 3 bytes off your max_data_length, which is giving the table full error. Check: http://www.mysql.com/doc/en/Full_table.html at the bottom for help to alter the table to keep growing -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Storing Java Objects
I'd be willing to bet if you implement serializable, serialize it and dump it to a binary column (blob) .. you should be able to restore... On Fri, 29 Aug 2003, Dennis Knol wrote: Hello, Is it possible to store Java objects in the mysql database? Kind regards, Dennis -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
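The round trip suggested above — serialize the object, store the bytes in a BLOB column, deserialize on the way out — in outline. A sketch using Python's stdlib pickle and sqlite3 as stand-ins for Java serialization and MySQL; table and names are illustrative:

```python
import pickle
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, payload BLOB)")

obj = {"user": "dennis", "scores": [1, 2, 3]}

# Serialize to bytes and bind through a parameter, exactly as one would
# with Java's ObjectOutputStream + PreparedStatement.setBytes().
conn.execute("INSERT INTO objects (id, payload) VALUES (?, ?)",
             (1, pickle.dumps(obj)))

blob, = conn.execute("SELECT payload FROM objects WHERE id = 1").fetchone()
restored = pickle.loads(blob)
assert restored == obj  # the object survives the round trip
```

The usual caveat applies in either language: the stored bytes are tied to the serialization format, so changing the class/structure later can make old rows unreadable.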
Re: It is secure to access MySQL thru internet?
We use a point-to-point VPN between server sites for this... so the security/encryption is totally transparent to mysql; it's just connecting to an IP address on tcp/3306 and the vpn appliances down the line deal with all the data security... There are cheaper solutions such as using freeswan, ssh port forwarding, etc.. But we have had great luck with vpn appliances, which are getting very cheap nowadays... On Fri, 29 Aug 2003, Flavio Tobias wrote: I need to access a database thru the internet. Is it secure to do this using MySql? Thanks Flavio Tobias -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Telemobile auto-reply
There are several people with this auto-responder crap going on.. all of their emails should be de-listed in my opinion.. Here's my sendmail block list thus far: [EMAIL PROTECTED]:/] cat /etc/mail/access | grep notify [EMAIL PROTECTED] REJECT #notify [EMAIL PROTECTED] REJECT #notify [EMAIL PROTECTED] REJECT #notify [EMAIL PROTECTED] REJECT #notify [EMAIL PROTECTED] REJECT #notify I'm sure there's more by now... On Tue, 26 Aug 2003, Yves Goergen wrote: Anyone else get one of these for every post to this list? yes, me too. i informed the domain's postmaster 3 days ago, but no error message nor reply came back yet. maybe we should just kick them off the list. (@list op) -- Yves Goergen [EMAIL PROTECTED] Please don't CC me (causes double mails) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: RAID or not?
I like using either raid 0+1.. it really cooks.. or, if you can't spare the disks, raid 1... Something pushing that many queries should probably be protected from disk failure. On Wed, 20 Aug 2003, Jackson Miller wrote: I am setting up a dedicated MySQL server with some pretty heavy usage. I am not much of a sys admin (mostly a programmer). I have some questions about the best drive configuration. I have 4 SCSI drives currently. I would like to have 1 drive run the OS, 1 drive to be the MySQL data directory and 1 drive to be InnoDB (possibly raw partition). What is the best way for me to configure RAID? Here is the kind of load I am talking about: Uptime: 1749850 Threads: 44 Questions: 1266402021 Slow queries: 16923 Opens: 162177 Flush tables: 1 Open tables: 64 Queries per second avg: 723.720 Thanks, -Jackson -- jackson miller cold feet creative 615.321.3300 / 800.595.4401 [EMAIL PROTECTED] cold feet presents Emma the world's easiest email marketing Learn more @ http://www.myemma.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: database reverted to 18hr old state after power outage
It depends on whether you had any kind of query logging enabled (binary or text).. If you started safe_mysqld with -l (that's text logging of queries), or configured my.cnf with bin-log (that's binary logging), you should be able to replay the logs against the database and let it run all the updates... I fixed all my errors by simply: 1. show tables 2. check table name1, name2, etc.. 3. any problems listed above 4. repair table name1 (repeat) On Wed, 20 Aug 2003, Brian Chan wrote: hi, the power outage in ontario caused my server to crash. after i booted up again it was as if the past 18 hrs never happened. all the changes to the database were gone. is there any way to recover the missing data? it's probably too late now since new data has been put into the database since then... I now also have errors with my database. I run myisamchk /var/lib/mysql/*/*.MYI -e -o -r and I get many errors such as: Found block that points outside data file at 9464 or Found block with too small length at 980; Skipped Is there any way I can resolve these errors? Thanks, Brian. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: MySQL and shared storage
I'm sure there's gonna be some file locking issues.. If you're just trying to get some scalability, you might want to look at replication instead, with the SAN hosting one copy of the database for each server.. On Tue, 19 Aug 2003, Scott Pippin wrote: I would like to set up a round robin cluster with mysql. The round robin would be done through the DNS to two different servers. Each of those servers would have mysql installed and each would reference the same data directory on our SAN. Are there any problems with doing this? Thanks in advance Scott Pippin [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Running MySQL in RAMDisk
Depends on db size... kinda risky putting it in memory if it's being updated and power goes bye-bye.. You should be able to get a lot more performance just tuning my.cnf for a larger memory box.. On Mon, 18 Aug 2003, Creigh Shank wrote: Have a very large database and due to performance requirements it seems that running MySQL in RAMDisk might be a good solution. Access to the database is through PHP. Any suggestions or experiences? Creigh 305-541-1122 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Running MySQL in RAMDisk
If you can post your current my.cnf + box configuration I'm sure we can come up with some suggestions.. On Mon, 18 Aug 2003, Creigh Shank wrote: How would I tune my.cnf for a larger memory box? (Running on UPS; production machine(s) will go into Co-Lo with UPS and generator.) I realize there is some risk here. I am more concerned with the system failing and some kind of mirroring solution to protect data. Creigh At 10:34 AM 8/18/2003 -0400, you wrote: Depends on db size... kinda risky putting it in memory if it's being updated and power goes bye-bye.. You should be able to get a lot more performance just tuning my.cnf for a larger memory box.. On Mon, 18 Aug 2003, Creigh Shank wrote: Have a very large database and due to performance requirements it seems that running MySQL in RAMDisk might be a good solution. Access to the database is through PHP. Any suggestions or experiences? Creigh 305-541-1122 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: images on database
Check out: http://php.dreamwerx.net/forums/viewtopic.php?t=6 It's got streaming code.. What I do is if the type is unknown I always send application/octet-stream and the browser usually identifies it.. Or check the file extension for the file in the database, and apply a content type based on that.. On Sun, 17 Aug 2003, Giorgos Georgiadis wrote: The most common problem in such cases is that the webserver doesn't know what MIME Type the file is and so it doesn't know how to send it. If you are using PHP to upload and later show that image on the web, I could give you the code for the scripts and the structure of the table containing the binary data. Luiz Rafael Culik Guimaraes said: Dear Friends I have images saved in a mysql table as a blob field, but I could not recreate the image as a file by selecting a specific record of this field. Any idea? Regards Luiz -- If you don't like the news, go out and make some on your own. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
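The fallback logic described above — guess the Content-Type from the stored filename's extension, and send application/octet-stream when the type is unknown so the browser sniffs it — is a one-liner with a standard MIME map. A sketch using Python's stdlib mimetypes; the function name is illustrative:

```python
import mimetypes

def content_type_for(filename):
    # guess_type maps extension -> MIME type; it returns None for unknowns,
    # in which case we fall back to the generic binary type.
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

assert content_type_for("photo.jpg") == "image/jpeg"
assert content_type_for("mystery.blob") == "application/octet-stream"
```

The same header would then be emitted before streaming the blob (in PHP, via header("Content-Type: ...")).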
Re: How to make Database access fast
Best guess: you need some database maintenance.. optimize table blah1, blah2, etc.. On Mon, 18 Aug 2003, Vinod Bhaskar wrote: Hi Friends, In my linux server (LOCAL), accessing data through the PHP scripts from MySQL (3.23.41) tables is taking more time. Earlier it was very fast. Nowadays some reports take more than five minutes. Earlier they used to come within a minute. What may be the problem and what do I have to do to speed up the process? Regards, Vinod. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Very strange MySQL problem !
Possible? Are you sure all connection attempts fail, not just insert attempts? Server B does some updates/deletes... Chances are this causes some table locks, which makes Server A unable to perform its inserts until Server B releases the lock. On Wed, 13 Aug 2003, Keith Bussey wrote: Hi, I am having a very weird problem with one of my database servers. Let's take the following setup: 10 load-balanced servers (I'll call them ServerA) 1 admin server (ServerB) 1 database server (DB1) Now ServerA has a function to insert a row into a table on DB1 whenever someone hits certain pages (for stats collection). There are probably about 1000 inserts being done per second. ServerB has a script that runs every 20 minutes, which does a very quick/simple select from DB1, then loops through the results and does updates/deletes on a different database server. Every time the script on ServerB runs, all attempts to connect to DB1 from ServerA fail. This causes major problems like causing server load on ServerA to rise dramatically. I have checked when the script runs, everything on DB1 is normal. Server load, mysql processes, no locks, etc... I also have it logging the connection failures with mysql_error but mysql_error turns up blank. Can anyone try and help me figure out what's going on here!? Why won't it connect during the period the script runs and why is mysql_error blank? Webservers are FreeBSD 4.8-Stable, DB1 is Redhat 7.3 running MySQL-standard 4.0.13 Thanks!! -- Keith Bussey Wisol, Inc. Chief Technology Manager (514) 398-9994 ext.225 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: MySql Replication through a tunnel.
Consider using freeswan (http://www.freeswan.ca) to set up a VPN between the 2 servers.. that way you can replicate between tunnel addresses.. Or you can spend some cash and buy some vpn appliances.. On Thu, 7 Aug 2003, System wrote: Hello All, How will I set up Mysql Replication between two redhat 7.3/9.0 boxes? I want the replication to happen through a secure tunnel between these two. I am following the steps that are given on the official Mysql website. Just want to know if there is any easy howto to set this up using a tunnel. Any comments will be appreciated. Regards, Tina. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Very strange MySQL problem !
On Wed, 13 Aug 2003, Keith Bussey wrote: Are you sure all connection attempts fail? not just insert attempts? Yes, i have it write to my log if the sock is empty (mysql_connect fails)...before it gets to the insert But you mention mysql reports no connection errors... perhaps wait for an update, and from ServerA, use mysql client: mysql --user=asdf --password=asdf --host=DB1 see if you can get online.. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Very strange MySQL problem !
I'd double check this cronjob script... possible scenario.. kibosh the idea if you can prove it's invalid. ServerB has a script that runs every 20 minutes, which does a very quick/simple select from DB1, then loops through the results and does updates/deletes on a different database server. Running a select on a server with 1000 inserts a second will take a lot longer than if the query had a table lock. Also if the query has anything to do with stats, etc, not having a read lock while inserts are going on could affect results and result speed. Perhaps it acquires a read lock before running the query? I'd dig thru the code/sql / use show processlist to disprove this theory. On Wed, 13 Aug 2003, Keith Bussey wrote: That's the problem, when the script runs, I can connect from serverA fine!...but seems the webpages can't ;/ -- Keith Bussey Wisol, Inc. Chief Technology Manager (514) 398-9994 ext.225 Quoting [EMAIL PROTECTED]: On Wed, 13 Aug 2003, Keith Bussey wrote: Are you sure all connection attempts fail? not just insert attempts? Yes, i have it write to my log if the sock is empty (mysql_connect fails)...before it gets to the insert But you mention mysql reports no connection errors... perhaps wait for an update, and from ServerA, use mysql client: mysql --user=asdf --password=asdf --host=DB1 see if you can get online.. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: DB Performance - Celeron vs. P4
The fact that you have several million rows may indicate that you have an I/O problem, not CPU.. do some benchmarking, and perhaps the solution is going to (if not already on) SCSI drives, or some kind of raid configuration (recommend raid 0+1). Or if you want to keep costs low.. perhaps use mysql's built-in virtual raid feature, which spans the table over multiple ide drives (ideally on different channels).. I've got several celeron servers with u160 scsi raid, and they smoke! good luck! On Wed, 6 Aug 2003, Jonathan Hilgeman wrote: Currently, I have a Celeron 1.2 Ghz server with 512 RAM, and I'm considering moving to a P4 2 Ghz with the same amount of RAM. I have a few specific tables with several million rows of data, and it takes quite a long time to process that data on my current server. Does anyone have a good idea of the type of performance increase I'd see if I moved to a P4 server? I'm hoping to see a response like, Oh yeah - I moved to a P4 from a Celeron and operations that used to take 10 minutes now take 1 minute or less - all because MySQL has special options to take full advantage of the P4's power. Or something like that. fingers crossed - Jonathan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Please Help
I'd cross post to the mysql-java/jdbc mailing list... Most likely you need to modify mysql config to allow larger packet sizes.. search the list archive/website for max_allowed_packet info.. On Fri, 8 Aug 2003, Ma Mei wrote: Dear administrator, Now I have a quesion and want to get your help. When I insert an image file data (data size 64KB) to a BLOB field of MySQL database by com.mysql.jdbc.driber,there is error. Error message as follows: aq.executeQuery:Communication link failure:comm.mysql.jdbc.packetTooBigException The part of my program as follows: .. FileInputStream fis = new FileInputStream(untitl2.gif); .. conn = DriverManager.getConnection(jdbc:mysql://dbgserver.ihep.ac.cn/bsrf?user=bsrfpassword=bsrfuseUnicode=truecharacterEncoding=Gb2312); String ins =insert into myimg values(?,?); PreparedStatement stmt = conn.prepareStatement(ins); System.out.println(Test1*); stmt.setInt(1,1001); try { int len= fis.available(); System.out.println(len); stmt.setBinaryStream(2,fis,len); System.out.println(Test***2*); int rowsupdated = stmt.executeUpdate(); // When program run in here , it appears error. System.out.println(Test3*); System.out.println(RowsUpdated= +rowsupdated); } catch(IOException ex) { System.out.println(IOException:+ex.getMessage());} Could you help to relve this quesion as soon as. Thank you very much. I am looking forward ro hearing from you. Best Regards, Ma Mei --- Ma Mei Computing Center Institute of High Energy Physics P.O.Box 918 Ext.7 Beijing 100039 P.R. China Phone: (8610) 88235037 FAX: (8610) 88236839 E-Mail: [EMAIL PROTECTED] - -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
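A packetTooBigException on a 64KB insert usually means the server's max_allowed_packet is smaller than the statement. A minimal sketch of the server-side fix, assuming MySQL 4.0-era config syntax (the 16M value is just an example; size it above your largest BLOB):

```ini
# my.cnf on the MySQL server (4.0-era "set-variable" syntax;
# later versions accept max_allowed_packet=16M directly)
[mysqld]
set-variable = max_allowed_packet=16M
```

Restart mysqld after changing it; the client side (Connector/J here) can also enforce its own packet limit, so check both ends.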
Re: max clients
I'm not sure you'd want to do it that way... Perhaps 5+ replicated boxes from a master that share the queries equally (hardware load balancer).. Might be cheaper in hardware than buying one heavy-horsepower box.. On Thu, 31 Jul 2003, NEWMEDIAPLAN wrote: Can mysql handle 5000 concurrent webusers sending queries to the db through a web search engine. Is it possible (with a very big server/hw) ? Thanks. Roberto -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: mysql_history file?
It's like .bash_history: command history (used mostly with the up/back keys). If you want to get rid of it for good: ln -sf /dev/null ~/.mysql_history On Thu, 31 Jul 2003, Jean Hagen wrote: We're just getting started with MySQL on Linux; I was browsing my home directory and found a file called .mysql_history. It contains a history of all mysql commands I've entered - including logging on, setting passwords for users, etc. Sure enough, other users of mysql have the same file. Questions: What are these files, are they mandatory, or can I turn off this logging? Do we need to keep an eye on growth of these files, and truncate or archive them? Most importantly - yikes, seems scary to have a clear text file with a record of MySQL username and password create commands. Any thoughts on this?? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: max clients
Hopefully Jeremy Z will chime in.. he's probably hit it before ;) On Thu, 31 Jul 2003, NEWMEDIAPLAN wrote: I was considering different boxes. But I'm curious to know if anyone here knows the possibility we have with mysql... just to foresee the crash. Just a software matter assuming we have a very big server :) I saw max_connections and things like this. What's the real limit, tested? |-Original Message- |From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] |Sent: Thursday, July 31, 2003 8:40 PM |To: NEWMEDIAPLAN |Cc: [EMAIL PROTECTED] |Subject: Re: max clients | | |I'm not sure you'd want to do it that way... Perhaps 5+ replicated boxes from |a master that share the queries equally (hardware load balancer).. Might |be cheaper in hardware than buying one heavy-horsepower box.. | | | |On Thu, 31 Jul 2003, NEWMEDIAPLAN wrote: | | Can mysql handle 5000 concurrent webusers sending queries to the |db through | a web search engine. | Is it possible (with a very big server/hw) ? | Thanks. Roberto | | | | -- | MySQL General Mailing List | For list archives: http://lists.mysql.com/mysql | To unsubscribe: |http://lists.mysql.com/[EMAIL PROTECTED] | | -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: functions
Also keep in mind: even if, for example, PHP is faster at certain functions, that time plus the time to pull the data from MySQL and set it up for manipulation in PHP could be more than the cost of asking MySQL to do all the work and just return a small result set.. Just use a simple timer class to write two different scripts and time them.. On Sun, 27 Jul 2003, Doruk Fisek wrote: Hi, I am developing applications using PHP and MySQL. There are various functions in MySQL that are also implemented in PHP. Such as date arithmetic, string, numeric, etc... Do I get better performance if I do them in MySQL (or PHP) and why? Doruk -- FISEK INSTITUTE - http://www.fisek.org -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Log
I use an abstraction layer between MySQL and the application's db calls to handle this.. typically it's only enabled in development, to work out any bugs.. Might want to look at something like that.. On Thu, 24 Jul 2003, Miguel Perez wrote: Hi: I have a question: does anyone know if there exists a log in mysql to store only incorrect queries or queries that cause a code error in mysql. Greetings and thnx in advanced. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: store video file into MySQL database server
First, search the list for binary storage to read about the pros/cons (it's been debated before). If you still want to do it, either use LOAD_FILE or a loader application... I don't recommend using LONGBLOB to store big files.. Use the 64KB BLOB type and chunk the data across rows.. better for streaming it out, if that's what you're planning to do.. example PHP code @ http://php.dreamwerx.net/forums/viewtopic.php?t=6 On Tue, 22 Jul 2003, Steven Wu wrote: Hi Need emergency help: I am doing some project use the MySQL to store the video file. The format of video is either avi or mpg. However I can not successfully insert the video file into the cooresponding field of a table by using the LOAD_FILE function. Does anyone know how to store video into MySQL database server ? Please help me, thank you in advance. The following is my table and SQL code. CREATE TABLE GAME ( GAME_ID INTEGER NOT NULL PRIMARY KEY, GAME_NAMEVARCHAR (20), VIDEO LONGBLOB ); INSERT INTO GAME GAME_ID, GAME_NAME, VIDEO VALUES(2, 'Termonator2', LOAD_FILE(/tmp/tm.mpg); My email is [EMAIL PROTECTED] Steven Wu -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
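A minimal sketch of the chunked layout (table and column names are hypothetical): split the file into 64KB pieces keyed by a sequence number, so it can be streamed back in order instead of buffering one giant LONGBLOB:

```sql
CREATE TABLE video_chunks (
  game_id  INTEGER NOT NULL,   -- which video the piece belongs to
  seq      INTEGER NOT NULL,   -- chunk order within the file
  data     BLOB,               -- up to 64KB per row
  PRIMARY KEY (game_id, seq)
);

-- Stream the file back out piece by piece:
SELECT data FROM video_chunks WHERE game_id = 2 ORDER BY seq;
```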
Re: store video file into MySQL database server
If you actually get binary data loaded into your table, DO NOT run a select * from table command.. It will stream out all the data and it's not pretty.. instead run a select col1, col2, length(binarycol) .. On Tue, 22 Jul 2003, Steven Wu wrote: Hi Jeremy D. Zawodny: I did not get any error message. It hust said zero column change and the filed, VIDEO contains NULL data after I give SELECT * from GAME command. Steven Wu - Original Message - From: Jeremy Zawodny [EMAIL PROTECTED] To: Steven Wu [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Tuesday, July 22, 2003 12:12 PM Subject: Re: store video file into MySQL database server On Tue, Jul 22, 2003 at 12:03:44PM -0700, Steven Wu wrote: Hi Need emergency help: I am doing some project use the MySQL to store the video file. The format of video is either avi or mpg. However I can not successfully insert the video file into the cooresponding field of a table by using the LOAD_FILE function. Could you tell us what error message MySQL gives you? -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ MySQL 4.0.13: up 12 days, processed 369,853,118 queries (335/sec. avg) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: mysql connection error
Keep in mind you need two things to happen: A) the MySQL server has to be bound/listening on the IP/interface you need it on; this is typically configured in the my.cnf configuration file. B) you need to ensure the MySQL privileges allow access from the other hostnames/IP addresses. If MySQL isn't listening on the required interface, there is no way a TCP/IP connection can be established... It is very possible that the application running on the same server is using the MySQL socket to connect to the SQL server instead of TCP/IP... Start by ensuring the server is listening for TCP/IP on the correct interface.. If you can send us your my.cnf, it would help.. unless MySQL was perhaps configured without networking (I believe that is a valid configure option, if I'm not mistaken). On Thu, 17 Jul 2003, zafar rizvi wrote: hi No its not listening and giving same error ERROR 2013: Lost connection to MySQL server during query how can i allow a ip to connect mysql database from other system. plz explain me little (i have not yet idea to use mysql fully) Thanks ZafAr - Original Message - From: [EMAIL PROTECTED] To: zafar rizvi [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Thursday, July 17, 2003 10:11 AM Subject: Re: mysql connection error It says lost connection during the query.. but I'd be suspicious of whether it even successfully connected/authenticated.. I'd do a quick check first to ensure your mysql server is listening on the 192.xxx interface.. if it's unix, netstat -atn On Thu, 17 Jul 2003, zafar rizvi wrote: hi I am running one centralized mysql server, its runnig fine and conneded with localhost and his own ip very fine. But when i want to connect him from remote system or other lan system it give me errror like that. [EMAIL PROTECTED] root]# mysql -h 192.168.0.63 mysql ERROR 2013: Lost connection to MySQL server during query Anybody has idea what's wrong and where is ? I get no answer from searching on google with that error.
Waiting for reply ThAnks ZaFAr -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
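Putting the two halves together (interface binding and privileges), a sketch assuming MySQL 4.0-era configuration; the address and account names are examples only:

```ini
# my.cnf -- make sure networking is on and bound to a reachable interface
[mysqld]
# skip-networking       <- must NOT be enabled if remote clients use TCP/IP
bind-address = 192.168.0.63
```

After restarting, grant the remote hosts access, e.g. GRANT ALL ON db.* TO user@'192.168.0.%' IDENTIFIED BY 'secret'; followed by FLUSH PRIVILEGES; — and netstat -atn | grep 3306 should then show the server listening on the LAN address rather than only 127.0.0.1.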
Re: Sun Cobalt Web Server RTX
If you have a compiler.. probably any version you want... I'm pretty sure I've compiled mysql on an older qube in the past (took forever).. Cobalt may have binary distribution packages available for download.. I'd check their site first.. may save you some time... On Thu, 17 Jul 2003, Clint S. Jones wrote: What type of MySql can I install on a cobalt web server??? Please help... not sure... Clint S. Jones Computer Teks, Inc. President 1520 East Sunrise Blvd. Suite: 202 Fort Lauderdale, FL 33304 (954) 465-5516 (954) 767-6150 fax [EMAIL PROTECTED] www.webingenuity.net http://www.webingenuity.net/ www.computerteks.net http://www.computerteks.net/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Java API to mySQL
Welcome to the world of JDBC ... MySQL calls it Connector/J @ http://www.mysql.com/products/connector-j/index.html On Thu, 17 Jul 2003, kalika patil wrote: Hello I want to know if there is a java API to mySQL like the ones available for C and C++. Bye Kalika -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: too many connection error driving me nutz, need advice
Suggestion: make a small script called closeall.php .. basically it has some code to force closed any open MySQL connections (be sure to run a close for all opened handles). I have seen some sites' code that actually opens multiple connections to the same database.. Add this file into php.ini under the auto_append_file setting (it gets run after the main script is done) .. Perhaps something is getting left open/not closing properly? On Thu, 17 Jul 2003, Eben Goodman wrote: I run a site that gets a fair amount of traffic throughout the day. For days at a time the site runs smoothly, and then, at seemingly random intervals I get the mysql error Too many connections. The only solution for this is to stop the mysql service and restart it. I changed the max_connections variable to allow for 300 connections instead of the default 100. When I view processes when the error occurs the process list is totally flooded with mysql processes. I am using php to establish connections and using the mysql_connect() function, not mysql_pconnect. I used to use the pconnect function and it seemed to produce the Too many connections error more frequently. I'm at a bit of a loss with this, any tips on how to address this are greatly appreciated. thanks, Eben -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: too many connection error driving me nutz, need advice
On Thu, 17 Jul 2003, Eben Goodman wrote: This comment confuses me: I have seen some sites' code that actually opens multiple connections to the same database.. I have worked on some larger sites that 30+ past and current developers worked on.. Some good, some terrible.. The code typically starts up 1 database connection and all scripts should use it (the correct way). This code had programmers using things like DBLibraries, DBIMPTOOL, PHPLIB database objects, etc.. to make more connections for their scripts/includes.. Scenario: 1) script starts - 1 connection opened for the duration (this should be it!) 2) does some work 3) a script gets included that a different programmer wrote; they use a DBLibrary object/etc to establish a connection to the database (current connections = 2) 4) the madness continues. I've seen as many as 5 unique connections to a DB to build 1 output page.. This is not good and required much cleanup.. But it is known to happen on larger codebases with many wacky developers. If I am using php's mysql_connect() function, then every time a query is run this function gets run. Wouldn't this mean that for every query a new connection is opened to the same database? No, mysql_connect opens a handle (connection) .. and all subsequent queries using mysql_query are routed through it.. [EMAIL PROTECTED] wrote: Suggestion: make a small script called closeall.php .. basically it has some code to force closed any open MySQL connections (be sure to run a close for all opened handles). I have seen some sites' code that actually opens multiple connections to the same database.. Add this file into php.ini under the auto_append_file setting (it gets run after the main script is done) .. Perhaps something is getting left open/not closing properly? On Thu, 17 Jul 2003, Eben Goodman wrote: I run a site that gets a fair amount of traffic throughout the day. For days at a time the site runs smoothly, and then, at seemingly random intervals I get the mysql error Too many connections.
The only solution for this is to stop the mysql service and restart it. I changed the max_connections variable to allow for 300 connections instead of the default 100. When I view processes when the error occurs the process list is totally flooded with mysql processes. I am using php to establish connections and using the mysql_connect() function, not mysql_pconnect. I used to use the pconnect function and it seemed to produce the Too many connections error more frequently. I'm at a bit of a loss with this, any tips on how to address this are greatly appreciated. thanks, Eben -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: first time to use MySQL on MS Platform with PHP.
Those are php notices (not warning or errors) .. change your ini error reporting settings.. you can either disable those messages or correctly initialize all variables before trying to use them. I'd suggest posting your msg to PHP list.. On Wed, 16 Jul 2003, Prabu Subroto wrote: Dear my friends... I am trying to develop a database application with PHP Version 4.3.2, MS Window 2000, MySQL 4.0.13-nt and Apache 2. I tried to insert a record onto my MySQL but I got this error messages. What do I have to defined to overcome this problem? Notice: Undefined variable: productid in C:\Programme\Apache Group\Apache2\htdocs\amt\cgi\productsave.php on line 15 Notice: Undefined variable: itemname in C:\Programme\Apache Group\Apache2\htdocs\amt\cgi\productsave.php on line 15 Notice: Undefined variable: description in C:\Programme\Apache Group\Apache2\htdocs\amt\cgi\productsave.php on line 15 Notice: Undefined variable: price in C:\Programme\Apache Group\Apache2\htdocs\amt\cgi\productsave.php on line 15 Notice: Undefined variable: information in C:\Programme\Apache Group\Apache2\htdocs\amt\cgi\productsave.php on line 16 Saved successfully Here is the code of productsave.php : ?php include(functions.php); $sql= insert into product ( productid, itemname, description, price, information ) values ('$productid', '$itemname', '$description', '$price', '$information') ; $conn=connection(); choosedb(); if (mysql_query($sql,$conn)){ echo Saved successfullybr; } else echo mysql_error(); mysql_close($conn); ? 
Here is the code of the form which calls productsave.php (it's cgi program): form action=cgi/productsave.php method=POST centerbuData Entry - Product List/u/b/center br table align=center border=0 trtd align=left1./tdtd align=leftProduct ID/tdtd align=left :/td td align=left input type=text name=productid readonly=yes/td/tr trtd align=left2./tdtd align=leftItem Name/tdtd align=left:/td td align=leftinput type=text name=itemname/td/tr trtd align=left3./tdtd align=leftDescription/tdtd align=left:/td td align=leftinput type=text name=description/td/tr trtd align=left 4./tdtd align=leftPrice/tdtd align=left:/td td align=leftinput type=text name=price/td/tr trtd align=left5./tdtd align=leftInformation/tdtd align=left:/td td align=leftinput type=text name=information/td/tr trtd colspan=4 align=centerinput type=submit value=Save src=img/ok.png input type=reset value=Clear src=img/no.png/td/tr /table /form Please tell me. Thank you very much in advance. __ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: mysql connection error
It says lost connection during the query.. but I'd be suspicious of whether it even successfully connected/authenticated.. I'd do a quick check first to ensure your MySQL server is listening on the 192.xxx interface.. if it's unix, netstat -atn On Thu, 17 Jul 2003, zafar rizvi wrote: hi I am running one centralized mysql server, its runnig fine and conneded with localhost and his own ip very fine. But when i want to connect him from remote system or other lan system it give me errror like that. [EMAIL PROTECTED] root]# mysql -h 192.168.0.63 mysql ERROR 2013: Lost connection to MySQL server during query Anybody has idea what's wrong and where is ? I get no answer from searching on google with that error. Waiting for reply ThAnks ZaFAr -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: How can I display images from a mySQL Database in a web page?
This is kinda offtopic.. it depends on what frontend you are using to access MySQL (PHP, Java, Perl, etc.) .. You just need to pull the binary data and output it with the correct HTTP headers and it will show up in a browser.. search the list for more info. www.php4.com has an example using PHP.. good luck. On Tue, 15 Jul 2003, Dan Anderson wrote: I have created a BLOB field to store images. Is there any way to embed them within HTML with something like: image start: jpeg /image Thanks in advance, Dan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Recomended RAM for production server. 3Gb overboard?
Lord, yeah... don't get me wrong, it would be nice.. but I'd start with say 1GB based on what you're doing now.. just make sure the server has lots of slots open for future upgrades if required (don't let them stick in 256MB sticks taking up 4 slots; use larger sticks to keep slots open). On Thu, 10 Jul 2003, Brad Brad wrote: Hi, I'm looking at buying a new production server, as soon as i mentioned mysql the Dell salesmen started pushing 3-4Gb of ram, i'm not sure if this is excessive though. The OpenBSD server is 2.8Ghz and may have as many as 230 mysql sessions with 14 queries a second, the rest will be sleeping (ftp sessions maintain connection). The db directory is 80mb total, and the server is currently handling 14 requests/s with all queries being simple INSERT or SELECT's. The server also does low levels of firewall/ftp/http-perl/email. The old server seems quite happy with 512mb. Do you guys think 3-4Gb is fair or excessive for this workload? Of course i wouldn't mind some headspace to grow. Unfortunatly i don't expect 6x increase any time soon though :) Thanks :) Brad. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Is it possible to dump images into a database?
take a look at: http://www.php4.com/forums/viewtopic.php?t=6 or search the mailing list archive.. there are plenty of threads talking about this: For list archives: http://lists.mysql.com/mysql On Wed, 9 Jul 2003, Dan Anderson wrote: Can anyone point me to a reference on how to insert images into a column in a mySQL database -- or is that not possible? Thanks in advance, Dan Anderson -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: NAS vs SAN for data directory
Avoid NAS... that's like accessing MySQL via NFS ... Internal RAID, depending on controller type and disk configuration, could be faster than an external SAN... But chances are the external SAN has a lot more scalability as far as adding more controllers, cabinets and disks... If it were my choice, and you've had great success with SANs before as mentioned .. stay the route... On Tue, 8 Jul 2003, SAQIB wrote: Hello All, I am trying to decide which storage I should use for the Data Directory. My application has lots of 'SELECT's (80%) and fewer UPDATEs/INSERTs (20%). I have the following choices of Data storage 1) Xiotech SAN (66 Mhz FCAL) 2) NAS 3) Internal SCSI 3 RAID 5 Will I achieve any better performance using one storage vs the other storage? In the past (for about 2 years), I have used SAN with 33Mhz FCAL, and the performance has been quite exceptional. And I have encountered no problems. Any suggestions? Thanks Saqib Ali http://www.xml-dev.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Optimal Disk Configuration
I just assumed your question was about MySQL data only.. For the whole machine I use: OS - RAID 1 (2 x 18.2GB, 10kRPM); DATA - RAID 0+1 (# x 18.2GB, 15kRPM). Usually a dataset is comprised of 6-10 disks.. you could go larger with the drive size.. but more spindles = more throughput. Swap isn't much of an issue, as memory is very inexpensive.. 2-4GB of RAM in a machine configured correctly for MySQL shouldn't hit swap much.. On Sat, 5 Jul 2003, Jim McAtee wrote: That's it? The entire system - OS, swap space and data in a single array? Nothing more involved? [EMAIL PROTECTED] wrote: My favourite for dedicated db servers (good amount of data, but a ton of access, queries/sec) is raid 0+1 ... fast access, but requires a good number of disks.. Be sure to use a good raid controller, multiple channels for the disks if possible.. On Sat, 5 Jul 2003, Jim McAtee wrote: What would be the ideal RAID configuration for a dedicated MySQL db server running on FreeBSD? We're also running some MySQL databases on Windows 2000 Servers. What about the best configuration for a dedicated W2k server running MySQL? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Optimal Disk Configuration
My favourite for dedicated db servers (good amount of data, but a ton of access, queries/sec) is raid 0+1 ... fast access, but requires a good number of disks.. Be sure to use a good raid controller, multiple channels for the disks if possible.. On Sat, 5 Jul 2003, Jim McAtee wrote: What would be the ideal RAID configuration for a dedicated MySQL db server running on FreeBSD? We're also running some MySQL databases on Windows 2000 Servers. What about the best configuration for a dedicated W2k server running MySQL? Thanks, Jim -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Relocating the datadir
Just shut down MySQL, move the data and create a symlink.. or start safe_mysqld with the --datadir= option On Mon, 30 Jun 2003, Claudio Alonso wrote: Hi, I'd like to know if there's a way to change where the datadir is located. I've installed mysql in a Sun Solaris, in the /usr/local/mysql directory, but I want the databases to be located in another filesystem. Is there a way to relocate them? Thanks in advance, --Claudio -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
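The move-and-symlink sequence, sketched as a runnable demo on scratch directories (substitute /usr/local/mysql/data and the real target filesystem; shut mysqld down first with mysqladmin shutdown, and restart it afterwards):

```shell
#!/bin/sh
# Demo of relocating a data directory and leaving a symlink behind.
# The mktemp paths stand in for /usr/local/mysql/data and e.g. /data2.
olddir=$(mktemp -d)                  # current datadir location
newfs=$(mktemp -d)                   # the other filesystem

echo "mydb" > "$olddir/db.opt"       # pretend database file

mv "$olddir" "$newfs/data"           # move the whole data directory
ln -s "$newfs/data" "$olddir"        # symlink at the old path

cat "$olddir/db.opt"                 # old path still resolves -> prints mydb
```

The alternative is to skip the symlink entirely and pass --datadir=/data2/data to safe_mysqld; the symlink approach just avoids touching any startup scripts.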
Re: inserting images to mysql - is it possible?
If you want to store images in the database, use a BLOB column type.. And take a look at this example that a lot of people have based MySQL binary storage on: http://www.php4.com/forums/viewtopic.php?t=6 good luck On Sun, 29 Jun 2003, Digital Directory USA wrote: I am new to php and mysql, I have searched looked over and cannot find a way to do this... can it be done? I have set up the following in mysql. How do I upload the product images to the pPhoto column. Any suggestions? ds_produc Field Type Attributes Null Default Extra pID int(11)Noauto_increment catID int(11)No 0 pName varchar(70)Yes NULL pNum varchar(40)Yes NULL pTeaser textNo pDesc textYes NULL pPrice float(10,2)Yes NULL pSalesPrice float(10,2)No 0.00 pSale char(1)No pPhoto varchar(70)Yes NULL pPhotoWidth int(11)No 0 pPhotoHeight int(11)No 0 pSize varchar(35)Yes NULL pAvail char(1)No Y George Guebely Digital Directory USA, Inc -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Replication for Remote
Shouldn't be a problem.. if you already have a private link between sites you're set; if not, drop in a VPN solution to ensure end-to-end security.. The only main difference between your situation and most (all servers a few feet apart at LAN speed) is the WAN throughput/etc.. MySQL should keep retrying to download updates if it fails due to a communications failure or anything like that.. On Mon, 30 Jun 2003, Rohan Hawthorne wrote: We work in a largely distributed environment, with servers in the main server room being thousands of kilometers (that's about 3/4 of a mile) from clients. We are hoping to set up a replication system so that select queries go to the local server (slave) at the client and update queries go to the server (master) at the main server room. Has anyone else researched this idea that may want to discuss ? We are using bundled mysql/linux/php/apache servers. ICT Helpdesk: 8939 7116 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Best Pratices for mySQL Backups in Enterprise
I'd instead set up a 2nd backup server that's a slave to the master; it replicates all the time and stays in sync. At backup time, stop replication/mysql, back up the data to tape .. restart MySQL and it will catch up/resync to the master.. On Thu, 26 Jun 2003, SAQIB wrote: mysqlhotcopy does your locking for you. So running ./mysqlhotcopy dbase /path/to/backup/dir is perfectly safe while database operations (select, insert, update etc) are being performed? --- Saqib Ali http://www.xml-dev.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
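The nightly cycle on the backup slave might look like the sketch below. It's written as a dry run that only prints each step (the credentials, paths, and the 4.0-era SLAVE STOP/START syntax are assumptions; remove the echo in run() to execute for real):

```shell
#!/bin/sh
# Dry-run sketch: stop replication, archive the datadir, resume replication.
MYSQL="mysql --user=backup --password=secret"
LOG=""

run() {
    LOG="$LOG $*"        # remember each step for inspection
    echo "$@"            # dry run: print the command instead of executing it
}

run $MYSQL -e "SLAVE STOP"                   # freeze the slave at a consistent point
run tar czf /backup/mysql-$(date +%Y%m%d).tar.gz /usr/local/mysql/data
run $MYSQL -e "SLAVE START"                  # slave catches back up to the master
```

The nice property of this design is that the master never blocks: all locking and I/O for the backup happens on the expendable slave.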
Re: index failure, cannot generate.
I'm guessing blob data? ~1500MB / 400rows = ~3.75MB /row On Fri, 6 Jun 2003, Jeremy Zawodny wrote: On Fri, Jun 06, 2003 at 09:36:08AM +0200, H M Kunzmann wrote: Hi all. I am running RH9.0 with MySQL 4.0.13 I am trying to create a fulltext index on a 1.5GB table with 400 records. I'm curious. What's the structure for your 1.5GB table with only 400 records look like? Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/ MySQL 4.0.13: up 3 days, processed 118,789,190 queries (395/sec. avg) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Speeding up copy to tmp table
So you have no redundancy? 5 arrays of RAID 0 (2 disks each) = lose one disk and you're pooched. Suggestion: reconfigure to RAID 0+1 (more than 2 disks per set) for redundancy plus added performance? On Fri, 6 Jun 2003, Sam Jumper wrote: What steps can be taken to speed up queries that show state=copy to tmp table in show processlist? I am running MySQL on a dual Xeon system running Windows 2000 Server with 10 10K rpm drives in 5 RAID 0 arrays and plenty of RAM that isn't being utilized. Sam
Re: optimize entire db
I've used a simple shell script in the past, run from cron. I see someone just posted a Perl solution; I've used a PHP one as well.

#!/bin/sh
# DB OPTIMIZE SCRIPT - !! WARNING: TAKES TABLE LOCKS DURING OPTIMIZE !!
user=root
pass=secret
host=10.1.1.1
db=mydb
report=[EMAIL PROTECTED]
mysql=/usr/local/bin/mysql

## BUILD TABLE LIST (tail strips the column header from the output)
flag=0
for table in `echo "show tables" | $mysql --host=$host --user=$user --password=$pass $db | tail -n +2`
do
    if test $flag = 1
    then
        dblist="$dblist, "
    fi
    dblist="$dblist$table"
    flag=1
done

## RUN OPTIMIZE AND MAIL THE REPORT
echo "optimize table $dblist" | $mysql --host=$host --user=$user --password=$pass $db > /tmp/db_optreport
mail -s "Nightly DB Optimize" $report < /tmp/db_optreport
rm /tmp/db_optreport

On Tue, 3 Jun 2003, Mark Stringham wrote: Hi - Can anyone give the command line syntax for optimizing the entire db? Thanks in advance. MS
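If your MySQL version ships the mysqlcheck client, the whole loop above collapses to one command. A dry-run sketch (the host/credentials are the same illustrative values as in the script; MYSQLCHECK defaults to echo so the block just prints the command it would run):

```shell
#!/bin/sh
# One-command alternative to the table-list loop: mysqlcheck's --optimize
# runs OPTIMIZE TABLE on every table in the named database.
optimize_all() {
    MYSQLCHECK="${MYSQLCHECK:-echo mysqlcheck}"   # set MYSQLCHECK=mysqlcheck to run for real
    $MYSQLCHECK --optimize --host=10.1.1.1 --user=root --password=secret mydb
}

optimize_all
```

The same table-locking warning applies: OPTIMIZE TABLE locks each table while it runs, so schedule it off-peak.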
Re: How to fill a BLOB with binary data
Search the mailing list archives for this... There is a link to this article: http://www.php4.com/forums/viewtopic.php?t=6 I wonder: if the mailing-list search were powered by Google, would more people use it? On Tue, 27 May 2003, Thomas Hoelsken wrote: Hi, I would like to fill a BLOB with binary data through SQL statements or any other MySQL tool and don't know how!? Are there suggestions? Thanks, Thomas Hoelsken
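For loading a file into a BLOB without PHP, one plain-SQL route is a hex literal: hex-encode the file's bytes and emit an ordinary INSERT that any client (including the mysql CLI) can run, sidestepping every escaping problem with newlines, NULs, and quotes. A minimal sketch using standard Unix tools; the table and column names (blobs, data) are made up for illustration:

```shell
#!/bin/sh
# Build an INSERT statement carrying the file's bytes as a MySQL hex
# literal (0x...), so no characters need escaping.
insert_stmt() {
    # od dumps the bytes as hex pairs; tr strips the whitespace between them
    hex=$(od -An -v -tx1 "$1" | tr -d ' \n')
    printf "INSERT INTO blobs (data) VALUES (0x%s);\n" "$hex"
}

# demo: a two-byte file containing "AB" (0x41 0x42)
printf 'AB' > /tmp/demo.bin
insert_stmt /tmp/demo.bin    # INSERT INTO blobs (data) VALUES (0x4142);
```

Pipe the output straight into the mysql client; the main caveat is that the statement is roughly twice the file's size, so server-side maximum packet limits still apply for large files.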
RE: How to fill a BLOB with binary data
Mascon can do it; it's a Win32 app. On Tue, 27 May 2003, Thomas Hoelsken wrote: Hi, isn't there any other solution besides using PHP just for filling a BLOB!? I don't need PHP and would prefer another way! Thanks, Thomas
RE: How to fill a BLOB with binary data
Depending on the size of the data there are a few different methods... Do what most people do: use plain INSERT statements with the data properly escaped, and it should go in without any problem. Pulling data out is pretty quick: I can stream binary data out of my MySQL storage servers via our FTP gateway at about 4MB/sec (~4000k/sec), which is pretty quick for the crappy development server I've got. You mention streaming in: I've got an FTP gateway to MySQL storage that I use; it's written in Java, but you could also implement one in C++. On Tue, 27 May 2003, Adam Clauss wrote: Copied from my original message: I am writing a database that will contain a blob field for some binary data. My question is: what is the most efficient way to load this binary data in? I could turn it into a string and pass it into an INSERT/UPDATE statement, but I am afraid that problems will arise when it is converted into a string (with newlines, nulls, etc.). I considered saving it to a file and then using LOAD_FILE, but: a) that ALSO involves turning it into a string, though since MySQL is doing it, it might end up OK; b) I'm not sure how the performance will be when I have to save it to a file and then have MySQL read it back in. The app is using the C++ API and I was wondering if there was any other way to directly 'stream' the data in? Thanks, Adam Clauss [EMAIL PROTECTED] -Original Message- From: Mike Hillyer [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 27, 2003 12:31 PM To: Adam Clauss; [EMAIL PROTECTED] Subject: RE: How to fill a BLOB with binary data In what language? Mike Hillyer www.vbmysql.com -Original Message- From: Adam Clauss [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 27, 2003 11:30 AM To: Mike Hillyer; [EMAIL PROTECTED] Subject: RE: How to fill a BLOB with binary data In my case, it's neither. Some random binary data. Could be ANY kind of file... (I'm doing mine programmatically).
Adam Clauss [EMAIL PROTECTED] -Original Message- From: Mike Hillyer [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 27, 2003 12:23 PM To: Adam Clauss; [EMAIL PROTECTED] Subject: RE: How to fill a BLOB with binary data Like I said, for image and text blobs, use MyCC and it should work. Mike Hillyer www.vbmysql.com -Original Message- From: Adam Clauss [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 27, 2003 11:18 AM To: [EMAIL PROTECTED] Subject: RE: How to fill a BLOB with binary data I am in need of help with the same question (sent late last night, but apparently got passed over). Adam Clauss [EMAIL PROTECTED]
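The streaming question in this thread is usually answered by the same chunking trick mentioned at the top: break the blob into fixed-size pieces carrying a sequence number, insert each piece as its own row, and reassemble by concatenating the pieces in sequence order. A self-contained sketch of the split-and-reassemble idea using standard tools; the sizes, filenames, and the fake blob are all illustrative:

```shell
#!/bin/sh
# Sketch of blob chunking: split into fixed-size pieces (the split suffix
# acts as the sequence number), then reassemble by concatenating in order.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# stand-in for a large blob: 100,000 bytes of 'x'
dd if=/dev/zero bs=1000 count=100 2>/dev/null | tr '\000' 'x' > blob.bin

# split into 16,000-byte chunks: chunk.aa, chunk.ab, ... chunk.ag
split -b 16000 blob.bin chunk.

# each chunk would be INSERTed with its sequence number; reading the blob
# back = fetch the pieces ORDER BY sequence, then concatenate:
cat chunk.* > restored.bin
cmp blob.bin restored.bin && echo "reassembled OK"
```

Besides dodging per-row and packet size limits, the smaller inserts keep replication responsive, since no single huge statement has to be shipped to and re-executed on the slaves.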