Rick is dead-on correct; I call it chunking blob data. There is an
article here on a simple implementation:
http://www.dreamwerx.net/phpforum/?id=1
I've had hundreds of thousands of files in this type of storage before
with no issues.
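For anyone curious what the chunking looks like in practice, here's a minimal sketch, in Python with sqlite3 standing in for MySQL since the linked article is PHP; the table and column names are my own invention, not the article's:

```python
import sqlite3

CHUNK_SIZE = 64 * 1024  # keep each row well under any per-query packet limit

def store_file(conn, name, data):
    """Split a binary payload into fixed-size chunks, one row per chunk."""
    cur = conn.execute("INSERT INTO files (name) VALUES (?)", (name,))
    file_id = cur.lastrowid
    for seq, off in enumerate(range(0, len(data), CHUNK_SIZE)):
        conn.execute(
            "INSERT INTO file_chunks (file_id, seq, data) VALUES (?, ?, ?)",
            (file_id, seq, data[off:off + CHUNK_SIZE]),
        )
    conn.commit()
    return file_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE file_chunks (file_id INTEGER, seq INTEGER, data BLOB)")
fid = store_file(conn, "manual.pdf", b"x" * 200_000)  # ~200 KB becomes 4 rows
```

No single INSERT ever carries more than 64 KB, so the server's packet limit never comes into play.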
On Tue, 3 Jul 2007, Rick James wrote:
I gave up on
Interesting, never tried compressing the data, sounds like that might be
a nice add-on.. Do you have any performance numbers you can share? I posted
some performance numbers on one of my implementations some time ago.
I found the thread here:
http://lists.mysql.com/mysql/206337
On Tue, 3
I would love to see an implementation with 1 row for large data that works
well. The main issues I had were that MySQL has a default max packet size
limit (I think it was 16MB in MySQL 3.23 and 1GB in MySQL 4; not sure
about v5). A lot of people don't have control over those settings in their
I don't feel the implementation direction this article takes is good. It
uses single row binary storage, which anyone who has had to deal with
large files knows is a definite issue.
On Sat, 21 Apr 2007, Kevin Waterson wrote:
This one time, at band camp, Michael Higgins [EMAIL PROTECTED]
Here's a good php implementation, you can implement the concept in any
language you like:
http://www.dreamwerx.net/phpforum/?id=1
On Fri, 20 Apr 2007, Michael Higgins wrote:
Hello, all --
I want to set up a database for document storage. I've never worked with
binary files stored in
Here's a great article on how to store pdf/whatever binary as blob chunks:
http://www.dreamwerx.net/phpforum/?id=1
On Wed, 7 Mar 2007, Jay Pipes wrote:
Ed wrote:
Hi All,
I'm trying to figure out how to put a pdf file into a blob field.
I guess a pdf file is a binary file and it
I have to disagree with most; I would store the entire file in the
database, metadata and all. Better security: if you have a backend
database, it's much harder to get at the data than PDFs sitting in a
directory on the webserver. Plus if you ever want to scale to a
multi-webserver environment,
I've built systems that stream tons of data via this method, at times
at some impressive requests-per-second rates. I've also exposed files stored
in this manner via an FTP interface, with servers able to deliver data
in and out of the db storage at near wire speed.
When you're into a load balanced
Don't store binary data in large blobs - You should instead chunk your
data for better performance and no packet limitation issues.
Good implementation article at: http://www.dreamwerx.net/phpforum/?id=1
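Reading a chunked file back out is just as simple; a hedged sketch (Python/sqlite3 for illustration, hypothetical table names):

```python
import sqlite3

def read_file(conn, file_id):
    """Reassemble a chunked blob by fetching the rows in sequence order."""
    rows = conn.execute(
        "SELECT data FROM file_chunks WHERE file_id = ? ORDER BY seq",
        (file_id,),
    )
    # a real server would write each chunk straight to the client
    # instead of joining them all in memory
    return b"".join(row[0] for row in rows)

# demo with two hand-inserted chunks
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file_chunks (file_id INTEGER, seq INTEGER, data BLOB)")
conn.executemany(
    "INSERT INTO file_chunks VALUES (1, ?, ?)",
    [(0, b"hello "), (1, b"world")],
)
payload = read_file(conn, 1)
```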
On Mon, 5 Feb 2007, abhishek jain wrote:
On 2/3/07, abhishek jain [EMAIL PROTECTED]
Check for .err text log files .. they are probably in
/opt/mysql/mysql/data/
called servername.err
servername is the hostname of your box.
On Thu, 28 Dec 2006, Jeff Jones wrote:
Hi!
I'm a rookie, so bear with me...
Keep getting:
Starting mysqld daemon with databases from
Sounds more like it's set up on a SAN.. a NAS is a different type of unit
like a NetApp filer.
I'd have to agree with the other poster, I'm not sure your current config
is valid.
A more typical setup would be that both boxes should have their own
unique SAN partitions, and a high speed network
If you're storing files in mysql, it's best to chunk/shard your data if you're
not doing so already. Example article/code at:
http://www.dreamwerx.net/phpforum/?id=1
On Thu, 16 Nov 2006, Shain Lee wrote:
Hi ,
I wanted to store images, music, videos, etc. in a mysql database. Storing
Sure.. checkout this article:
http://php.dreamwerx.net/forums/viewtopic.php?t=6
Very fast mysql storage implementation in PHP, port the design to
whatever language suits you.
On Wed, 21 Apr 2004, adrian Greeman wrote:
Please excuse a very simple inquiry from a near beginner
If I wish to
On Fri, 12 Mar 2004, Jigal van Hemert wrote:
I've been reading this thread, but I can't see the advantages of storing
files in a database.
I've always had the impression that a *file* system was the appropriate
place to store files.
Scalability, searching, security.. just a few..
I store any kind of files, PDF/word/etc.. I just like not having
lots of directories with 1000's of files each in them... Seems
more organized to me..
On Thu, 11 Mar 2004, Erich Beyrent wrote:
Use the BLOB, Luke!
See your local MySQL manual for details.
We're using BLOBs to store PDF
It does make the database larger.. as far as overhead goes, you
can't just store the file as a bare blob.. You'll need some referencing data in order to
find it and restore it back out of the database..
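That referencing data might look something like this (a hypothetical MySQL sketch, not the exact schema from the article):

```sql
CREATE TABLE files (
  id        INT AUTO_INCREMENT PRIMARY KEY,
  name      VARCHAR(255) NOT NULL,   -- original filename, to restore it out
  mime_type VARCHAR(64),
  size      INT                      -- total bytes, for sanity checking
);

CREATE TABLE file_chunks (
  file_id INT NOT NULL,              -- references files.id
  seq     INT NOT NULL,              -- chunk order for reassembly
  data    BLOB NOT NULL,             -- <= 64KB per row
  PRIMARY KEY (file_id, seq)
);
```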
I just checked out my database (100's of files) which has:
Total file size: 1765.34MB
Mysql
http://php.dreamwerx.net/forums/viewtopic.php?t=6
A storage implementation that is not affected by max_packet_size.
On Thu, 11 Mar 2004, Tomas Zvala wrote:
Hello,
I run into a problem where I need to get contents of BLOB to my php
script. I found out that I'm limited by max_packet_size
Check this article:
http://php.dreamwerx.net/forums/viewtopic.php?t=6
Port the code/design to perl or whatever client language you want.. mysql
couldn't care less once it's got the data (correctly).
On Tue, 9 Mar 2004, Isa Wolt wrote:
Hi,
I would like to save a binary file into a mysql database,
If you're in fact (sounds like it) storing the picture's meta-data (name, size,
owner, etc) and the data (blob of some kind).. I would definitely break
up the design into 2 tables. That way when dealing with the meta-data
table (your RAND() query) there is much less data that needs to be
traversed to
Donny, what do you do? Throw all the values into an array or something
on the client side, and use a random number generator to pull out the
array elements?
I suppose (depending on resultset size) pulling that many rows from server
to client and handling them on the client side could be faster...
On
I'd go with raid 1+0 ... Be a shame to have that much CPU power and become
I/O bound.. This way you've got 4 disks feeding the CPUs instead of 2..
Better performance than raid 5, and only 2 more disks than your current
config.
On Tue, 2 Mar 2004 [EMAIL PROTECTED] wrote:
I have a
Never think enough is enough.. Current operation levels can easily be pushed many
times their current level/ratio in a short matter of time, and databases
can grow rapidly (even though it's not identified here).
I have spec'd boxes before based on someone's recommendations for load, and
then found 2
Are you just copying the files? I'd suggest using mysqldump if you are
not already..
On Tue, 10 Feb 2004, Scott Purcell wrote:
Hello,
I am running a DB on a machine in which I am developing on. Then I have been copying
the contents of ~mysql/data/databasename to another box where I am
By default mysqldump just dumps to stdout.. so you need to redirect it to
a text file.. As for the correct syntax: you need to put the database name after
the username/password (after the mysql options).
Give that a shot.. most likely all the garbage/output whacked out your
session..
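Something along these lines (user and database names are placeholders):

```shell
# database name goes after the connection options; redirect stdout to a file
mysqldump -u myuser -p mydatabase > mydatabase.sql

# then on the target box:
mysql -u myuser -p mydatabase < mydatabase.sql
```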
On Tue, 10 Feb
Read this article for its design on database storage.. I have several
big implementations of this design which are fast, reliable and scalable
(one of them in the medical field as well).
http://php.dreamwerx.net/forums/viewtopic.php?t=6
On Sun, 1 Feb 2004, Yuri Oleynikov wrote:
Hi everyone,
This may have been mentioned.. I have not been receiving messages for 12+
hours now.. And it appears:
Jan 13 01:23:49 cyclone tcplog: smtp connection attempt from 213.136.52.31
Jan 13 01:23:50 cyclone sendmail[10674]: ruleset=check_relay,
arg1=lists2.mysql.com, arg2=213.136.52.31,
This article discusses it briefly:
http://php.dreamwerx.net/forums/viewtopic.php?t=6
I am using this type of design/technology for quite a few clients. Some
storing gigs and gigs of data (images, documents, pdf, anything) over
multiple servers.
The scalability and performance of a well designed
Yes and no.. MySQL itself cannot do this.. If you need to keep growing in
size (on 1 server) you may want to look at some kind of LVM disk array/SAN
where you can keep plugging in disks and extending the volume..
I do kind of what you are looking for with 1 application, but it is all
software
This page has sample article/code how to store any type/size of file in
mysql.. Depending on the application it could be a good idea (such as
revision control or something)
http://php.dreamwerx.net/forums/viewtopic.php?t=6
On Fri, 12 Dec 2003 [EMAIL PROTECTED] wrote:
I am working with a
I'd agree with Chris. I've had a ton of data/files in mysql for years
now with no problems... The throughput in/out is incredible if you implement
the storage handler correctly.
Plus it gives you certain advantages such as security/scalability/etc...
With storing the files on disk, the files
16MB? You mean the max packet per query limit? If you're storing data in
one huge/large blob then you are making a big mistake in my opinion and taking
a huge performance hit... I've got files over 1GB in size in mysql now..
they went in and out at almost filesystem speed...
On Sun, 14 Dec
True initially... What I've done is use a java appserver frontend (Orion)
that's a caching server.. It gets the request, checks if it has the image
in its memory cache; if so, serves it; otherwise goes to the backend and
gets it, stores it in the memory cache, and serves it..
Very fast, and alleviates a lot
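The fetch-or-load pattern itself is tiny; a sketch in Python, with a dict standing in for the appserver's memory cache and a stub for the database fetch:

```python
cache = {}        # stands in for the appserver's in-memory cache
backend_hits = 0  # counts trips to the database backend

def fetch_from_backend(key):
    """Stub for the real blob fetch out of mysql."""
    global backend_hits
    backend_hits += 1
    return b"image-bytes-for-" + key.encode()

def serve(key):
    if key not in cache:                    # miss: go to the backend once
        cache[key] = fetch_from_backend(key)
    return cache[key]                       # hit: served straight from memory

first = serve("logo.png")
second = serve("logo.png")  # second request never touches the backend
```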
This is the article for you:
http://php.dreamwerx.net/forums/viewtopic.php?t=6
Shows how to store large files in database... I've currently got gigs and
gigs of files in mysql using this method..
On Tue, 2 Dec 2003, Jim Kutter wrote:
Hi folks.
I'm storing files in a BLOB table for a number
Might just create a common table that stores messages back and forth.. it
stores sender id, recipient, message, etc..
Each server polls the table every so often (cronjob) for messages addressed to it
and processes them, removing them from the queue.. it's like a simple
message broker..
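One cron pass of such a poller might look like this (Python/sqlite3 sketch; the real thing would connect to the shared MySQL table, and the names here are made up):

```python
import sqlite3

def poll_messages(conn, me):
    """One cron-style pass: grab this server's messages, then dequeue them."""
    rows = conn.execute(
        "SELECT id, sender, body FROM messages WHERE recipient = ?", (me,)
    ).fetchall()
    for msg_id, sender, body in rows:
        # ... act on the message here ...
        conn.execute("DELETE FROM messages WHERE id = ?", (msg_id,))
    conn.commit()
    return rows

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, sender TEXT,"
    " recipient TEXT, body TEXT)"
)
conn.execute(
    "INSERT INTO messages (sender, recipient, body)"
    " VALUES ('server1', 'server2', 'resync please')"
)
handled = poll_messages(conn, "server2")
```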
On Tue, 2 Dec
Be warned about hitting the default max_packet_size limitation of mysql
which will cause large files to not insert.
This link shows another way to overcome that limitation:
http://php.dreamwerx.net/forums/viewtopic.php?t=6
On Mon, 1 Dec 2003, Mickael Bailly wrote:
Here is a sample code in
Maybe look at using a HEAP table? Load it on startup from a datasource..
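A rough sketch of what that could look like. One caveat: HEAP/MEMORY tables can't hold BLOB or TEXT columns, so the blob column in the table below would have to become a VARCHAR or stay on disk:

```sql
-- TYPE=HEAP on 3.x/4.0; ENGINE=MEMORY on later versions
CREATE TABLE spid_mem (
  recordname VARCHAR(20) PRIMARY KEY,
  data VARCHAR(255) NOT NULL
) TYPE = HEAP;

-- load it at startup from the on-disk copy
INSERT INTO spid_mem SELECT recordname, data FROM Spid_1__0;
```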
On Wed, 5 Nov 2003, Arnoldus Th.J. Koeleman wrote:
I have a large table which I like to store into memory .
Table looks like
Spid_1__0
(recordname varchar(20) primary key,
data blob not null
)
http://php.dreamwerx.net/forums/viewtopic.php?t=6
This code has served millions of binary objects (pics, files, etc)
for me with no problems.. good luck.
On Tue, 1 Jan 2002, Braulio wrote:
What is the best method to use to include pictures in tables? I am using
PHP. I have several
I usually use ps.setBytes() and pass it a byte[] array ..
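The same idea in Python's DB-API (sqlite3 here): bind the raw bytes as a parameter and the driver handles the escaping, just like ps.setBytes():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pics (id INTEGER PRIMARY KEY, img BLOB)")

payload = b"\x89PNG\r\n\x1a\n" + b"\x00\x01\x02" * 5  # fake image bytes

# parameter binding: no manual escaping of the binary data
conn.execute("INSERT INTO pics (img) VALUES (?)", (payload,))
stored = conn.execute("SELECT img FROM pics WHERE id = 1").fetchone()[0]
```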
On Wed, 22 Oct 2003, Scott Purcell wrote:
Hello,
I have opted to insert some small jpg files into the mysql database using java.
Code below.
When I do a select from the table using the mysql command line, it generates
pages of
Checkout http://php.dreamwerx.net/forums/viewtopic.php?t=6
For a PHP example you could easily convert to Perl, or just install the PHP
standalone binary on the box.
On Fri, 3 Oct 2003, Zafar wrote:
Hello
Having trouble inserting images into a BLOB column. No problems doing
this 'one at a time'
Might instead want to look at
where fooid in (xx, xx, xx, xx)
On Sat, 4 Oct 2003, Marc Slemko wrote:
If I do a query such as:
SELECT * from foo where fooid = 10 or fooid = 20 or fooid = 03 ...
with a total of around 1900 or fooid = parts on a given table with 500k
rows, it takes about
Any mysql encryption functions would be done server side of course before
putting it into the database.. I'd just incorporate a de/encryption
scheme into your client app, and insert as a standard BLOB string to the remote
server.
On Sat, 4 Oct 2003, sian_choon wrote:
Hi,
I have the question
Might try mytop (search google for it) .. jeremy z wrote it.. it works
well for realtime monitoring..
On Tue, 23 Sep 2003, John May wrote:
Is there any way to monitor which databases are being used the most
heavily on a MySQL server? Thanks for any info!
- John
--
Most likely you'd need to do some datatype mapping changes to the
script...
Everyone I know who's had to do this has typically used something like
sqlyog (search google) and used the ODBC import capability to transfer
data from MSSQL to MySQL..
On Thu, 18 Sep 2003, Tormod Halvorsen wrote:
Hi
I'm pretty sure you need to sync the entire database (all tables) to all
slaves before starting replication.. Your servers are technically
already out of sync..
And no wonder it crashes; tables are missing in its view.. You need to
hit the initial replication setup manual pages..
On
From mysql manual:
If you want to create the resulting file on some other host than the
server host, you can't use SELECT ... INTO OUTFILE. In this case you
should instead use some client program like mysqldump --tab or mysql -e
"SELECT ..." > outfile to generate the file.
On Fri, 12 Sep 2003,
Are you running linux and is it SMP? Kernel version plz..
On Wed, 10 Sep 2003, Dathan Vance Pattishall wrote:
/usr/local/mysql/bin/perror 127
Error code 127: Unknown error 127
127 = Record-file is crashed
I've been getting this allot lately from mysql-3.23.54-57.
Things that are not
Most likely it's the 4GB OS limitation... My suggestion is to create a
new table using mysql's built-in raid option... span the table over
multiple files to allow for much larger table growth...
Then migrate all the rows over to the new spanned table..
On Thu, 4 Sep 2003, Keith C. Ivey wrote:
The email_body one (the
one with the problem) has only 2:
an ID autonumber field, and a text field.
Perhaps there is some bug/limitation in Mysql whereby a field can only have so
much size ??
--
Keith Bussey
Wisol, Inc.
Chief Technology Manager
(514) 398-9994 ext.225
Quoting Colbey
On Thu, 4 Sep 2003, Keith Bussey wrote:
Running that shows me the following:
mysql SHOW TABLE STATUS FROM email_tracking LIKE 'email_body_old';
I'm not too familiar with this.. someone else today used the value 50,
when in fact based on their avg_row_length being reported as:
Avg_row_length: 2257832
Your average row length is reported as:
Avg_row_length = 20564
From: http://www.mysql.com/doc/en/CREATE_TABLE.html
AVG_ROW_LENGTH
I'd be willing to bet that if you implement Serializable, serialize it, and dump
it to a binary column (blob).. you should be able to restore...
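The Python analogue of that serialize-to-blob round trip (the original question was about Java's Serializable; pickle plays the same role here, and the table name is made up):

```python
import pickle
import sqlite3

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, data BLOB)")

# serialize the object to bytes and dump it into the blob column
conn.execute("INSERT INTO objects (data) VALUES (?)",
             (pickle.dumps(Point(3, 4)),))

# ...later: pull the blob back out and restore the object
blob = conn.execute("SELECT data FROM objects WHERE id = 1").fetchone()[0]
restored = pickle.loads(blob)
```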
On Fri, 29 Aug 2003, Dennis Knol wrote:
Hello,
Is it possible to store Java objects in the mysql database?
Kind regards,
Dennis
We use a point to point VPN between server sites for this... so the
security/encryption is totally transparent to mysql, it's just connecting
to an IP address on tcp/3306 and the vpn appliances down the line deal
with all the data security...
There are cheaper solutions such as using freeswan,
There are several people with this auto-responder crap going on.. all of
their emails should be de-listed in my opinion..
Here's my sendmail block list thus far:
[EMAIL PROTECTED]:/] cat /etc/mail/access | grep notify
[EMAIL PROTECTED] REJECT #notify
[EMAIL PROTECTED] REJECT #notify
I like using either raid 0+1.. it really cooks, or if you can't spare the
disks, raid 1... Something pushing that many queries should probably
be protected from disk failure.
On Wed, 20 Aug 2003, Jackson Miller wrote:
I am setting up a dedicated MySQL server with some pretty heavy usage.
It depends if you had any kind of query logging enabled (binary or text)
.. If you started safe_mysqld with -l (that's text logging of queries) ..
or configured my.cnf with log-bin (that's binary logging)..
You should be able to pipe/patch the logs against the database and let it
run all the
I'm sure there's gonna be some file locking issues.. If you're just trying to
get some scalability, you might want to look at replication instead, with the
SAN hosting 1 copy of the database for each server..
On Tue, 19 Aug 2003, Scott Pippin wrote:
I would like to set up a round robin cluster with
Depends on db size... kinda risky putting it in memory if it's being
updated and power goes bye-bye..
You should be able to get a lot more performance just tuning my.cnf for a
larger memory box..
On Mon, 18 Aug 2003, Creigh Shank wrote:
Have a very large database and due to performance
If you can post your current my.cnf + box configuration I'm sure we can
come up with some suggestions..
On Mon, 18 Aug 2003, Creigh Shank wrote:
How would I tune my.cnf for a larger memory box? (Running on UPS;
production machine(s) will go into Co-Lo with UPS and generator.) I
realize
Checkout: http://php.dreamwerx.net/forums/viewtopic.php?t=6
It's got streaming code..
What I do is if the type is unknown I always send application/octet-stream
and the browser usually identifies it..
Or check the file extension for the file in the database, and apply
content type based on
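That extension-to-content-type lookup with the octet-stream fallback is a one-liner in most languages; a Python sketch:

```python
import mimetypes

def content_type(filename):
    """Guess from the extension; fall back to application/octet-stream."""
    guessed, _encoding = mimetypes.guess_type(filename)
    return guessed or "application/octet-stream"

print(content_type("report.pdf"))   # a known extension maps to its MIME type
print(content_type("blob.xyz123"))  # unknown falls back to octet-stream
```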
Best guess: you need some database maintenance..
OPTIMIZE TABLE blah1, blah2, etc..
On Mon, 18 Aug 2003, Vinod Bhaskar wrote:
Hi Friends,
In my linux server (LOCAL), accessing data through the PHP scripts from
MySQL (3.23.41)tables, it is taking more time. Earlier it was very fast.
Possible?
Are you sure all connection attempts fail? not just insert attempts?
Server B does some updates/deletes... Chances are this causes some table
locks, which makes Server A unable to perform its inserts until Server B
releases the lock.
On Wed, 13 Aug 2003, Keith Bussey wrote:
Hi,
Consider using freeswan (http://www.freeswan.ca) to set up a VPN between
the 2 servers.. that way you can replicate between tunnel addresses..
Or you can spend some cash and buy some vpn appliances..
On Thu, 7 Aug 2003, System wrote:
Hello All,
How will i setup Mysql Replication btween two
On Wed, 13 Aug 2003, Keith Bussey wrote:
Are you sure all connection attempts fail? not just insert attempts?
Yes, i have it write to my log if the sock is empty (mysql_connect
fails)...before it gets to the insert
But you mention mysql reports no connection errors... perhaps wait for an
I'd double check this cronjob script... possible scenario.. kibosh the idea
if you can prove it's invalid.
ServerB has a script that runs every 20 minutes, which does a very
quick/simple select from DB1, then loops though the results and does
updates/deletes on a different database server.
The fact that you have several million rows may indicate that you
have an I/O problem, not CPU.. do some benchmarking, and perhaps the
solution is going to be (if not already) SCSI drives, or some kind of raid
configuration (I recommend raid 0+1).
Or if you want to keep costs low.. perhaps using
I'd cross post to the mysql-java/jdbc mailing list... Most likely you
need to modify mysql config to allow larger packet sizes.. search the
list archive/website for max_allowed_packet info..
On Fri, 8 Aug 2003, Ma Mei wrote:
Dear administrator,
Now I have a quesion and want to get your help.
I'm not sure you'd want to do it that way... Perhaps 5+ replicated boxes from
a master that share the queries equally (hardware load balancer).. Might
be cheaper in hardware than buying some heavy horsepower box..
On Thu, 31 Jul 2003, NEWMEDIAPLAN wrote:
Can mysql handle 5000 concurrent
It's like .bash_history.. command history (used with the up/back keys mostly).
If you want to get rid of it for good: ln -sf /dev/null ~/.mysql_history
On Thu, 31 Jul 2003, Jean Hagen wrote:
We're just getting started with MySQL on Linux; I was browsing my home
directory and found a file called
Hopefully jeremyz will chime in.. he's probably hit it before ;)
On Thu, 31 Jul 2003, NEWMEDIAPLAN wrote:
I was considering different boxes. But I'm courious to know if
anyone here knows the possibility we have with mysql... just to foresee the
crash.
Just a software matter assuming we
Also keep in mind.. even if, for example, PHP was faster with certain
functions, that time plus the time to pull the data from mysql and set
it up for manipulation in PHP could be more than asking mysql to do all
the work and just return a small resultset..
Just use a simple timer class to
I use an abstraction layer between mysql and the application's db calls to
handle this.. typically it's only enabled in development to work out any bugs..
Might want to look at something like that..
On Thu, 24 Jul 2003, Miguel Perez wrote:
Hi:
I have a question: does anyone know if exists a log
First, search the list for binary storage to read about the pros/cons (it's been
debated before).
If you still want to do it, either use LOAD_FILE() or a loader
application...
I don't recommend using longblob to store big files.. Use the 64k blob
and inode/chunk the data.. better for streaming it out if that's
If you actually get binary data loaded into your table, DO NOT run a
select * from table command.. It will stream out all the data and it's
not pretty..
instead run a select col1, col2, length(binarycol) ..
On Tue, 22 Jul 2003, Steven Wu wrote:
Hi Jeremy D. Zawodny:
I did not get any
Keep in mind you need 2 things to happen:
A) The mysql server has to be bound/listening on the IP/interface you need
it to be. This is typically configured in the my.cnf configuration file.
B) You need to ensure the mysql privileges will allow access from other
hostnames/IP addresses.
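For (A), that's the bind-address line in my.cnf (the address below is a placeholder); for (B), a GRANT like `GRANT ALL ON mydb.* TO 'appuser'@'192.168.1.%' IDENTIFIED BY '...';` covers a whole subnet:

```ini
[mysqld]
# listen on the LAN interface instead of only 127.0.0.1
bind-address = 192.168.1.10
# and make sure skip-networking is NOT set
```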
If mysql
If you have a compiler.. probably any version you want... I'm pretty sure
I've compiled mysql on an older qube in the past (took forever)..
Cobalt may have binary distribution packages available for download.. I'd
check their site first.. may save you some time...
On Thu, 17 Jul 2003, Clint
Welcome to the world of JDBC ... mysql calls it Connector/J @
http://www.mysql.com/products/connector-j/index.html
On Thu, 17 Jul 2003, [iso-8859-1] kalika patil wrote:
Hello
I want to know if there is java API to mySQL like its available for C and C++.
Bye
Kalika
Suggestion.. make a small script called closeall.php .. basically it has
some code to force-close the opened mysql connections (be sure to run a
close for all opened handles).
I have seen some sites' code that actually opens multiple connections to the
same database..
Add this file into php.ini
On Thu, 17 Jul 2003, Eben Goodman wrote:
This comment confuses me:
I have seen some sites' code that actually opens multiple connections to the
same database..
I have worked on some larger sites that 30+ past and current
developers worked on.. Some good, some terrible.. The code typically
Those are php notices (not warnings or errors).. change your ini error
reporting settings.. you can either disable those messages or correctly
initialize all variables before trying to use them.
I'd suggest posting your msg to the PHP list..
On Wed, 16 Jul 2003, Prabu Subroto wrote:
Dear my
It says it lost the connection during the query.. but I'd be suspicious whether
it even successfully connected/authenticated..
I'd do a quick check first to ensure your mysql server is listening on the
192.xxx interface.. if it's unix: netstat -atn
On Thu, 17 Jul 2003, zafar rizvi wrote:
hi
I am running one
This is kinda offtopic.. it depends on what frontend you are using to
access mysql (php, java, perl, etc)..
You just need to pull the binary data and output it with the correct http
headers and it will show up in a browser.. search the list for more
info. www.php4.com has an example using php..
Lord yea... don't get me wrong, it would be nice.. but I'd start with say
1GB based on what you're doing now.. just make sure the server has lots of
slots open for future upgrades if required (don't let them stick in 256MB
sticks taking up 4 slots).. use larger size sticks to keep slots open..
take a look at:
http://www.php4.com/forums/viewtopic.php?t=6
or search the mailing list archive.. there are plenty of threads talking
about this:
For list archives: http://lists.mysql.com/mysql
On Wed, 9 Jul 2003, Dan Anderson wrote:
Can anyone point me to a reference on how to insert
Avoid NAS... that's like dealing with mysql via NFS ...
Internal raid, depending on controller type and disk configuration, could be
faster than an external SAN... But chances are an external SAN has a lot more
scalability as far as adding more controllers, cabinets and disks...
If it was my choice
I just assumed your question was for mysql data only..
If you want the total picture, I use:
OS - raid 1 (2 X 18.2GB - 10kRPM)
DATA - raid 0+1 (# X 18.2GB - 15kRPM)
Usually a dataset is comprised of 6-10 disks.. you could go larger with
the drive size.. but more spindles = more throughput.
Swap isn't much of
My favourite for dedicated db servers (good amount of data, but a ton of
access, queries/sec) is raid 0+1 ... fast access, but requires a good number of
disks..
Be sure to use a good raid controller, multiple channels for the disks if
possible..
On Sat, 5 Jul 2003, Jim McAtee wrote:
What
Just shut down mysql, move the data and create a symlink.. or start
safe_mysqld with the --datadir= option.
On Mon, 30 Jun 2003, Claudio Alonso wrote:
Hi, I'd like to know if there's a way to change where the datadir is
located.
I've installed mysql in a Sun Solaris, in the /usr/local/mysql
If you want to store images in the database, use a blob column type.. And
take a look at this example a lot of people have based mysql binary
storage off: http://www.php4.com/forums/viewtopic.php?t=6
good luck
On Sun, 29 Jun 2003, Digital Directory USA wrote:
I am new to php and mysql, I have
Shouldn't be a problem.. if you already have a private link between sites
you're set; if not, drop in a vpn solution to ensure end-to-end security..
The only main difference between your situation and most (all servers are
feet apart at LAN speed) is the WAN throughput/etc.. Mysql should keep
I'd instead set up a 2nd backup server that's a slave to the master,
replicates all the time, and keeps in sync.
At X time, stop replication/mysql, back up the data to tape.. restart mysql
and it will catch up/re-sync back to the master..
On Thu, 26 Jun 2003, SAQIB wrote:
mysqlhotcopy does your locking
I'm guessing blob data? ~1500MB / 400 rows = ~3.75MB/row
On Fri, 6 Jun 2003, Jeremy Zawodny wrote:
On Fri, Jun 06, 2003 at 09:36:08AM +0200, H M Kunzmann wrote:
Hi all.
I am running RH9.0 with MySQL 4.0.13
I am trying to create a fulltext index on a 1.5GB table with 400
So you have no redundancy? 5 arrays of raid 0 (2 disks each) = lose a
disk and you're pooched..
Suggestion:
reconfigure to raid 0+1 (more than 2 disks a set) for added perf?
On Fri, 6 Jun 2003, Sam Jumper wrote:
What steps can be taken to speed up queries that show state=copy to tmp
table in
I've used a simple shell script in the past.. run from cron to do it...
I just saw someone posted a perl solution.. I've used a php one as well..
#!/bin/sh
# DB OPTIMIZE SCRIPT - !! WARNING, DOES TABLE LOCKING DURING OPTIMIZES
user=root
pass=secret
host=10.1.1.1
db=mydb
[EMAIL PROTECTED]
Search the mailing list archives for this... There is a link to this
article:
http://www.php4.com/forums/viewtopic.php?t=6
I wonder: if the mailing list search was powered by google, would more people
use it?
On Tue, 27 May 2003, Thomas Hoelsken wrote:
Hi,
I would like to fill an Blob with
Mascon can do it .. it's a win32 app..
On Tue, 27 May 2003, Thomas Hoelsken wrote:
Hi,
isn't there any other solution instead of using PHP just for filling an
Blob!?
I don't need php and would prefer any other way!
Thanks,
Thomas
-Original Message-
From: [EMAIL PROTECTED]
Depending on the size of the data there are a few different methods... Do
what most people do.. use plain insert statements with the data properly
escaped, and you shouldn't have any problem going in.
Pulling data out is pretty quick .. I can stream binary data out of my
mysql storage servers via our