Juraj Trenkler wrote:
OpenSUSE10, MySQL 4.1.13, Notebook HP nx6110, 512 MB RAM, 1 GB swap. OOoB680
m_5.
I have a problem importing a MySQL table from one odb file to another odb file with an
embedded database. The table is 270 kB, with 2,800 records and 10 fields. When I
try "copy", soffice quickly uses all of the CPU, RAM and swap fill up, and
after 20-30 minutes nothing has happened. Only the disk keeps running and
running...
Is this a known bug or something else?
juraj
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
Juraj,
Is this a copy as in you drag and drop the table from one file to
the other?
You may very well want to consider opening an Issue on this, if one is
not already there.
After reading your post I tried a small test here. My setup is not quite
identical:
WinXP SP2, MySQL 5.0.18, HP a810n, 640 MB RAM (512 MB to the OS, 128 MB
to video) and OO 2.0.2
I moved two tables via drag and drop from a Base (MySQL) file to a Base
(HSQL) file.
The first table is 4 columns and 4,047 records. Storage space in MySQL
is 160kb.
This table copied over in less than 1 minute. Not stellar, but one can
live with it.
The second table is 21 columns (mostly numerics) and 85,978 records.
Storage space in MySQL is 6.5 Meg.
The table copied over, taking over 11 minutes, and yes, the CPU was pegged
during the whole transfer.
In other words, the rest of the system pretty much becomes useless - a
forced coffee break from work.
Now, I am wondering how much of this is because of the auto-commit
status of the Base (HSQL) file?
I moved the larger table a second time, with the Task Manager
(Performance) dialog open, watching how the memory rose and fell during
the transfer.
Small chunks (4 K) would be allocated and then released; this happened a
number of times. Then a large chunk (48 K) was allocated and held. In
other words, memory usage did not grow in a steady, linear fashion, but
rather in a stair-step fashion.
Don't know if that means anything - perhaps it's just a red herring - but
it may be that memory allocation is the bottleneck?
The developers would be the ones to address that question.
But that doesn't help you, does it.
It might be that transferring the records via a Basic macro, using an
isolated connection to the HSQL base file, would help.
You could move, say, 100 (500, 1,000, whatever) records at a time and then
do a commit on the batch. This should go faster.
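The batching idea above can be sketched in plain Python, with sqlite3 standing in for the embedded HSQL file (the actual macro would be OpenOffice Basic against the Base connection, and the table and column names here are made up for illustration). The point is the commit pattern: one commit per batch instead of the suspected per-record auto-commit.

```python
import sqlite3

def copy_in_batches(src_rows, dest_conn, batch_size=500):
    """Insert rows into the destination, committing once per batch
    rather than once per record."""
    cur = dest_conn.cursor()
    pending = 0
    for row in src_rows:
        # Hypothetical target table/columns, for illustration only.
        cur.execute("INSERT INTO target (id, name) VALUES (?, ?)", row)
        pending += 1
        if pending >= batch_size:
            dest_conn.commit()   # one commit per batch, not per record
            pending = 0
    if pending:
        dest_conn.commit()       # flush the final partial batch

# Demo: an in-memory database stands in for the HSQL base file.
dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE target (id INTEGER, name TEXT)")
dest.commit()

rows = [(i, "row%d" % i) for i in range(2800)]  # same record count as Juraj's table
copy_in_batches(rows, dest, batch_size=500)
count = dest.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)
```

The same structure carries over to Basic: open the isolated connection, turn auto-commit off, and call commit() on the connection object every N records.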
I think a quick Wait command might help free up the CPU also, but that
is pure conjecture on my part at the moment.
(This should also matter more with WinXP than Linux, I would think -
cooperative multitasking and all that.)
Anyhow, today is a short day for me.
I can put together a quick test of the basic macro idea later, on my system.
I already have code that does an import for an app at a client site,
but from a csv datasource to an HSQL base file.
It won't take much to alter it to grab the data from MySQL instead. I
will let you know how it affects performance when I am done.
If it makes a big difference you are welcome to it. You should be able to
adjust it to your tables fairly easily.
Drew