Re: Reading blobs larger than a certain size fails with memory error
Thank you very much. It works! Greg

On Sunday 08 January 2006 15:21 -0800, Daniel John Debrunner wrote: Grégoire Dubois wrote: Here is some sample code. Thanks for the repro. If you change the statement type to ResultSet.TYPE_FORWARD_ONLY from ResultSet.TYPE_SCROLL_INSENSITIVE then the problem does not exist. So it's a bug, but there is a workaround. Dan.
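A minimal sketch of the workaround Dan describes, for reference. The db_file table name and the streaming loop are assumptions (any Connection to the affected database would do); the point is only the TYPE_FORWARD_ONLY statement type:

```java
import java.sql.*;

public class ForwardOnlyBlobRead {

    // Read BLOBs through a forward-only result set. Per the thread,
    // TYPE_SCROLL_INSENSITIVE triggers the memory error on large blobs;
    // TYPE_FORWARD_ONLY does not.
    static void readBlobs(Connection con)
            throws SQLException, java.io.IOException {
        Statement st = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,   // instead of TYPE_SCROLL_INSENSITIVE
                ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = st.executeQuery("SELECT file FROM db_file");
        while (rs.next()) {
            // Stream the blob rather than materializing it in memory.
            java.io.InputStream in = rs.getBinaryStream(1);
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) {
                // process chunk
            }
            in.close();
        }
        rs.close();
        st.close();
    }
}
```

The trade-off is that a forward-only result set cannot be repositioned with absolute() or previous(), which is usually acceptable for a one-pass read of large blobs.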
SELECT very slow when BLOB(2G)
Hi all,

Here is the select I do on the following table. If file is a BLOB(2G), the request is very, very slow (30-60s), even if there is only one row in the table. But if I replace BLOB(2G) by BLOB(5M) or BLOB(1G), the request becomes very fast. Is there a reason? Is there a workaround? Thank you. Best regards.

SELECT DISTINCT db_file.ID, db_file.name, db_file.reference, db_file.hash
FROM db_file ORDER BY db_file.name;

CREATE TABLE db_file (
    ID INT GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
    file BLOB(2G) NOT NULL,
    name VARCHAR(256) NOT NULL,
    hash VARCHAR(40) NOT NULL,
    size INT NOT NULL,
    reference VARCHAR(32) NOT NULL,
    PRIMARY KEY (ID));

CREATE INDEX db_file_name_index ON db_file (name);
CREATE UNIQUE INDEX db_file_hash_index ON db_file (hash);
CREATE INDEX db_file_reference_index ON db_file (reference);
Re: Selecting first N rows of a result set
Bernt M. Johnsen wrote: Hi, please post your SQL and JDBC code. That might make it possible for us to see why you have performance problems.

The SQL is very simple:

SELECT * FROM system_log_view

(although there may be optional search and ordering criteria as well, as specified by the user, but the query in this form is the one that takes a long time). I construct the query on the fly with holes for any extra criteria, create a PreparedStatement and then execute it using the values supplied by the user for any criteria. Without criteria, there are about 6000 records selected. The view is a bit of a pig, due to date formatting and so on. The date formatting is done so that users can refine the search by specifying '20-Dec' or some such.

CREATE TABLE month_names (
    monthno SMALLINT NOT NULL PRIMARY KEY,
    monthname CHAR(3) NOT NULL );

INSERT INTO month_names VALUES
    (1,'Jan'), (2,'Feb'), (3,'Mar'), (4,'Apr'), (5,'May'), (6,'Jun'),
    (7,'Jul'), (8,'Aug'), (9,'Sep'), (10,'Oct'), (11,'Nov'), (12,'Dec');

CREATE VIEW system_log_view AS
    SELECT DISTINCT
        RTRIM(CHAR(DAY(time))) || '-' || monthname || '-' ||
        RTRIM(CHAR(YEAR(time))) || ' ' || TIME(time) AS x_time,
        facility, event, details,
        CASE WHEN system_log.username IS NULL THEN '(system)'
             ELSE surname || ', ' || initials END AS name,
        system_log.username AS username,
        module, test, time, id
    FROM system_log, users, month_names
    WHERE (system_log.username IS NULL OR system_log.username=users.username)
      AND monthno=MONTH(time);

I then want to be able to display pages of 20 rows at a time, which I do by using res.absolute(n) to go to the first row of the page and then iterating with res.next() 20 times.

Generally in JDBC, the way to limit the number of rows returned from a query is stmt.setMaxRows(N).

[EMAIL PROTECTED] wrote (2005-12-15 14:46:13): I know this has been asked before, but I haven't been able to locate the answer, so can anyone supply me with at least a link to it?
I have a table (currently about 5000 rows, each fairly large) and I select some or all of those (by default, all of them, and I can then refine it down), but I display the selected rows in pages of 20 at a time. It's getting horribly slow now the table has grown, and I wonder if selecting just 20 rows rather than a result set of 5000 will give me more acceptable performance...
--
John English | mailto:[EMAIL PROTECTED]
Senior Lecturer | http://www.it.bton.ac.uk/staff/je
School of Computing MIS | Those who don't know their history
University of Brighton | are condemned to relive it (Santayana)
--
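The setMaxRows(N) suggestion above can be combined with the existing 20-rows-per-page scheme. A hedged sketch (the system_log_view name comes from the post; the Connection and the exact ORDER BY are assumptions): capping the result set at the last row of the requested page means Derby never has to materialize all ~6000 rows behind the scroll.

```java
import java.sql.*;

public class PageWindow {

    // First row (1-based) of page 'page' when showing 'pageSize' rows per page.
    static int firstRowOfPage(int page, int pageSize) {
        return (page - 1) * pageSize + 1;
    }

    // Sketch: fetch only up to the end of the requested page.
    static void showPage(Connection con, int page) throws SQLException {
        int pageSize = 20;
        int first = firstRowOfPage(page, pageSize);
        Statement st = con.createStatement();
        // setMaxRows caps the total rows the driver will ever return.
        st.setMaxRows(first + pageSize - 1);
        ResultSet rs = st.executeQuery(
                "SELECT * FROM system_log_view ORDER BY time");
        // Skip to the first row of the page, then read pageSize rows.
        for (int i = 1; i < first; i++) rs.next();
        for (int i = 0; i < pageSize && rs.next(); i++) {
            // render row
        }
        rs.close();
        st.close();
    }

    public static void main(String[] args) {
        System.out.println(firstRowOfPage(3, 20)); // page 3 starts at row 41
    }
}
```

Deep pages still pay for skipping earlier rows, but early pages (the common case) become cheap; a WHERE clause on an indexed column is the usual next step when that isn't enough.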
Re: Reading blobs larger than a certain size fails with memory error
Daniel John Debrunner wrote: So it's a bug, but there is a workaround. I have opened Jira entry DERBY-802 for this. Thanks, Sunitha.
Re: SELECT very slow when BLOB(2G)
I don't read the blob in my request (the blob is db_file.file, and it isn't used in the select):

SELECT DISTINCT db_file.ID, db_file.name, db_file.reference, db_file.hash
FROM db_file ORDER BY db_file.name;

And the slowness of the request isn't related to the data filled in the blob; it is only related to the definition of the table. If I define file BLOB(2G) NOT NULL in my table, the request will be slow. If I define file BLOB(5M) NOT NULL in my table, the request will be fast. If I define file BLOB(1G) NOT NULL in my table, the request will also be fast.

On Monday 09 January 2006 09:27 -0800, Sunitha Kambhampati wrote: Grégoire Dubois wrote: Hi all, here is the select I do on the following table. If file is a BLOB(2G), the request is very, very slow (30-60s), even if there is only one row in the table. But if I replace BLOB(2G) by BLOB(5M) or BLOB(1G), the request becomes very fast. Is there a reason?

If you are retrieving a blob of 2G-1 size, the time taken will be more than if you are retrieving a blob of, let's say, 5M, because you are reading more data from disk in the case of the 2G blob than the 5M blob. But if the data in the file that you insert in the BLOB(2G) column and the BLOB(5M) column is the same, then I wouldn't expect the response time to vary. Can you please confirm specifically what size of data you are inserting into the BLOB(2G) and BLOB(5M) columns where you are seeing the difference? Thanks, Sunitha.

Is there a workaround? Thank you. Best regards.
SELECT DISTINCT db_file.ID, db_file.name, db_file.reference, db_file.hash
FROM db_file ORDER BY db_file.name;

CREATE TABLE db_file (
    ID INT GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
    file BLOB(2G) NOT NULL,
    name VARCHAR(256) NOT NULL,
    hash VARCHAR(40) NOT NULL,
    size INT NOT NULL,
    reference VARCHAR(32) NOT NULL,
    PRIMARY KEY (ID));

CREATE INDEX db_file_name_index ON db_file (name);
CREATE UNIQUE INDEX db_file_hash_index ON db_file (hash);
CREATE INDEX db_file_reference_index ON db_file (reference);
Re: What is the performance difference between one index per column and one index over multiple columns?
As with everything in databases, performance depends on what you're doing. (What follows is general database information, not Derby specific.) If you have a query like:

select bannerbannerid from SSiteRequest
where BANNERBANNERID=6 and WEBSITESWEBSITEID=10 and USERSUSERID=1

then the composite index will probably be faster. However, if your statements are like:

select bannerbannerid from ssiterequest where useruserid=3

then the second index scheme will work better. In fact, in this example, the first index couldn't be used at all, because USERSUSERID is not the first field in the index. (Which isn't always strictly true anymore; I know Oracle can use secondary fields like a primary field in some situations.)

ry

Hi. Thank you very much for reading my post. Can you please explain the differences between one index on a single column and one index over multiple columns, both from a performance view and in technical terms? Example:

create index Index1 on SSiteRequest (BANNERBANNERID, WEBSITESWEBSITEID, USERSUSERID);

versus:

create index Index13 on SSiteRequest (USERSUSERID);
create index Index12 on SSiteRequest (WEBSITESWEBSITEID);
create index Index1 on SSiteRequest (BANNERBANNERID);

Do these two kinds differ from each other? Which one will have better performance?
RE: Problems opening DB on HP Tandem
Thanks for getting back to me. Yes, HP does indeed have a JVM at 1.4.2, but unfortunately we are not really close to upgrading yet - different department altogether... :( And yes, I did get the same error when trying to create the db. I will probably wait until the IT group can upgrade the box to 1.4.x. I am using it on a WinXP box with no issues (same JVM version - 1.3.x). Thanks anyway.

From: Rajesh Kartha [mailto:[EMAIL PROTECTED]] Sent: Mon 1/9/2006 3:03 PM To: Derby Discussion Subject: Re: Problems opening DB on HP Tandem

George, Kenneth V [NTK] wrote: I am not sure this went through back to the group, so here it is again for those who did not receive it. Any more assistance on this would be much appreciated. Thanks.

From: George, Kenneth V [NTK] Sent: Thu 1/5/2006 3:19 PM To: Derby Discussion Subject: RE: Problems opening DB on HP Tandem

Ok, here are the results:

/usr/tandem/web_applications/ctc/ctc83/fm/db: /usr/home/ken/ij.sh
ij version 10.1
ij> connect 'jdbc:derby:filemanager';
ERROR XJ040: Failed to start database 'filemanager', see the next exception for details.
ERROR XJ001: Java exception: 'implement interface error : java.lang.AbstractMethodError'.
ij> connect 'jdbc:derby:/usr/tandem/web_applications/ctc/ctc83/fm/db/filemanager';
ERROR XJ040: Failed to start database '/usr/tandem/web_applications/ctc/ctc83/fm/db/filemanager', see the next exception for details.
ERROR XJ001: Java exception: 'implement interface error : java.lang.AbstractMethodError'.
ij>

I tried it both ways - in the directory where the DB lives, and specifying the entire path - same result.
___
From the above, it is clear you cannot boot your existing database. I assume you would see the same error during database creation on the same platform (is that true? The url to create a db would be 'jdbc:derby:adb;create=true'). Not sure, but the above exception then could be JVM related. Is there any way that the above can be tried with jdk 1.4.1? (HP has a jdk 1.4.1 for NonStop: http://h20223.www2.hp.com/NonStopComputing/cache/82889-0-0-0-121.html)
Regards,
Rajesh
Re: How can I get the current date in a SQL statement?
Legolas Woodland wrote: Hi. Thank you for reading my post. How can I get the current date in Derby SQL? Something like Date()?

http://db.apache.org/derby/docs/10.1/ref/rrefsqlj34177.html

thanks, bryan
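For the archive, the page linked above covers CURRENT_DATE. A small illustration (the system_log table name here is just a hypothetical example, not from that thread):

```sql
-- Evaluate an expression directly in ij:
VALUES CURRENT_DATE;

-- Derby also provides CURRENT_TIME and CURRENT_TIMESTAMP.
-- Typical use inside a statement:
SELECT * FROM system_log WHERE DATE(time) = CURRENT_DATE;
```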
Derby creates a LOCK on my table and does not release it - what is my mistake?
Hi. Thank you for reading my post. I have an update statement which I execute over a table in my database. Now when I execute the update statement from my application (even after I exit the method that updates the table), I cannot use a select statement over the same table. Here is my update statement (the archive has stripped the string quotes, but the code concatenates the two values into the SQL):

String sql = "insert into FILTERED_BANNER (BANNERBANNERID, WEBSITESWEBSITEID) values ("
        + extractBannerID(Test) + "," + getDropDown1().getSelected().toString() + ")";
st.executeUpdate(sql);
con.commit();
con.close();

st and con are defined in the constructor; the connection is not auto-commit, and con comes from a datasource. Here is the Derby message after execution of the select statement:

select * from FILTERED_BANNER;

org.apache.derby.client.am.SqlException: A lock could not be obtained within the time requested
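A common cause of this symptom is a code path where executeUpdate throws (or the method returns early) before commit runs, leaving the transaction open and the row locks held. A hedged sketch of a safer shape, reusing the poster's FILTERED_BANNER table (the parameter types are assumptions):

```java
import java.sql.*;

public class FilteredBannerInsert {

    static final String SQL =
        "INSERT INTO FILTERED_BANNER (BANNERBANNERID, WEBSITESWEBSITEID) VALUES (?, ?)";

    // Parameters replace the string concatenation, and the transaction is
    // always ended (releasing the locks) even when executeUpdate throws.
    static void insert(Connection con, int bannerId, int websiteId) throws SQLException {
        PreparedStatement ps = con.prepareStatement(SQL);
        try {
            ps.setInt(1, bannerId);
            ps.setInt(2, websiteId);
            ps.executeUpdate();
            con.commit();       // releases the locks held by the insert
        } catch (SQLException e) {
            con.rollback();     // also releases the locks
            throw e;
        } finally {
            ps.close();
            con.close();        // returns the connection to the pool
        }
    }
}
```

With a pool, close() typically hands the connection back rather than destroying it, so an uncommitted transaction on a pooled connection can keep its locks alive long after the method exits - exactly the behavior described in the post.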
When we run Derby, the console window shows some connection numbers - what are they?
Hi. Thank you for reading my post. I defined a datasource and connection pooling in my web application. The maximum size of the pool is 32. After I deploy and undeploy the application a few times in the development environment, the Derby console window shows some very high connection numbers after starting, for example:

Connection number: 96.
Connection number: 97.
Connection number: 98.
Connection number: 99.
Connection number: 100.
Connection number: 101.
Connection number: 102.

Does this mean that I have 102 connections open to my database? If so, what is the connection pool's job?
Re: SELECT very slow when BLOB(2G)
Ok, I must have a problem, as I seem to have the same problem with MySQL. Thanks for your help. Best regards.

On Monday 09 January 2006 20:02 +0100, Grégoire Dubois wrote: I don't read the blob in my request (the blob is db_file.file, and it isn't used in the select): SELECT DISTINCT db_file.ID, db_file.name, db_file.reference, db_file.hash FROM db_file ORDER BY db_file.name; And the slowness of the request isn't related to the data filled in the blob; it is only related to the definition of the table. If I define file BLOB(2G) NOT NULL in my table, the request will be slow. If I define file BLOB(5M) NOT NULL in my table, the request will be fast. If I define file BLOB(1G) NOT NULL in my table, the request will also be fast.

On Monday 09 January 2006 09:27 -0800, Sunitha Kambhampati wrote: Grégoire Dubois wrote: Hi all, here is the select I do on the following table. If file is a BLOB(2G), the request is very, very slow (30-60s), even if there is only one row in the table. But if I replace BLOB(2G) by BLOB(5M) or BLOB(1G), the request becomes very fast. Is there a reason?

If you are retrieving a blob of 2G-1 size, the time taken will be more than if you are retrieving a blob of, let's say, 5M, because you are reading more data from disk in the case of the 2G blob than the 5M blob. But if the data in the file that you insert in the BLOB(2G) column and the BLOB(5M) column is the same, then I wouldn't expect the response time to vary. Can you please confirm specifically what size of data you are inserting into the BLOB(2G) and BLOB(5M) columns where you are seeing the difference? Thanks, Sunitha.

Is there a workaround? Thank you. Best regards.