[ http://issues.apache.org/jira/browse/DERBY-802?page=comments#action_12421121 ]
            
Fernanda Pizzorno commented on DERBY-802:
-----------------------------------------

I have looked at the patch (derby-802.diff), and I have a few 
comments/questions.

In ScrollInsensitiveResultSet you are building an array (origPos) that maps 
each position in the projected row to the corresponding position in the 
original row.

+                       // Array of original position in row
+                       final int[] origPos = new int[newRowData.length];
+                       
+                       for (int i=0; i<origPos.length; i++) {
+                               origPos[i] = -1 ; 
+                       }
+
+                       // Make the origPos, by brute-force comparison of identity:
+                       for (int i=0; i<newRowData.length; i++) {
+                               for (int j=0; j<rowData.length; j++) {
+                                       if (rowData[j]==newRowData[i]) {
+                                               origPos[i] = j;
+                                               break;
+                                       }
+                               }
+                       }

[...]

+                       for (int i=0; i<origPos.length; i++) {
+                               if (origPos[i]>=0) {
+                                       rowData[origPos[i]] = backedData[i];
+                               }
+                       }

ProjectRestrictResultSet already contains a similar array (projectMapping) to 
the one you are building here (origPos). Wouldn't it be better to obtain this 
array from ProjectRestrictResultSet, based on "projectMapping", instead of 
using brute-force comparison of identity?

Suggestion:

ScrollInsensitiveResultSet:
+                       // Array of original position in row
+                       int[] origPos = ((ProjectRestrictResultSet)source).getBaseProjectMapping();

[...]

+                       for (int i=0; i<origPos.length; i++) {
+                               // projectMapping positions are 1-based; entries
+                               // <= 0 mark columns with no source column.
+                               if (origPos[i] > 0) {
+                                       rowData[origPos[i] - 1] = backedData[i];
+                               }
+                       }

ProjectRestrictResultSet:
+    public int[] getBaseProjectMapping() {
+        int[] result;
+        if (source instanceof ProjectRestrictResultSet) {
+            result = new int[projectMapping.length];
+            ProjectRestrictResultSet prs = (ProjectRestrictResultSet) source;
+            int[] sourceMap = prs.getBaseProjectMapping();
+            for (int i=0; i<projectMapping.length; i++) {
+                if (projectMapping[i] > 0) {
+                    result[i] = sourceMap[projectMapping[i] - 1];
+                } else {
+                    // Propagate "no source column" markers unchanged, so
+                    // callers can test for entries > 0.
+                    result[i] = projectMapping[i];
+                }
+            }
+        } else {
+            result = projectMapping;
+        }
+        return result;
+    }
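
For illustration, the composition of two nested mappings would work roughly 
like this (a standalone sketch only, not Derby code; it assumes, as the 
snippets above suggest, that projectMapping holds 1-based column positions and 
that non-positive entries mark generated columns with no source column, which 
is also why rowData is indexed with origPos[i] - 1 above):

    // Standalone sketch: composing two nested 1-based projection mappings.
    // Non-positive entries mark columns with no underlying source column.
    public class ProjectionMappingDemo {
        /** Compose an outer 1-based mapping over an inner one. */
        static int[] compose(int[] outer, int[] inner) {
            int[] result = new int[outer.length];
            for (int i = 0; i < outer.length; i++) {
                // Propagate "no source column" markers unchanged.
                result[i] = (outer[i] > 0) ? inner[outer[i] - 1] : outer[i];
            }
            return result;
        }

        public static void main(String[] args) {
            // Inner projection keeps base columns (3, 1); the outer
            // projection keeps inner column 2 plus one generated column.
            int[] inner = { 3, 1 };
            int[] outer = { 2, -1 };
            // Prints [1, -1]: outer column 1 maps back to base column 1,
            // and the generated column maps to nothing.
            System.out.println(
                java.util.Arrays.toString(compose(outer, inner)));
        }
    }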

I have also looked into the tests added in DERBY-1477 and found that neither 
of them uses a projection. Since part of the changes made in DERBY-802 are for 
cases where you have a projection, it would be nice if you could add some 
tests where a projection is used.
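
As a rough illustration of the kind of test meant here (a sketch only, reusing 
the db_file table from the repro below; the surrounding test harness and exact 
query are assumptions):

    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ScrollProjectionSketch {
        // Scrolls backwards over a projected query, so that a
        // ProjectRestrictResultSet sits between the table scan and the
        // ScrollInsensitiveResultSet.
        static void scrollWithProjection(Connection conn) throws Exception {
            Statement s = conn.createStatement(
                    ResultSet.TYPE_SCROLL_INSENSITIVE,
                    ResultSet.CONCUR_READ_ONLY);
            // Selecting a column subset plus an expression forces a
            // projection on top of the scan.
            ResultSet rs = s.executeQuery(
                    "SELECT file, LENGTH(name) FROM db_file");
            rs.afterLast();
            byte[] buf = new byte[1024];
            while (rs.previous()) {
                InputStream in = rs.getBinaryStream(1);
                while (in.read(buf) != -1) {
                    // Drain the blob; a real test would verify the bytes.
                }
                in.close();
            }
            rs.close();
            s.close();
        }
    }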




> OutofMemory Error when reading large blob when statement type is 
> ResultSet.TYPE_SCROLL_INSENSITIVE
> --------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-802
>                 URL: http://issues.apache.org/jira/browse/DERBY-802
>             Project: Derby
>          Issue Type: Bug
>          Components: JDBC
>    Affects Versions: 10.0.2.0, 10.0.2.1, 10.1.1.0, 10.1.1.1, 10.1.1.2, 
> 10.1.2.0, 10.1.2.1, 10.2.0.0, 10.1.3.0, 10.1.2.2, 10.0.2.2
>         Environment: all
>            Reporter: Sunitha Kambhampati
>         Assigned To: Andreas Korneliussen
>            Priority: Minor
>         Attachments: derby-802.diff, derby-802.stat, derby-802v2.diff
>
>
> Grégoire Dubois reported this problem on the list. From his mail: the 
> reproduction is attached below. 
> When the statement type is set to ResultSet.TYPE_SCROLL_INSENSITIVE, an 
> OutOfMemoryError is thrown when reading large blobs. 
> import java.sql.*;
> import java.io.*;
>
> /**
>  * @author greg
>  */
> public class derby_filewrite_fileread {
>
>     private static File file = new File("/mnt/BigDisk/Clips/BabyMamaDrama-JShin.wmv");
>     private static File destinationFile = new File("/home/greg/DerbyDatabase/"+file.getName());
>
>     /** Creates a new instance of derby_filewrite_fileread */
>     public derby_filewrite_fileread() {
>     }
>
>     public static void main(String args[]) {
>         try {
>             Class.forName("org.apache.derby.jdbc.EmbeddedDriver").newInstance();
>             Connection connection = DriverManager.getConnection("jdbc:derby:/home/greg/DerbyDatabase/BigFileTestDB;create=true", "APP", "");
>             connection.setAutoCommit(false);
>
>             Statement statement = connection.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
>             ResultSet result = statement.executeQuery("SELECT TABLENAME FROM SYS.SYSTABLES");
>
>             // Create table if it doesn't already exist.
>             boolean exist=false;
>             while ( result.next() ) {
>                 if ("db_file".equalsIgnoreCase(result.getString(1)))
>                     exist=true;
>             }
>             if ( !exist ) {
>                 System.out.println("Create table db_file.");
>                 statement.execute("CREATE TABLE db_file ("+
>                                   "     name          VARCHAR(40),"+
>                                   "     file          BLOB(2G) NOT NULL)");
>                 connection.commit();
>             }
>
>             // Read file from disk, write on DB.
>             System.out.println("1 - Read file from disk, write on DB.");
>             PreparedStatement preparedStatement=connection.prepareStatement("INSERT INTO db_file(name,file) VALUES (?,?)");
>             FileInputStream fileInputStream = new FileInputStream(file);
>             preparedStatement.setString(1, file.getName());
>             preparedStatement.setBinaryStream(2, fileInputStream, (int)file.length());
>             preparedStatement.execute();
>             connection.commit();
>             System.out.println("2 - END OF Read file from disk, write on DB.");
>
>             // Read file from DB, and write on disk.
>             System.out.println("3 - Read file from DB, and write on disk.");
>             result = statement.executeQuery("SELECT file FROM db_file WHERE name='"+file.getName()+"'");
>             byte[] buffer = new byte [1024];
>             result.next();
>             BufferedInputStream inputStream=new BufferedInputStream(result.getBinaryStream(1),1024);
>             FileOutputStream outputStream = new FileOutputStream(destinationFile);
>             int readBytes = 0;
>             while (readBytes!=-1) {
>                 readBytes=inputStream.read(buffer,0,buffer.length);
>                 if ( readBytes != -1 )
>                     outputStream.write(buffer, 0, readBytes);
>             }
>             inputStream.close();
>             outputStream.close();
>             System.out.println("4 - END OF Read file from DB, and write on disk.");
>         }
>         catch (Exception e) {
>             e.printStackTrace(System.err);
>         }
>     }
> }
> It returns
> 1 - Read file from disk, write on DB.
> 2 - END OF Read file from disk, write on DB.
> 3 - Read file from DB, and write on disk.
> java.lang.OutOfMemoryError
> if the file is ~10MB or more
