[
https://issues.apache.org/jira/browse/DERBY-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rick Hillegas updated DERBY-3155:
---------------------------------
Attachment: derby-3155-03-ae-backingStoreHashtableWithRowLocation.diff
Attaching derby-3155-03-ae-backingStoreHashtableWithRowLocation.diff. This
patch adds support for BackingStoreHashtables which include a RowLocation for
each row. I am running tests now.
This patch is intended not to degrade the performance of the existing code paths
for BackingStoreHashtables which don't include RowLocations.
BackingStoreHashtables which do include RowLocations may incur some extra
performance overhead.
I have hand-tested the changes by running some ad-hoc tests which use in-memory
hash tables and by running SpillHashTest. I ran these tests with the current
patch and with a dummy version which forced all BackingStoreHashtables to
include RowLocation information.
This patch makes the following changes:
-------------------------
A java/engine/org/apache/derby/iapi/types/LocatedRow.java
1) Introduces a new class, LocatedRow, which is basically a struct containing
an array of column values followed by a RowLocation field.
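For illustration only, the shape of such a class might look roughly like the
following. This is my own minimal sketch based on the description above, not
the contents of the patch; the accessor names are illustrative.

    package org.apache.derby.iapi.types;

    /**
     * Sketch of the struct described above: an array of column values plus
     * the RowLocation of the row they came from.
     */
    public class LocatedRow
    {
        private final DataValueDescriptor[] _columnValues;
        private final RowLocation _rowLocation;

        public LocatedRow( DataValueDescriptor[] columnValues, RowLocation rowLocation )
        {
            _columnValues = columnValues;
            _rowLocation = rowLocation;
        }

        public DataValueDescriptor[] columnValues() { return _columnValues; }
        public RowLocation rowLocation() { return _rowLocation; }
    }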
-------------------------
M java/engine/org/apache/derby/iapi/store/access/BackingStoreHashtable.java
M java/engine/org/apache/derby/impl/store/access/BackingStoreHashTableFromScan.java
2) Makes a number of changes to BackingStoreHashtable:
a) Introduces a new public method, includeRowLocations(). By default this
returns false. However, the BackingStoreHashTableFromScan subclass overrides
this method and may return true, depending on its constructor args.
b) Changes the signature of add_row_to_hash_table() and putRow() to include a
RowLocation arg. This arg will be null when includeRowLocations() returns false.
c) When includeRowLocations() returns false, the behavior of the class is
pretty much unchanged. That is, the in-memory hash table continues to contain
DataValueDescriptor[] rows or buckets (lists) of those rows. If the hash table
spills to disk, DataValueDescriptor[] rows are written to disk. When they are
read back in, they continue to be either standalone DataValueDescriptor[] rows
or buckets (lists) of those rows.
d) When includeRowLocations() returns true, the in-memory hash table contains
LocatedRows and buckets (lists) of LocatedRows. If the hash table spills to
disk, DataValueDescriptor[] rows are written to disk; the last cell of these
rows is the RowLocation. When they are read back in, they are re-packaged as
LocatedRows or buckets of LocatedRows. (A rough sketch of this wrap/flatten
shape appears right after item e below.)
e) The memory usage methods have been adjusted to account for the extra
overhead when a LocatedRow is used instead of a plain DataValueDescriptor[].
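As a rough illustration of the wrap/flatten shape described in (c) and (d),
here is a sketch of my own. It assumes the hypothetical LocatedRow accessors
sketched under item 1 and relies on the fact that RowLocation is itself a
DataValueDescriptor; it is not code from the patch.

    import org.apache.derby.iapi.types.DataValueDescriptor;
    import org.apache.derby.iapi.types.LocatedRow;
    import org.apache.derby.iapi.types.RowLocation;

    final class RowShapeSketch
    {
        // In memory: wrap the columns and the RowLocation together only when
        // row locations were requested; otherwise keep the legacy representation.
        static Object inMemoryRow( boolean includeRowLocations,
                                   DataValueDescriptor[] columns,
                                   RowLocation rowLocation )
        {
            return includeRowLocations ? new LocatedRow( columns, rowLocation ) : columns;
        }

        // On spill: flatten a LocatedRow into a DataValueDescriptor[] whose last
        // cell is the RowLocation, so only plain DataValueDescriptor[] rows are
        // ever written to disk.
        static DataValueDescriptor[] flattenForDisk( LocatedRow row )
        {
            DataValueDescriptor[] columns = row.columnValues();
            DataValueDescriptor[] flattened = new DataValueDescriptor[ columns.length + 1 ];
            System.arraycopy( columns, 0, flattened, 0, columns.length );
            flattened[ columns.length ] = row.rowLocation();
            return flattened;
        }
    }

Reading rows back in just reverses flattenForDisk(): the trailing cell is
peeled off and the row is re-packaged as a LocatedRow.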
-------------------------
M java/engine/org/apache/derby/impl/store/access/conglomerate/GenericScanController.java
M java/engine/org/apache/derby/impl/store/access/btree/BTreeForwardScan.java
M java/engine/org/apache/derby/impl/store/access/heap/HeapScan.java
3) Store changes to account for the new RowLocation arg added to the signatures
of add_row_to_hash_table() and putRow(). A new method was added to HeapScan for
constructing RowLocations when necessary.
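The call shape in the scan loops then looks roughly like this. This is a
hypothetical sketch which assumes the new putRow()/includeRowLocations()
signatures described in item 2; currentLocation stands in for the result of
the new HeapScan method, whose real name and signature are not shown in this
message.

    import org.apache.derby.iapi.error.StandardException;
    import org.apache.derby.iapi.store.access.BackingStoreHashtable;
    import org.apache.derby.iapi.types.DataValueDescriptor;
    import org.apache.derby.iapi.types.RowLocation;

    final class ScanLoopSketch
    {
        // Pass a RowLocation to putRow() only when the hash table asked for
        // one; otherwise pass null, preserving the old code path.
        static void addFetchedRow( BackingStoreHashtable hashTable,
                                   DataValueDescriptor[] fetchedRow,
                                   RowLocation currentLocation )
            throws StandardException
        {
            RowLocation loc = hashTable.includeRowLocations() ? currentLocation : null;
            hashTable.putRow( false, fetchedRow, loc );
        }
    }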
-------------------------
M java/storeless/org/apache/derby/impl/storeless/NoOpTransaction.java
M java/engine/org/apache/derby/iapi/store/access/TransactionController.java
M java/engine/org/apache/derby/impl/store/access/RAMTransaction.java
M java/testing/org/apache/derbyTesting/unitTests/store/T_QualifierTest.java
4) TransactionController was adjusted to account for the new constructor arg
for BackingStoreHashTableFromScan.
-------------------------
M java/engine/org/apache/derby/iapi/store/build.xml
5) The build target for this package was changed to uncomment the lint
diagnostic for unchecked casts.
-------------------------
M java/engine/org/apache/derby/impl/sql/execute/DistinctScanResultSet.java
M java/engine/org/apache/derby/impl/sql/execute/HashScanResultSet.java
M java/engine/org/apache/derby/impl/sql/execute/NoPutResultSetImpl.java
M java/engine/org/apache/derby/impl/sql/execute/ScrollInsensitiveResultSet.java
M java/engine/org/apache/derby/impl/sql/execute/UpdateResultSet.java
6) ResultSets in the execution layer have been changed to account for the new
constructor arg of BackingStoreHashTableFromScan. Changes have also been made
to account for the fact that the hash table can now return LocatedRows or
buckets of LocatedRows.
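For example, a consumer of the hash table now has to cope with an entry that
may be a DataValueDescriptor[], a LocatedRow, or a bucket (list) of either.
A hypothetical unwrapping helper might look like the following; it assumes
the LocatedRow accessors sketched under item 1 and is not code from the patch.

    import java.util.List;

    import org.apache.derby.iapi.types.DataValueDescriptor;
    import org.apache.derby.iapi.types.LocatedRow;

    final class HashEntrySketch
    {
        // Extract the column values from a hash-table entry, whether or not
        // the entry carries a RowLocation and whether or not it is a bucket.
        static DataValueDescriptor[] columnsOf( Object entry, int positionInBucket )
        {
            Object row = ( entry instanceof List )
                ? ((List<?>) entry).get( positionInBucket )
                : entry;

            return ( row instanceof LocatedRow )
                ? ((LocatedRow) row).columnValues()     // new, location-bearing shape
                : (DataValueDescriptor[]) row;          // unchanged legacy shape
        }
    }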
> Support for SQL:2003 MERGE statement
> ------------------------------------
>
> Key: DERBY-3155
> URL: https://issues.apache.org/jira/browse/DERBY-3155
> Project: Derby
> Issue Type: Improvement
> Components: SQL
> Reporter: Trejkaz
> Assignee: Rick Hillegas
> Labels: derby_triage10_10
> Attachments: derby-3155-01-ac-grammar.diff,
> derby-3155-02-ag-fixParserWarning.diff,
> derby-3155-03-ae-backingStoreHashtableWithRowLocation.diff,
> MergeStatement.html, MergeStatement.html, MergeStatement.html
>
>
> A relatively common piece of logic in a database application is to check for
> a row's existence and then either update or insert depending on its existence.
> SQL:2003 added a MERGE statement to perform this operation. It looks like
> this:
> MERGE INTO table_name USING table_name ON (condition)
> WHEN MATCHED THEN UPDATE SET column1 = value1 [, column2 = value2 ...]
> WHEN NOT MATCHED THEN INSERT column1 [, column2 ...]
>     VALUES (value1 [, value2 ...])
> At the moment, the only workaround for this would be to write a stored
> procedure to do the same operation, or to implement the logic client-side.
--
This message was sent by Atlassian JIRA
(v6.1#6144)