Hi,
You can submit a PR for your idea, but likely with a configuration setting 
for controlling this behavior (like oracle.reader_fetch_rows), disabled by 
default (0 or not set => no change to the reader fetch size). There is a 
strong trend toward reducing memory needs, so enabling by default a feature 
which increases memory consumption would likely not be accepted.

Since loading hundreds of rows is not a typical usage for an ORM, maybe it 
is not worth it, in which case you would have to stick to your overrides.

On Wednesday, April 6, 2022 at 11:44:48 AM UTC+2 David_L wrote:

> I have faced a similar problem to the one in 
> https://groups.google.com/g/nhusers/c/XLDf_muFWMc/m/IRHDsfcAcOkJ. 
> A large number of rows combined with a big amount of data per row reduces 
> the loading performance of queries in NHibernate using Oracle.
>
> Fabio Maulo suggested 
> https://groups.google.com/g/nhusers/c/XLDf_muFWMc/m/9QL5gRczcvcJ. But 
> that solution is not query specific, as the FetchSize is again set to a 
> constant value. It would be better to set the FetchSize after creating 
> the DataReader, once the RowSize has been calculated.
>
> Here is my solution (I use Oracle.ManagedDataAccess):
>
>     using System.Data.Common;
>     using NHibernate;
>     using NHibernate.AdoNet;
>     using NHibernate.Driver;
>     using Oracle.ManagedDataAccess.Client;
>
>     public class MyOracleManagedDataClientDriver
>         : OracleManagedDataClientDriver, IEmbeddedBatcherFactoryProvider
>     {
>         // Tell NHibernate to use the custom batcher below.
>         System.Type IEmbeddedBatcherFactoryProvider.BatcherFactoryClass =>
>             typeof(MyOracleDataClientBatchingBatcherFactory);
>     }
>
>     public class MyOracleDataClientBatchingBatcherFactory : IBatcherFactory
>     {
>         public IBatcher CreateBatcher(ConnectionManager connectionManager,
>             IInterceptor interceptor)
>         {
>             return new MyOracleDataClientBatchingBatcher(
>                 connectionManager, interceptor);
>         }
>     }
>
>     public class MyOracleDataClientBatchingBatcher
>         : OracleDataClientBatchingBatcher
>     {
>         public MyOracleDataClientBatchingBatcher(
>             ConnectionManager connectionManager, IInterceptor interceptor)
>             : base(connectionManager, interceptor) { }
>
>         public override DbDataReader ExecuteReader(DbCommand cmd)
>         {
>             var reader = base.ExecuteReader(cmd);
>             if (reader is OracleDataReader odr)
>             {
>                 // FetchSize and RowSize are both in bytes; size the buffer
>                 // to hold about 1000 rows per round trip.
>                 var newSize = odr.RowSize * 1000; // PrefetchRows = 1000
>                 if (odr.FetchSize < newSize)
>                     odr.FetchSize = newSize;
>             }
>             return reader;
>         }
>     }
>
> I simply create a derived Batcher that overrides ExecuteReader() and sets 
> the FetchSize to hold 1000 rows. In most cases this creates a fetch buffer 
> that is not too big but ideal for fetching exactly 1000 rows per round 
> trip. The prefetch row count is fixed at 1000 in my solution, but it could 
> be made configurable in the future.
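>
> To use it, the driver is registered like any custom driver. A minimal 
> sketch, assuming code-based configuration (the standard 
> connection.driver_class property works the same way in XML config):
>
>     using NHibernate.Cfg;
>
>     var cfg = new Configuration();
>     // The custom batcher is picked up automatically through
>     // IEmbeddedBatcherFactoryProvider on the driver.
>     cfg.SetProperty(Environment.ConnectionDriver,
>         typeof(MyOracleManagedDataClientDriver).AssemblyQualifiedName);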
>
> I had the problem of selecting 40,000 rows from a table that has a 
> RowSize of 70KB. The default FetchSize of Oracle is a very conservative 
> 128KB, which holds only a single 70KB row, so every row is fetched in its 
> own round trip. It took 450s to get all rows. With my optimization the 
> retrieval time is reduced to 20s.
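>
> The back-of-the-envelope arithmetic behind these numbers (approximate, 
> just to illustrate the round-trip counts):
>
>     const long rowSize = 70 * 1024;          // 70KB per row
>     const long defaultFetch = 128 * 1024;    // Oracle's default FetchSize
>     const long tunedFetch = rowSize * 1000;  // ~70MB buffer
>     const long rows = 40_000;
>
>     var tripsDefault = rows / (defaultFetch / rowSize); // 40,000 trips
>     var tripsTuned = rows / (tunedFetch / rowSize);     // 40 trips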
>
> Wouldn't it be a good solution to adopt this in the standard 
> OracleDataClientBatchingBatcher? It boosted the performance of almost all 
> my queries. Wouldn't it be good for all users of NHibernate and Oracle to 
> have this boost?
>
