> Max Nitribitt wrote:
> 
> I'm trying to create a secondary index on a table
> filled with about 5 600 000 000 rows. There are about
> 1 000 000 000 distinct values in the indexed column.
> This task is running for more than 24 hours now.
> It's my first try, so I would like to ask you if this
> is still in the range that you would expect? How could
> I speed this up (this is a task I will have to perform
> frequently)?
> 

To increase index creation performance, you can tune a few database
parameters.

1. DATA_CACHE should be as big as possible to minimize disk I/O.

2. Increase the parameter _IDXFILE_LIST_SIZE.
Possible values are 0 (=Off), 128, 256, 512, 1024, 2048, 4096, 8192.
This parameter influences the create index algorithm: it defines the maximum
number of temporary buffers used for the merge algorithm. A larger number of
buffers decreases the number of merge steps and thus increases performance.
But DATA_CACHE must be at least three times larger than _IDXFILE_LIST_SIZE.

3. Spread your database over several disks to increase performance by
parallel I/O.

You can set database parameters via dbmgui or dbmcli. 
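
With dbmcli, a session along these lines would do it. This is only a sketch:
MYDB and the dbm,dbm logon are placeholders, the values are just examples, and
the exact DBM commands and parameter units can differ between releases, so
please check the dbmcli help and the parameter documentation of your version
first.

  # connect as the DBM operator and raise the data cache
  # (on the releases I have seen, the value is given in 8 KB pages)
  dbmcli -d MYDB -u dbm,dbm param_directput DATA_CACHE 500000

  # allow 2048 temporary merge buffers; DATA_CACHE must then be at least
  # 3 * 2048 = 6144 pages, which the value above easily covers
  dbmcli -d MYDB -u dbm,dbm param_directput _IDXFILE_LIST_SIZE 2048

If your release does not know param_directput, the param_startsession /
param_put / param_commitsession sequence achieves the same. Note that such
parameter changes normally take effect only after the next restart of the
database.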

HTH,
Holger

SAP Labs Berlin
