Pawel (privately) wrote:
> Sure - as I said, it might be that the way your data has expanded has simply caused the sort to run out of memory, rather than there being any leaks. Changing to the watson allocator (have you tried this?) could make a huge difference, because it knows how to coalesce lots of mallocs back together when they are freed.

Hi Mike, hi Jim,

I more or less understood which kind of workaround Jim is suggesting. The problem is officially reported via CSHD and we are waiting. Nothing is confirmed; it has happened many times that I was mistaken or made wrong assumptions/suggestions.

> Yes - eventually, even the most efficient system will run out of space.

I just wonder now how it can happen that we did not add new environments, nor release significant changes to the software (as far as I know), and yet memory finally ran out. Perhaps growing data volumes can explain that? We regularly refresh the test areas from the LIVE system, so the SSELECTs are launched on bigger and bigger files.

> To be honest, a big issue here is the whole design thing. This stuff should be running via message queues, distributing the work piece by piece, rather than running huge serial selects, but there isn't much to be done about that.

> Yes - the illogic of not installing patches for known bugs is endemic in the industry. "How do we know it won't break something else?" is the cry, but the answer is: "This is rare, but what you do know 100% absolutely is that you have bugs on your system which you are subjecting yourself to; are you crazy?"

We are running T24 R06004 with many patches installed. The bank decided not to install the T24 Service Packs, and in my opinion that was a bad decision. So it happens that we keep discovering already-discovered and already-patched things, which is not funny at all :(

> Did you try changing that field to be left justified?

1. SSELECTs are performed by core and (unfortunately) by local developments. The select I have shown (on watson) was done on the FBNK.ACCOUNT.DEBIT.INT table. It is a regular J4 file, not a distributed file.
We tried SSELECTs on a few tables and the effect was the same.

> It would be anyway, with the requisite indexes.

2. I generally agree, but your statement is true only when you do not use the Oracle driver. I think that SELECT/READNEXT combinations should be avoided. Perhaps in the future Temenos will optimise SELECT <file> / SSELECT <file>, and getting keys on jBASE will be as fast as on Oracle?

> A query-based select will never beat a good hand-crafted data selector. You should always write these in jBC with SELECT and READNEXT, and either use the built-in indexing or some application-specific stuff of your own. All 'queries' should be disk bound, but otherwise instant. The select engine in 4.1 and above can get close to parity if you have created the right indexes, but if your data gets too big, then this can be moot. Hand code, always, no questions. The COB runs every day that the bank exists, and if it takes you three weeks to write the same query in jBC (it won't), then the overall savings in time and money are too good to turn down.

I am not an expert, but I believe that they have room for improvement, and that it will be easier to tune commands like SELECT than to tune internal SELECT/READNEXT. Perhaps in jBASE 6 a SELECT on a distributed file will launch multiple threads and the part files will be scanned in parallel? It is just a silly example, because I do not expect such improvements (who SELECTs distributed files?). By the way: we do not run on Oracle, but we would like to keep portability.

> Such attributes do not transfer between processes ;-). The problem with the COB idea of multi-processing is that it isn't really multi-processing when everything is waiting for one SELECT. A SELECT-READNEXT could start queuing with the first result - however, a clever B correlative could be made to do the same thing.

3. Which kinds of memory issues? Either memory is correctly retained or it is not. We try not to do stupid things, and we use multiagent jobs as much as we can.
We try not to create large transactions, and we hunt down the old local jobs that do. I personally think that memory must somehow be given back even if one of the jobs is stupid. Why should job #2 be limited by a previously executed, stupid job? I would expect that if job #1 allocates 300 MB, job #2 would still start with (more or less) the same initial memory utilisation.

> Different thing altogether :-)

4. R06, I have seen that. Did they introduce that because memory leaked, or because the internal SELECT is faster? I am joking, of course. We did not test SELECT much, because we saw that memory utilisation during the COB increases after SSELECTs are run. Perhaps SELECT also leaks something.

> Depends a little on how the query runs, but if the result of the SSELECT is just the keys, then the memory used at the end will be the size of the key set. However, if you are asking for data columns as well, then you might find the memory requirements are greater, because it will avoid re-reading the file if it can, and things like that.

5. ? Hmmm... You need to first collect all keys and store them in memory somehow, then sort them with some algorithm, right? The sorting process may require more memory than just the size of the keys, but this memory should be given back to the process once the SSELECT is completed.

> Can't be certain yet. The memory allocation tables have a huge effect on this, and this is one of the big advantages of watson, which I hope you are at least going to try as soon as possible on your test machine. I have explained this in the past, but the amount of memory that is in use isn't always the size of the heap. Suppose I allocate 5 bytes, then I allocate 50 MB, then I allocate 10 bytes, then I free the 50 MB. The 50 MB becomes available to future mallocs, but the 10 bytes are still there, so we have a big gap in the heap. Now, even the current malloc algorithm has guards against simplistic things like this example, but watson is much better at dealing with the whole problem.
> Problems also occur with patterns such as allocating and reallocating a growing buffer: eventually you free the old buffer but need a new, bigger one, and you can never reuse the previous memory, so you run out not so much of memory as of places to put your huge buffer. This is the kind of thing that watson is much better at dealing with, and it is the kind of thing going on in your SSELECT.

I think that it is not (fully?) in jBASE 4.1.5.17.

> Please try watson and tell us the results on your test system.

Let's leave this subject until it is confirmed :) As I said in the beginning, I have been mistaken many times.

> Jim

Kind regards,
Pawel

On 6-02-2009 at 1:47 mike ryder wrote:

--~--~---------~--~----~------------~-------~--~----~
Please read the posting guidelines at: http://groups.google.com/group/jBASE/web/Posting%20Guidelines
IMPORTANT: Type T24: at the start of the subject line for questions specific to Globus/T24
To post, send email to [email protected]