I am not aware of _any_ "malloc problem" in an SMP environment.  I
run all sorts of codes on my quad xeon.  you _can_ break anything
if you write sloppy code.  i.e., if someone says that their code is
spending 50% of total time in malloc/free, I'd respond that they
ought to find a more efficient way of doing whatever they are
doing.

Done right, linux SMP works like a charm.  Done wrong, _any_
SMP machine can perform like a dog.  Done wrong, any UP machine
can perform like a dog...

Just write reasonable code and it will _fly_ under linux SMP.


Robert Hyatt                    Computer and Information Sciences
[EMAIL PROTECTED]               University of Alabama at Birmingham
(205) 934-2213                  115A Campbell Hall, UAB Station 
(205) 934-5473 FAX              Birmingham, AL 35294-1170

On Tue, 10 Aug 1999, Shane Miller wrote:

> Vincent:
> 
> Dammit! I just ordered the MB + CPUs ... My coding environment
> will be in linux but i guess i can run the executable on NT. I've
> heard that NT works well for 2-processor systems (but it does not
> scale well over two on the other hand).
> 
> one statement and six questions for you:
> 
> 1. on NT, multiple threads tend to *slow* performance when
>    a lot of memory allocation takes place. so much so, that
>    i've seen at least one ad selling a special malloc lib
>    for multi-threaded systems. i'm picking up a lot of
>    frustration from your reply but apparently NT sucks less
>    than linux under these loads...
> 2. do you know why GCC malloc/free is slow? can a person link
>    against another, better library? (e.g. a la "debug malloc" style).
> 3. is GCC merely exaggerating an underlying O/S problem or is
>    GCC malloc/free *the* problem?
> 4. Who has had better SMP experience with respect to
>    memory management? on HPUX? solaris?
> 5. How did you deal with swap on your book-sized memory job?
> 6. Does linux support the concept of swap zones ala NeXT?
> 7. In my code, I have many smallish objects. Memory pools made
>    a huge difference. I wonder if these could help you out.
> 
> Thanks-
> Shane           
> 
> > 
> > don't do that in linux.
> > gcc library sucks bigtime.
> > can crash after some hours.
> > 
> > It does for my prog, which does bigtime alloc/deallocate
> > when reading a book (about a gigabyte): it reads into
> > a buffer, then merges. during the merge all kinds of
> > things need to be allocated, then deallocated afterward.
> > 
> > Under NT no problem.
> > Under 95 ==> 3 times slower than NT
> > under 98 ==> crash, big bugs in caching in 98
> > under linux ==> slow slow slow ==> crash after 12 hours.
> > 
> > Greetings,
> > Vincent
> > 
> > At 08:13 AM 8/10/99 -0700, you wrote:
> > >Sir;
> > >
> > >i am beginning development of a CAD like program. it will be
> > >threaded and multi-processor capable (well thanks to SMP O/S
> > >like linux).
> > >
> > >as to linux's memory management, suppose a program has
> > >30 threads. each thread is allocating and deallocating memory.
> > >
> > >does linux block the other 29 threads when thread <X> wants
> > >to malloc/free? how about other threads/processes in the process
> > >table? 
> > >
> > >in a regular, single-processor system, my original C++ program
> > >spent 50% of its time in malloc/free. i reduced this time very
> > >significantly by implementing memory pools. the basic algorithm is fairly
> > >easy to split up across threads. hence linux SMP for more performance.
> > >but i am wondering if i might be walking into a "technical snare"
> > >with respect to linux and memory management.
> > >
> > >best regards,
> > >Shane
> > >-
> > >Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/mentre/smp-faq/
> > >To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]
> > >
> > >
> > 
> 
> 
