hi all,

There are a few unanswered questions/clarifications.

As I understand:

1. A GPU is not an alternative to a CPU (I don't know how someone got confused 
there).
2. A GPU is an additional unit, like a math coprocessor, so it cannot be 
considered part of the core OS. The kernel is already bloated with so many 
additional things to do beyond its core functionality; if we also add load 
balancing and management of GPU threads, it is going to be overkill. Ideally 
the kernel should be doing only scheduling (remember the famous argument 
between Linus and Tanenbaum long ago).
3. In Linux, threads are inherently processes, so in a sense we are working 
with multiple processes.
4. The GPU does not distribute the logic. To solve a problem using the GPU, it 
needs to be broken down into parallel threads of execution. Therefore, if 
applications are not written with the GPU in foresight, they will not gain the 
speed-up factor; on the contrary, we might lose more execution cycles to 
overhead.
5. It is not only the Government of India; many enterprises also need it. For 
example, I remember PARAM was used for weather forecasting; other uses can be 
data mining, etc.
6. "Supercomputer" and "cluster" are now used interchangeably, but design-wise 
they are different (e.g. the Cray series of supercomputers). If you keep 
teraflops as the count, a cluster may give an impression of better performance, 
but inherently it requires the problem to be broken down and parallelised 
before computing.

I built a cluster for CDAC Noida for their teaching needs in 2003, based on 
OSCAR:

http://svn.oscar.openclustergroup.org/trac/oscar

I also showcased the improved results using LINPACK. It was made of old Pentium 
boxes and used PXE boot. Though it was not a supercomputer, such clusters are 
considered supercomputers, and many enterprises build them; that is the reason 
you see so many people doing supercomputers.
7. Now, in the era of multicore and coprocessors like the GPU, the scaling can 
be better, but first we have to break down the problem to identify bottlenecks 
and parallel threads of execution. It needs a lot of testing and benchmarking 
to see the result.
8. There are many things in the kernel, like NUMA support, which do more for 
performance than an additional processing unit would.
9. I would like to know how and where the GPU is used in software routers, as 
mentioned in the write-up. This would be interesting, as software routers 
nowadays use virtualised environments to get better performance. Will using a 
GPU be better than a virtual environment?
10. Also, in packet processing we do a lot of parallel processing, and network 
processors handle many of these things in the hardware itself. How is the GPU 
better than those?

There are a lot of things; I could go on and on, but I think I will stop now :)

Regards,
Abhishek Kumar

 

> Date: Sun, 8 May 2011 23:56:30 +0530
> From: [email protected]
> To: [email protected]
> CC: [email protected]; [email protected]
> Subject: Re: [ilugd] KGPU
> 
> I have done GPU programming, when I was at CDAC. The performance of the ISRO
> GPU-based supercomputer is only theoretical. I asked Dr V C V Rao, when I was
> at CDAC, about the performance of GPUs on current CPUs and the hardware
> support. He told me that what GPUs do is create a number of internal
> parallel threads to solve the calculation and give the response immediately.
> But their performance is still hampered by the speed of the interconnect,
> the CPUs dividing the job, and the type of job to be divided. The real test
> of the performance of a supercomputer can be done by running the LINPACK
> benchmark. Let's see how much performance this supercomputer gives on that
> benchmark. Damn, the fastest in India is at 47th rank (God, it was at 11th
> rank when I saw it last time). What is happening to this world? Everybody is
> making supercomputers. Well, why would the Government of India care? They
> don't need a supercomputer, as they don't have any use for it.
> 
> On Sun, May 8, 2011 at 11:08 PM, Niyaz <[email protected]> wrote:
> 
> >
> >
> > -----Original Message-----
> > From: A. Mani <[email protected]>
> > Sent: Sunday, May 08, 2011 20:29
> > To: [email protected]; [email protected]
> > Subject: [ilugd] KGPU
> >
> > From http://code.google.com/p/kgpu/
> >
> > KGPU is a GPU computing framework for the Linux kernel. It allows
> > Linux kernel to call CUDA programs running on GPUs directly. The
> > motivation is to augment operating systems with GPUs so that not only
> > userspace applications but also the operating system itself can
> > benefit from GPU acceleration. It can also free the CPU from some
> > computation intensive work by enabling the GPU as an extra computing
> > device.
> >
> > Modern GPUs can be used for more than just graphics processing; they
> > can run general-purpose programs as well. While not well-suited to all
> > types of programs, they excel on code that can make use of their high
> > degree of parallelism. Most uses of so-called ``General Purpose GPU''
> > (GPGPU) computation have been outside the realm of systems software.
> > However, recent work on software routers and encrypted network
> > connections has given examples of how GPGPUs can be applied to tasks
> > more traditionally within the realm of operating systems. These uses
> > are only scratching the surface. Other examples of system-level tasks
> > that can take advantage of GPUs include general cryptography, pattern
> > matching, program analysis, and acceleration of basic commonly-used
> > algorithms; we give more details in our whitepaper. These tasks have
> > applications on the desktop, on the server, and in the datacenter.
> >
> > The current KGPU release includes a demo of GPU augmentation: a
> > GPU-accelerated AES cipher, which can be used in conjunction with the
> > eCryptfs encrypted filesystem. This enables read/write bandwidths for
> > an encrypted filesystem that can reach a factor of 3x ~ 4x improvement
> > over an optimized CPU implementation (using a GTX 480 GPU).
> >
> > KGPU is a project of the Flux Research Group at the University of
> > Utah. It is supported by NVIDIA through a graduate fellowship awarded
> > to Weibin Sun.
> >
> > We have a short whitepaper describing the motivation and design of KGPU.
> >
> > The idea behind KGPU is to treat the GPU as a computing co-processor
> > for the operating system, enabling data-parallel computation inside
> > the Linux kernel. This allows us to use SIMD (or SIMT in CUDA) style
> > code to accelerate Linux kernel functionality, and to bring new
> > functionality formerly considered too compute intensive into the
> > kernel. Simply put, KGPU enables vector computing for the kernel.
> >
> > It makes the Linux kernel really parallelized: it is not only
> > processing multiple requests concurrently, but can also partition a
> > single large requested computation into tiles and spread them across
> > the large number of cores on a GPU.
> >
> > KGPU is not an OS running on GPU; this is practically impossible
> > because of the limited functionality of current GPUs.
> >
> >
> > _____________________________________________________________________________________________
> >
> >
> >
> >
> > Best
> >
> > A. Mani
> >
> >
> >
> > --
> > A. Mani
> > ASL, CLC,  AMS, CMS
> > http://www.logicamani.co.cc
> >
> > _______________________________________________
> > Ilugd mailing list
> > [email protected]
> > http://frodo.hserus.net/mailman/listinfo/ilugd
> >
> >
> >
> > Thank you, Mr Mani, for sharing the information. I read a few days back
> > about a GPU-based supercomputer developed in our country.
> > Can we expect GPUs to replace CPUs in the future?
> >
> > Regards,
> > Niyaz
> >
> >
> 
> 
> 
> -- 
> VIDYA BHUSHAN SINGH
                                          