George, your implementation isn't cludgy by default. It's only slow and cludgy if the processes involved aren't tuned to work with file spooling. The basic design of MVWWW uses file spooling in much the same way, and I get very good throughput with it. It's nowhere near as efficient as pipes, direct sockets, or shared memory, but it is portable and simple to design around.

As with any software design, overall performance depends on the hardware running it. If you have a very slow file system, then a file-based solution is going to really drag. On the flip side, try running a ton of IPC-based processes on a system that is heavily swapping to disk. If you fully understand an architecture and what it requires, there is no reason it can't outperform a poorly implemented "better design". That said, it's very hard to compare apples to apples when one of them is apple sauce and the other is juice.

I run FlashCONNECT and have been happy with it with regard to processing time. Unfortunately, it still lacks proper phantom management: my error log shows many 505 errors when processes get overstepped. To compensate for the large number of errors, I run a couple more phantoms than are needed. That eats up my seat count, but I don't have any other options, and I still see 505 errors pop up now and then. MVWWW may be a half to three-quarters of a second slower to respond, but it's impossible for its phantoms to step over each other. Phantoms can be added and removed at will, without any service problems arising on the client side. Once it's been thoroughly tested and proven, I may migrate my code. There's always a trade-off when it comes to architectures.
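For what it's worth, here's a rough Python sketch of the spooled request/response cycle being described. The directory names, file naming, and polling interval are all made up for illustration; the write-then-rename step is there so a scanning phantom never picks up a half-written request.

    # Minimal sketch of a file-spooled request, assuming a phantom
    # watches REQ_DIR and drops replies into RESP_DIR. Paths and naming
    # conventions are hypothetical.
    import os, time, uuid

    REQ_DIR, RESP_DIR = "/spool/req", "/spool/resp"   # assumed spool dirs

    def spool_request(payload, timeout=10.0):
        rid = uuid.uuid4().hex                        # unique request id
        tmp = os.path.join(REQ_DIR, rid + ".tmp")
        final = os.path.join(REQ_DIR, rid + ".req")
        with open(tmp, "w") as f:                     # write fully, then rename,
            f.write(payload)                          # so the phantom never sees
        os.rename(tmp, final)                         # a partial request file
        resp = os.path.join(RESP_DIR, rid + ".resp")
        deadline = time.time() + timeout
        while time.time() < deadline:                 # poll for the phantom's reply
            if os.path.exists(resp):
                with open(resp) as f:
                    return f.read()
            time.sleep(0.05)
        raise RuntimeError("no response from phantom")

The polling loop is where the extra half-second or so comes from, relative to a live connection; the trade is simplicity and portability for latency.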
This is a general response, and probably useless info. I'm no expert, but this is a small tech niche I've been playing in for a long time now.

Performance, with regard to architecture, depends on several things. First, there is the client (CGI in this case). Second, there is the DB<->client bridge. Third, there is the backend application (U2 in this case). How each of these components is designed and implemented determines the performance and flexibility of the architecture. In most cases, flexibility detracts from performance, and the opposite is also true. You can, however, balance the two and get great performance without sacrificing flexibility.

Most fast performers use a real-time connection across the DB bridge. The CGI client makes a local service connection via IPC (inter-process communication). That local service is already connected across the DB bridge to a backend service, so request setup time is near zero, and client requests return content to the browser as fast as the backend application can complete. If content monitoring and control is put in place, then performance will decrease. If requests for an invalid app come in too often from the same IP, then that IP should be temporarily blocked from accessing ports 80, 443, 8080, etc.

Some fast performers remove the DB bridge completely and offer direct client-to-backend connections. While these are the quickest, they may lack control and recovery capability: a DoS (denial of service) attack can flood the web services and backend to a halt. A good direct solution should always have load control (phantom management) and reliable content monitoring.

Flexible implementations can use many techniques to handle processing. Most of them use modular components to handle each step of the process individually. In complex architectures such as these, performance is determined by the design of each component and by how well the components work together. A positive aspect of modular setups is the ability to monitor and control content between steps: you don't need an integral content manager, and plug-ins are a breeze to implement. The key to reliable modular implementations is reliance on well-known, pre-built tools and services. For example, writing your own operating system socket service can lead to system stability issues, especially if you don't know what you're doing. Likewise, relying on one specific component to handle most of the work can lead to load problems and possible system stability issues; launching a U2 shell directly from a web server is an example of a dangerous modular implementation. A good rule of thumb: if you don't fully understand the good and bad of what is being done, then don't do it.

I don't know the architecture of RedBack, so I can't categorize it. Can someone provide some info on that?

CGI is not a method, language, or API. CGI is a "common gateway interface". The standard provides a set of operating system environment variables, much like "TERM" and "PATH". Web-based services can obtain information indirectly from the web server through these environment variables and through standard input. CGI environment variables and input content are available to any application the web server can execute; a batch file could conceivably obtain CGI data and send content to standard output, though that's not advisable. Languages such as C, Perl, Python, PHP, Ruby, and even Bash are all known for CGI client development.
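To make the "local service connection" idea above concrete, here is a bare-bones Python sketch, assuming a long-running gateway daemon that already holds the DB-bridge connections and listens on a Unix domain socket. The socket path and the one-request-per-connection protocol are inventions for illustration, not any particular product's API.

    # Hypothetical CGI-side call into a persistent local gateway over IPC.
    import socket

    def gateway_request(payload):
        # Local IPC connect is near zero cost; the daemon on the other
        # end is already attached to the backend across the DB bridge.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect("/tmp/u2gate.sock")      # assumed socket path
        s.sendall(payload.encode() + b"\n")
        chunks = []
        while True:                        # daemon closes the socket when done
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return b"".join(chunks).decode()

The expensive backend connection is made once by the daemon and reused across requests; each CGI process only pays for a cheap local connect.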
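And to show just how little CGI itself involves, here is about the smallest Python CGI client possible, touching nothing but the standard environment variables and standard input/output. The output text is made up, but the variables used are the ones the CGI standard defines.

    # Bare-bones CGI client: environment variables in, stdout out.
    import os, sys

    method = os.environ.get("REQUEST_METHOD", "GET")
    query = os.environ.get("QUERY_STRING", "")
    body = ""
    if method == "POST":
        # POST data arrives on standard input, sized by CONTENT_LENGTH
        length = int(os.environ.get("CONTENT_LENGTH", "0") or "0")
        body = sys.stdin.read(length)

    # Response goes to standard output: headers, blank line, then content
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write("method=%s query=%s body=%s\n" % (method, query, body))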
The smarter the CGI client is, the less you have to worry about on the other side. As far as 'cludgy' architectures go, take a look at LPR. It's not a very pretty spooler, but it is quite fast and reliable.

I've studied and conceived many web-interfacing architectures over the past 6 years, and I've written and tested several architectures on my own. I've learned far more than I wanted to about TCP/IP, threading, memory heaps, and the differences between Winsock and *real sockets*. The MVWWW project is an attempt at producing a fast and portable, yet flexible, HTTP-layer solution that everyone can learn on. Does it replace commercial solutions? Nope. It's not even close to that caliber (yet). I've gotten some positive feedback on the project so far, but it's still in the beginning stages. I have big tandem project plans for it, though: a generic Pick XML service, using WSDL, is currently in design and development.

Btw, I have released a Win32 binary and a VC++ 6.0 project for the spooler. Get it under CVS Web or anonymous CVS. I haven't tested it yet, so that's why it's only in CVS. I also posted a how-to install guide for the Windows spooler; look under the project Docs.

In the end, the best solution is one of:

1) The long-lived commercial solution you can afford, that also provides the security, features, and support you need.

2) The free solution you can _properly_ build yourself, that doesn't inherit security and stability issues. Oh.. you'll need to support it yourself too.

3) The bundled non-commercial solution that follows the 'guidelines' I wrote about. You'll also need to support this yourself. If you don't understand the components, then don't bother with it.

-Glen
PickSource: http://picksource.com
MVWWW: http://mvwww.mvdevcentral.com

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of George Gallen
> Sent: Wednesday, February 23, 2005 11:09 AM
> To: [email protected]
> Subject: RE: [U2] U2 to web software
>
> Brian,
>
> >At the cludgy end you could use CGI to write a script (select
> >your language here) to write request information to a directory,
> >have a phantom scan it and write the response back, and then have
> >the CGI script send that back to the web server. Very simple, but
> >pretty slow and limited in practice.
>
> The above method is what I use at present. (it works and management
> doe$n't want to change method$). Cludgy is a good word.
>
> What I don't understand, and maybe someone can explain: why is
> this method slow as compared to Redback? Not saying Redback isn't
> faster... Just curious why? Is it that much faster?
>
> And what limits do you feel it has in practice (the CGI method, that is)?
>
> George

-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/
