RE: CVS
Ken, I am using Rcs.pm in production. Could you give me more details about the flaws you have found, and if possible could you post the patch (or code change)?

Thanks,
-Niraj

-----Original Message-----
From: Ken Y. Clark [mailto:[EMAIL PROTECTED]]
Sent: Thursday, November 15, 2001 9:43 AM
To: Jonathan M. Hollin
Cc: mod_perl Mailing List
Subject: Re: CVS

On Thu, 15 Nov 2001, Jonathan M. Hollin wrote:

> Date: Thu, 15 Nov 2001 14:31:57 -
> From: Jonathan M. Hollin <[EMAIL PROTECTED]>
> To: mod_perl Mailing List <[EMAIL PROTECTED]>
> Subject: CVS
>
> Hi people,
>
> I am currently developing a content management system under mod_perl,
> with data stored in an RDBMS (MySQL at present, but Oracle on the
> production server).
>
> I would like to add version control to published documents (read: pages)
> and wondered if anyone with experience of this would be willing to offer
> me some advice. I have a CVS server and am curious as to whether there
> is some way it can be used (bearing in mind that I want to manage DB
> data, not files). I would like to be able to roll back to any previous
> version (if possible), and would also like to document the different
> versions themselves.
>
> I'm thinking that I could maybe commit the database files to CVS and
> then use a module to communicate with the CVS server (Apache-CVS, VCP,
> VCS-CVS, etc.). Is this possible? Has anyone ever tried anything like
> this?
>
> I have searched CPAN and used Google to search the web and Usenet, but
> have so far drawn a blank.
>
> I suspect that I will not be able to use CVS in this manner and that I
> am therefore going to have to "roll my own". If this does turn out to be
> the case, can anyone offer me any guidance on how to work out what's
> changed in a record (between versions)? Then I can just store the
> changes in a DB as required.
>
> Jonathan M. Hollin - WYPUG Co-ordinator
> West Yorkshire Perl User Group
> http://wypug.pm.org/

Jonathan,

I worked on a system earlier this year that needed revision control of files. I decided to use RCS and the Rcs.pm Perl module. The Rcs.pm module actually had several flaws, which I tried to communicate to the author, but I never heard from him. However, with my fixes, I found RCS perfectly adequate for my needs. I interacted with a database as well (MySQL), but only to store the file's location and some metadata about the file. I really enjoyed using RCS, letting it handle the manipulation of the files. Personally, I didn't feel I could roll anything better than RCS, though you may feel differently about replicating CVS's functionality.

ky

LEGAL NOTICE
Unless expressly stated otherwise, this message is confidential and may be privileged. It is intended for the addressee(s) only. Access to this E-mail by anyone else is unauthorized. If you are not an addressee, any disclosure or copying of the contents of this E-mail or any action taken (or not taken) in reliance on it is unauthorized and may be unlawful. If you are not an addressee, please inform the sender immediately.
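For the "roll my own" route Jonathan mentions, one way to work out what changed between two stored revisions is a plain line-by-line comparison; CPAN's Algorithm::Diff does this properly, but a minimal self-contained sketch (all names here are invented for illustration, not from the thread) could look like this:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Naive revision comparison: store each saved revision as a full copy
# in the DB, and report which line numbers differ between two of them.
# (A real system would use Algorithm::Diff for proper edit hunks.)
sub changed_lines {
    my ($old, $new) = @_;
    my @old = split /\n/, $old;
    my @new = split /\n/, $new;
    my @changes;
    my $max = @old > @new ? scalar @old : scalar @new;
    for my $i (0 .. $max - 1) {
        my $o = $i < @old ? $old[$i] : undef;
        my $n = $i < @new ? $new[$i] : undef;
        push @changes, $i + 1
            if !defined $o or !defined $n or $o ne $n;
    }
    return @changes;
}

my @diff = changed_lines("title\nbody text\n", "title\nnew body\nextra\n");
print "changed lines: @diff\n";   # lines 2 and 3 differ
```

Storing only the changed lines per revision keeps the DB small, at the cost of having to replay changes to reconstruct an old version.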
RE: [BUG] $r->subprocess_env() leaking to %ENV
FYI: http://forum.swarthmore.edu/epigone/modperl/quajugrar/DDC7EF25B9D6D311A2 [EMAIL PROTECTED]

-----Original Message-----
From: Dominique Quatravaux [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 17, 2001 2:24 PM
To: [EMAIL PROTECTED]
Subject: [BUG] $r->subprocess_env() leaking to %ENV

Hello,

I found the following behaviour in mod_perl, which is clearly unexpected. I use an Apache that can serve both mod_perl and PHP pages. I run it with -X; first I request the URI for this scriptlet (under mod_perl's Apache::Registry):

    #!/usr/bin/perl
    $ENV{"LEAKING"} = "oops";

Then I visit a PHP page that contains just phpinfo() -- the LEAKING variable is passed to it; I can see it in HTTP_ENV_VARS. So far, so good: a lingering environment may not be what I want, but it certainly is not a bug.

It becomes worse with a module that sets things in the CGI namespace ($r->subprocess_env()), such as mod_ssl: if I request the Perl scriptlet through SSL, all variables in the subprocess_env (HTTPS, SERVER_CERTIFICATE et al., which normally show up as HTTP_SERVER_VARS in phpinfo()) are somehow promoted to environment variables and persist in the server process environment as shown above (i.e. they now appear as HTTP_ENV_VARS for subsequent requests).

This seems to happen at the time the first (I didn't test more) request to an Apache::Registry script is made in the life of an httpd: requesting phpinfo() pages beforehand doesn't show any spurious environment data, even through SSL.

Using a debugger shows (as expected from %ENV modifications) that the real environment (char **__environ) is modified in both scenarios, so this is not due to a bug in PHP. I ended up writing a cleanup handler that restores %ENV to what it was at server startup time, and everything works fine now.

Platform: RH 7.1, mod_perl 1.24_01, Apache 1.3.19, PHP 4.0.4pl1.

Sorry I cannot investigate any further by myself, because I know next to nothing about XS and mod_perl uses it heavily. I just saw that %ENV is tied by C code, but I hear that is related to tainting and I don't see how it could cause the observed behaviour.

--
<< Not everything there is perfect, but the gardeners are certainly honoured there >>
Dominique Quatravaux <[EMAIL PROTECTED]>
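The cleanup handler Dominique describes can be sketched outside Apache like this. The sub names are illustrative; in a real server the snapshot would be taken at startup and the restore registered as a PerlCleanupHandler:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the workaround described above: snapshot %ENV once at
# server startup, then restore it after every request so nothing
# leaks into subsequent requests.
my %startup_env;

sub save_startup_env { %startup_env = %ENV }

sub cleanup_handler {
    %ENV = %startup_env;   # drop anything the request added
}

save_startup_env();
$ENV{LEAKING} = 'oops';    # simulate a request polluting %ENV
cleanup_handler();
print exists $ENV{LEAKING} ? "still leaking\n" : "clean\n";   # prints "clean"
```

Note that assigning to %ENV as a whole also updates the process's real environment (since %ENV is magical), which is why this removes the variables from view of later subprocesses too.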
RE: flushing appears to be broken with perl 5.6.0
Stas,

I am printing 4k of data for each push:

    # this is to flush the buffer of the front-end proxy server
    my $new_line = "\n" x 4096;
    print $new_line;

(Make sure the gzip filter is off ...)

http://forum.swarthmore.edu/epigone/modperl/kerdsnestim/14702.7611.496757.13 [EMAIL PROTECTED]

I am sure there are other, more efficient solutions which I don't know about, but I would like to.

-Niraj

-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 08, 2001 11:18 AM
To: mod_perl list
Subject: Re: flushing appears to be broken with perl 5.6.0

On Mon, 8 Jan 2001, Stas Bekman wrote:

> Hi,
>
> This simple Apache::Registry script is supposed to print the PID and
> then hang. It used to work with older mod_perl/perl versions; it doesn't
> print the PID now -- rflush doesn't seem to work (neither does $|=1).
>
>     my $r = shift;
>     $r->send_http_header('text/plain');
>
>     $r->print("PID = $$\n");
>     $r->rflush;
>
>     while(1){
>         sleep 1;
>     }
>
> I've tested it with mod_perl-1.24_(01|02)/apache-1.3.14 and
> mod_perl-1.24/apache-1.3.12 with perl 5.6.0 (running on Linux).
>
> Has it something to do with bugs in 5.6.0? If you have the patched
> version of 5.6.0, can you please test it?

As pointed out by Niraj Sheth in a private reply, I had a problem with front-end buffering. Accessing the back-end server directly solves the problem. Which leads to a question: how does one do a real flush? Currently rflush and ($|=1) are quite useless if there is a buffering process on the front-end side.

Thanks.

_____________________________________________________________________
Stas Bekman              JAm_pH    -- Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://apachetoday.com  http://logilune.com/
http://singlesheaven.com http://perl.apache.org   http://perlmonth.com/
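Niraj's padding workaround can be wrapped in a small helper. The 4096 figure is his assumption about the front-end proxy's buffer size (related to the HUGE_STRING_LEN constant mentioned elsewhere in this archive), not a documented Apache setting:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build filler large enough to push the front-end proxy's buffer over
# its threshold, forcing it to forward the response accumulated so far.
# Newlines are harmless padding in a text/plain or text/html body.
sub flush_padding {
    my ($buffer_size) = @_;
    $buffer_size ||= 4096;     # Niraj's assumed proxy buffer size
    return "\n" x $buffer_size;
}

# In a handler one would do:  $r->print(flush_padding()); $r->rflush;
print length(flush_padding()), "\n";   # 4096
```

This wastes bandwidth on every forced flush, which is why the thread calls it inefficient; it only works at all when no gzip filter re-buffers the output.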
RE: env in background process
Thanks for looking at it. I prefer "%ENV = ();" in a PerlCleanupHandler (as I don't have to modify so many scripts). I don't think it has any negative effect ...

-Niraj

> -----Original Message-----
> From: Doug MacEachern [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, September 26, 2000 3:32 PM
> To: Niraj Sheth
> Cc: [EMAIL PROTECTED]
> Subject: RE: env in background process
>
> On Tue, 15 Aug 2000, Niraj Sheth wrote:
>
> > so why is dump_env getting both?
> > If I either uncomment "local %ENV = %ENV;" in the script or put
> > "%ENV = ();" in a PerlCleanupHandler, then dump_env works fine.
> > I tried both Apache::PerlRun and Apache::Registry with the same result.
>
> oh whoops, you did send a test case.  i think the problem is that
> mod_perl only clears the Perl side of %ENV, so the underlying C environ
> array is not modified.  if we do that then other problems pop up.  the
> best approach for the moment is to use:
>
>     local $ENV{FOO1} = 'foo1';
>
>     print `dump_env`;
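Doug's suggestion works because `local` on a hash element saves the entry's state and restores it (here: deletes the key, since it did not exist before) when the enclosing scope exits, so the setting never outlives the request. A standalone illustration, using the FOO1 name from the thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# `local $ENV{FOO1}' confines the environment change to this scope;
# a backtick or system() call made inside it would see FOO1=foo1.
sub with_local_env {
    local $ENV{FOO1} = 'foo1';
    return $ENV{FOO1};
}

my $inside = with_local_env();
my $after  = exists $ENV{FOO1} ? $ENV{FOO1} : '(unset)';
print "$inside / $after\n";   # prints "foo1 / (unset)"
```

Because %ENV is magical, the `local` also updates and then restores the real C-level environment, which is exactly what the background-process case needs.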
RE: env in background process
Didn't get any reply yet on this, so I think I am doing something very stupid ... Can anyone try it and tell me if you get the same result?

Thanks,
Niraj

> -----Original Message-----
> From: Niraj Sheth [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 15, 2000 11:03 AM
> To: [EMAIL PROTECTED]
> Subject: RE: env in background process
>
> Follow-up on this.
>
> script1.pl (sets the FOO1 env variable)
> ===
> #!/usr/local/bin/perl
>
> print "Content-type: text/html\n\n";
> print "PID = $$\n";
> print "SCRIPT1 with FOO1\n";
>
> #local %ENV = %ENV;
>
> $ENV{FOO1} = "foo1";
> print map { "$_ = $ENV{$_}\n"; } sort keys %ENV;
>
> $command = "dump_env";
> print `$command &`;   # put it in the background
>
> -- end
>
> script2.pl (sets the FOO2 env variable)
> ===
> #!/usr/local/bin/perl
>
> print "Content-type: text/html\n\n";
> print "PID = $$\n";
> print "SCRIPT2 with FOO2\n";
>
> #local %ENV = %ENV;
>
> $ENV{FOO2} = "foo2";
> print map { "$_ = $ENV{$_}\n"; } sort keys %ENV;
>
> $command = "dump_env";
> print `$command &`;   # put it in the background
>
> -- end
>
> dump_env
> ===
> #!/usr/local/bin/perl
>
> print "$0 @ARGV\n";
>
> print map { "$0 $_ = $ENV{$_}\n"; } sort keys %ENV;
>
> -- end
>
> Running "httpd -X", I get both FOO1 and FOO2 from the print statement of
> dump_env, while script1.pl prints ONLY FOO1 (which is correct) and
> script2.pl prints ONLY FOO2 (which is also correct).
>
> So why is dump_env getting both?
> If I either uncomment "local %ENV = %ENV;" in the script or put
> "%ENV = ();" in a PerlCleanupHandler, then dump_env works fine.
> I tried both Apache::PerlRun and Apache::Registry with the same result.
>
> I would appreciate any help.
>
> -Niraj
>
> > -----Original Message-----
> > From: Niraj Sheth [mailto:[EMAIL PROTECTED]]
> > Sent: Monday, August 14, 2000 12:10 PM
> > To: [EMAIL PROTECTED]
> > Subject: env in background process
> >
> > Hi,
> >
> > I am having a very strange problem with environment variables.
> >
> > From an Apache::PerlRun script (cgi) I am setting an env variable and
> > firing a background process:
> >
> >     system("$command &");   # or: print `$command &`;
> >
> > Now it looks like environment variables persist between different
> > requests, but ONLY in the background process. So it looks to me like
> > mod_perl is setting the proper "Perl level" env but failing to reset
> > the env at the "C level" or "process level". I know it sounds very
> > weird. /perl-status?env prints correctly, but my background process
> > ($command) is printing a few extra env variables, which I set in a
> > different cgi script.
> >
> > e.g. "script1.pl" is setting $ENV{foo1} = "foo1" and firing
> > print `command1 &`;
> > and "script2.pl" is setting $ENV{foo2} = "foo2" and firing
> > print `command2 &`;
> >
> > After a few hits, both env variables (foo1 and foo2) are visible to
> > both background processes, while /perl-status?env displays correctly.
> > Here command1 and command2 (perl scripts) just print the env.
> >
> > Apache/1.3.9 (Unix) mod_perl/1.21
> > Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
> >   Platform:
> >     osname=solaris, osvers=2.6, archname=sun4-solaris
> >     uname='sunos nlsun268 5.6 generic_105181-14 sun4u sparc sunw,ultra-4 '
> >     hint=recommended, useposix=true, d_sigaction=define
> >     usethreads=undef useperlio=undef d_sfio=undef
> >   Compiler:
> >     cc='gcc', optimize='-O', gccversion=2.8.1
> >     cppflags='-I/usr/local/include'
> >     ccflags ='-I/usr/local/include'
> >     stdchar='unsigned char', d_stdstdio=define, usevfork=false
> >     intsize=4, longsize=4, ptrsize=4, doublesize=8
> >     d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
> >     alignbytes=8, usemymalloc=y, prototype=define
> >   Linker and Libraries:
> >     ld='gcc', ldflags =' -L/usr/local/lib'
> >     libpth=/usr/local/lib /lib /usr/lib /usr/ccs/lib
> >     libs=-lsocket -lnsl -ldl -lm -lc -lcrypt
> >     libc=/lib/libc.so, so=so, useshrplib=false, libperl=libperl.a
> >   Dynamic Linking:
> >     dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
> >     cccdlflags='-fPIC', lddlflags='-G -L/usr/local/lib'
> >
> > Any comments?
> >
> > Thanks,
> > Niraj
How do I make the proxy server not buffer the output of the mod_perl server?
Hi All,

How do I configure the front-end proxy server NOT to buffer the output of the mod_perl server? ProxyReceiveBufferSize doesn't help. I read through Eric Cholet's post; he pointed out that HUGE_STRING_LEN (8k) is the buffer size. Is it configurable, maybe in the latest 1.3.12, or do I have to change it manually and recompile Apache?

Thanks,
Niraj
RE: pool of DB connections ?
Have you looked at the "Perl Cookbook"? It has a nice discussion of preforking servers, which you can customize to your requirements. For example, you can control exactly how many DB connections you want (background processes which keep persistent connections to the database). You can move this to another server if you want, or use a Unix domain socket (very fast compared to a TCP/IP socket).

Niraj

-----Original Message-----
From: Leslie Mikesell [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 29, 1999 11:00 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: pool of DB connections ?

According to Oleg Bartunov:

> I'm using mod_perl, DBI, ApacheDBI and was quite happy with persistent
> connections httpd<->postgres until I used just one database. Currently I
> have 20 apache servers which handle 20 connections to the database. If I
> want to work with another database I have to create another 20
> connections. Postgres is not a multithreading DB, so I will have 40
> postgres backends. This is too much. Any experience?

Try the common trick of using a lightweight non-mod_perl Apache as a front end, proxying the program requests to a mod_perl backend on another port. If your programs live under directory boundaries you can use ProxyPass directives. If they don't, you can use RewriteRules with the [P] flag to selectively proxy (or [L] to not proxy). This will probably allow you to cut the number of mod_perl httpds at least in half. If you still have a problem, you could run two back-end httpds on different ports, with the front end proxying the requests that need each database to separate backends. Or you can throw hardware at the problem and move the database to a separate machine with enough memory to handle the connections.

  Les Mikesell
   [EMAIL PROTECTED]
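Les's front-end/back-end split could look roughly like this in the lightweight front end's httpd.conf. The port 8080 and the path names are illustrative, not from the thread:

```apache
# Pass all mod_perl requests through to the heavy backend on another port
ProxyPass        /perl/ http://localhost:8080/perl/
ProxyPassReverse /perl/ http://localhost:8080/perl/

# When the apps don't share one directory, proxy selectively instead:
RewriteEngine On
RewriteRule   ^/app1/(.*)$  http://localhost:8080/app1/$1  [P]
RewriteRule   ^/images/     -                              [L]
```

The [P] flag hands the rewritten URL to mod_proxy, while [L] stops rewriting so static content is served locally by the front end.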
RE: Urgent--Any limit on Hash table in Shared Memory?
(For Solaris; I don't know much about other OSes.) In /etc/system:

    set shmsys:shminfo_shmmax=
    set shmsys:shminfo_shmmni=
    set shmsys:shminfo_shmseg=

(The kernel loads this at boot time.)

shmmax is the maximum amount of shared memory you can request, shmmni is the maximum number of shared-memory ids available in the system, and shmseg is the maximum number of shared-memory ids per process (it looks like your shmseg is 127). If needed, also modify Shareable.pm to set SHM_BUFSIZ to a higher value ... make sure you have enough ... Also try sysdef to see the settings.

Hope this helps.
Niraj

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 07, 1999 3:51 PM
To: [EMAIL PROTECTED]
Subject: Urgent--Any limit on Hash table in Shared Memory?

I used the IPC::Shareable module to construct a nested HASH table in shared memory. It worked fine during the "on-demand" test. When I moved from "on-demand" to "preload", an error came up saying "No space left on device". The machine has 0.5GB of memory and most of it is still available. Each entry in the hash table is less than 1K. The logfile I printed out during httpd startup indicates that each time, this error shows up when the 128th entry is constructed. So I wonder whether there is any limit on a shared hash table and whether there is a way around this problem. Right now, each buffer size is set to SHMMAX (32M), and a total of 4096 segments are allowed.

Another quick question: is there a way to get info on how many shared-memory segments have been allocated and how much physical memory is still available?

Any suggestion is appreciated!

-Martin
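To answer Martin's last question, the standard System V IPC tools report current shared-memory usage directly (sysdef is Solaris-specific; ipcs is available on most Unixes):

```shell
# List allocated shared-memory segments: key, id, owner, size
ipcs -m

# Show the configured shminfo_* kernel limits (Solaris)
sysdef | grep -i shm
```

Counting the lines of `ipcs -m` output against the shmmni/shmseg limits is the quickest way to confirm the 128th-entry failure is an id-count limit rather than a memory-size limit.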