Re: macperl-anyperl Digest 9 Dec 2011 11:43:02 -0000 Issue 112
From: Sean Murphy mhysnm1...@gmail.com
Date: December 9, 2011 (Heisei 23), 20:42:49 JST
To: macperl-anyp...@perl.org
Subject: DBI under the MAC

Hi All. I am having some strange behaviour. I have created a simple script to insert data into a SQLite database. When I use the prepare statement, the DBI driver complains that there is no such table. The code looks like this:

#!/usr/bin/perl
# Combining credit and savings sheets
# to find expenses and income.
use strict;
use DBI;

my $db_driver = 'SQLite';
my $db_file = 'budget.db';
my $dns = "DBI:$db_driver:database=$db_file";
my $dbh = DBI->connect($dns, '', '', { RaiseError => 1, AutoCommit => 0 });

my $sth1 = $dbh->prepare("insert into cat (name) values (?);")
    or die("Cannot prepare table: " . DBI::errstr());
my $sth2 = $dbh->prepare("insert into trans (accounts, transaction_date,
    description, amount, amount_type, transaction_type, serial, category_id)
    values (?, ?, ?, ?, ?, ?, ?, ?);")
    or die("Cannot prepare: " . DBI::errstr());

When the above is executed in the full script, we get the following error:

DBD::SQLite::db prepare failed: table trans has no column named accounts at ./insert_budget.pl line 60.
Cannot prepare: table trans has no column named accounts at ./insert_budget.pl line 60.

The schema for the table shows trans being present, as follows:

sqlite> .schema trans
CREATE TABLE trans (transaction_id int primary key, account int,
transaction_date date, description varchar(80), amount decimal(11,2),
amount_type varchar(3) not null, transaction_type varchar(40),
serial varchar(40), category_id int);
sqlite>

Any ideas what might be going on here? The drivers are being found; I have tested this by using the perl -d option with the script.

Sean

From the content of your question, I'm guessing that you're working with Mac OS X and therefore non-MacPerl perl. This list is for MacPerl, which is perl on the old pre-Mac OS X Mac systems. The list you probably want is at macosx@perl.org . I really shouldn't forward this to that list, but I will.
As far as your question is concerned, I'm not familiar with SQLite, but it looks to me like your problem is not Mac OS X related. You might want to check your table definition again. Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Tim's willingness to switch again.)
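To make the mismatch concrete (a minimal sketch, not the poster's code; it only builds the corrected SQL string): the quoted schema declares the column as account, singular, while the failing INSERT names accounts, so spelling the column name to match the schema should let the prepare succeed.

```perl
use strict;
use warnings;

# The schema quoted above defines the column as 'account' (singular);
# the failing INSERT named it 'accounts'. Build the INSERT from the
# schema's own column names so the two cannot drift apart.
my @cols = qw(account transaction_date description amount amount_type
              transaction_type serial category_id);
my $sql = sprintf "insert into trans (%s) values (%s)",
          join(', ', @cols), join(', ', ('?') x @cols);
print "$sql\n";
# $dbh->prepare($sql) should now succeed against the quoted schema.
```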
perl and apple mail?
Are any of you using perl plugins with Apple's Mail client? What I'm thinking of is something along the lines of supplementing the overly simple rules in Apple Mail with some perl regular expressions. Things like funneling all the broken-header junk off the top and into one folder, catching the real sharpies who like to put my mail address in the return address of their junk mail and putting that in a separate folder, that kind of stuff. I'm sure it could be done with the scripting API, but I'm always all thumbs with AppleScript. I've been looking around the web, and there are tantalizing clues, but not enough for a lame-brain like myself to grab hold of. If anyone is doing something like this, can you hit me with a cluestick? Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
Re: interaction between tr and s (was Re: tr question -- probably wrong list to ask, but ...)
For the record --

Is UTF-8 input coming from the likes of Apache a possible source of failure? pack may need to allow for the endian-ness of a specific machine.

Well, it depends on how one looks at things, perhaps. I think one of the probable reasons for the failure in the DWIM machinery was that I am insisting on using shift-JIS characters in the source file instead of utf-8 in strings and comments. But, no, Apache wasn't filtering shift-JIS to utf-8 for me. Byte order also was not the problem.

After several hours of analysis (using more of the stuff that made the original posting of the source somewhat opaque), I determined that the problem derived from perl sometimes being stricter about shift-JIS than I wanted it to be. I don't know why the '+' substitute for space would switch to strict character interpretation, but it seems to have been doing so.

Shift-JIS is a variable byte width encoding, one or two bytes. Lead bytes are inherently not valid as single-byte characters. Trailing bytes are sometimes valid as single-byte characters and sometimes not. If the regular expression engine is not checking for valid bytes, all you have to do is string the decoded bytes together. But if it is checking for valid bytes, you have to put the decoded bytes into something other than a char. (Blame C for folding the type of a byte onto the type of a character.) But if you are collecting into 16-bit words, you have to actually check for the lead bytes yourself. I'm sure someone could put an RE together that would do it, but I just decided it was going to be simpler to check and build the string by hand.

So, for anybody who's curious, here's what I'm doing for now:

---
my $qString = $ENV{'QUERY_STRING'};
my @list = split( '&', $qString, 10 );
my %queries = ();
foreach my $pair ( @list )
{
    my ( $key, $value ) = split( '=', $pair, 2 );
    # Really should just give in and use CGI.
    # $key =~ tr/+/ /;	# You don't expect space in identifiers, but, ...
    $key =~ s/%([\dA-Fa-f][\dA-Fa-f])/pack ('C', hex ($1))/eg;
    # $queries{ $key . '_' } = $value;	# dbg
    $value =~ tr/+/ /;
    my ( $byteAccm, $hexAccm, $conv ) = ( 0, undef, '' );
    while ( $value =~ m/%([\dA-Fa-f][\dA-Fa-f])|(.)/g )
    {
        if ( defined ( $1 ) )
        {
            my $hexValue = $1;
            my $decValue = hex ( $hexValue );
            if ( ! defined ( $hexAccm ) )
            {
                if ( $decValue < 0x80
                     || ( $decValue >= 0xa0 && $decValue < 0xe0 )
                     || $decValue >= 0xfd )
                {
                    $conv .= pack( 'C', $decValue );
                }
                else	# Lead byte -- loose checks all around.
                {
                    $byteAccm = $decValue;
                    $hexAccm = $hexValue;
                }
            }
            else
            {
                # if ( $decValue >= 0x40 || ( $decValue > 0xa0 && $decValue < 0xe0 ) )
                $conv .= pack( 'S', ( $byteAccm << 8 ) + $decValue );
                $byteAccm = 0;
                $hexAccm = undef;
            }
        }
        else
        {
            my $cValue = $2;
            my $decValue = ord ( $cValue );
            if ( ! defined ( $hexAccm ) )
            {
                $conv .= $cValue;
            }
            else
            {
                # if ( $decValue >= 0x40 || ( $decValue > 0xa0 && $decValue < 0xe0 ) )
                $conv .= pack( 'S', ( $byteAccm << 8 ) + $decValue );
                $byteAccm = 0;
                $hexAccm = undef;
            }
        }
    }
    $queries{ $key } = $conv;
}
---

If this were production code, I should check some more gaps in the lead byte (and check where the newest JIS adds the extra several thousand characters) and uncomment the checks on the trailing bytes (and add some trailing-byte checks specific to certain lead bytes, geagh). But then I have to figure out what to do with bad bytes.

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
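The byte classification described above can be sketched as a small predicate (my helper, not from the post; the ranges assumed are the usual Shift-JIS ones: lead bytes 0x81-0x9F and 0xE0-0xFC, with 0xA1-0xDF being the single-byte half-width katakana):

```perl
use strict;
use warnings;

# Sketch of a Shift-JIS lead-byte test, per the ranges discussed above.
# A lead byte is never valid on its own; 0xA1-0xDF alone are the
# half-width katakana, which is why stripped lead bytes leave stray
# katakana behind in the corrupted output shown later in the thread.
sub is_sjis_lead_byte {
    my ($b) = @_;
    return ( ( $b >= 0x81 && $b <= 0x9f ) || ( $b >= 0xe0 && $b <= 0xfc ) )
           ? 1 : 0;
}

printf "0x82: %d, 0xB1: %d, 0x41: %d\n",
       is_sjis_lead_byte(0x82), is_sjis_lead_byte(0xb1), is_sjis_lead_byte(0x41);
```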
interaction between tr and s (was Re: tr question -- probably wrong list to ask, but ...)
Okay, given the following (without all the debugging code I had in earlier):

# The code that grabs the parameters:
my $qString = $ENV{'QUERY_STRING'};
my @list = split( '&', $qString, 10 );
my %queries = ();
foreach my $pair ( @list )
{
    my ( $key, $value ) = split( '=', $pair, 2 );
    $key =~ s/%([\dA-Fa-f][\dA-Fa-f])/pack ('C', hex ($1))/eg;
    $value =~ tr/+/ /;
    $value =~ s/%([\dA-Fa-f][\dA-Fa-f])/pack ('C', hex ($1))/eg;
    $queries{ $key } = $value;
}

Anyone know why commenting out the transliteration will recover the shift-JIS characters from the url-encoded stream (leaving spaces as '+', of course), but leaving the transliteration in will induce the code to drop shift-JIS lead bytes and every now and then whole characters?

I had a similar problem with

$value =~ s/\+/ /g;

but it was an intermittent problem. (Haven't tried it today to see whether it only kills the shift-JIS characters when there is 8-bit space in the stream, but that may have been what was happening.)

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
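As a standalone check of those two substitutions (a minimal sketch using the first URL-encoded Shift-JIS bytes from the sample string in this thread; nothing here is CGI-specific):

```perl
use strict;
use warnings;

# Exercise the '+'-to-space and %XX-to-byte steps in isolation.
# '%82%B1' is the first Shift-JIS character of the thread's sample.
my $value = 'this+is+a+test.+%82%B1';
$value =~ tr/+/ /;
$value =~ s/%([\dA-Fa-f][\dA-Fa-f])/pack('C', hex($1))/eg;

# 16 ASCII characters plus the two raw bytes 0x82 0xB1.
printf "len=%d last=0x%02X\n", length($value), ord(substr($value, -1));
```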
Okay, it's not tr after all.
On 2007/12/01 (Heisei 19), at 12:04, Chas. Owens wrote:

On Nov 30, 2007 9:43 PM, Joel Rees [EMAIL PROTECTED] wrote:

I guess it would help if I posted my code and what it puts out.

snip

Whoa, way too much information. :-) Try to reproduce your issue with the least amount of code and data.

Actually, after cleaning out some of the extraneous stuff, I can see it is not tr/// doing the dirty deed after all (which is a relief, in a way):

chatter_0	this+is+a+test.+%82%B1%82%EA%82%CD%83e%83X%83g%82%C5%82%B7%81B
chatter_tr1	this is a test. %82%B1%82%EA%82%CD%83e%83X%83g%82%C5%82%B7%81B
chatter_tr	this is a test. アヘeXgナキB
chatter	this+is+a+test.+これはテストです。

Even though stripping the '+' out forces what had been intermittent behavior, I can see that tr/// is doing its job right.

Off hand I would say your problem is probably with the encoding of your data (and Perl's lack of knowledge about it). Try using the locale or encoding pragmas.

Yeah, I'll have to go back to that black magic.

Thanks, everybody, for being listening ears.

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
Re: tr question (probably wrong list to ask, but ...)
function_tr	eキ

# ---results-extract-hexdump-lined-up---
      63 68 61 74 74 65 72 09                          |chatter.|
0008  74 68 69 73 2b 69 73 2b 61 2b 74 65 73 74 2e 2b  |this+is+a+test.+|
0018  82 b1 82 ea 82 cd 83 65 83 58 83 67 82 c5 82 b7 81 42 0a  |.......e.X.g.....B.|
002b  63 68 61 74 74 65 72 5f 74 72 09                 |chatter_tr.|
0036  74 68 69 73 20 69 73 20 61 20 74 65 73 74 2e 20  |this is a test. |
0046  b1 cd 65 58 67 c5 b7 42 0a                       |..eXg..B.|
004f  66 75 6e 63 74 69 6f 6e 09                       |function.|
0058  93 8a 8d 65 82 b7 82 e9 0a                       |...e.....|
0061  66 75 6e 63 74 69 6f 6e 5f 74 72 09              |function_tr.|
006d  65 b7 0a                                         |e..|

# ---results-extract-hexdump-straight---
      63 68 61 74 74 65 72 09 74 68 69 73 2b 69 73 2b  |chatter.this+is+|
0010  61 2b 74 65 73 74 2e 2b 82 b1 82 ea 82 cd 83 65  |a+test.+.......e|
0020  83 58 83 67 82 c5 82 b7 81 42 0a 63 68 61 74 74  |.X.g.....B.chatt|
0030  65 72 5f 74 72 09 74 68 69 73 20 69 73 20 61 20  |er_tr.this is a |
0040  74 65 73 74 2e 20 b1 cd 65 58 67 c5 b7 42 0a 66  |test. ..eXg..B.f|
0050  75 6e 63 74 69 6f 6e 09 93 8a 8d 65 82 b7 82 e9  |unction....e....|
0060  0a 66 75 6e 63 74 69 6f 6e 5f 74 72 09 65 b7 0a  |.function_tr.e..|

# ---results-end---

The lines

0018  82 b1 82 ea 82 cd 83 65 83 58 83 67 82 c5 82 b7 81 42 0a  |.......e.X.g.....B.|
0046  b1 cd 65 58 67 c5 b7 42 0a  |..eXg..B.|

and

0058  93 8a 8d 65 82 b7 82 e9 0a  |...e.....|
006d  65 b7 0a  |e..|

tell the tale. Okay, so it looks like it isn't just stripping the lead bytes; every now and then I'm losing a full JIS character.

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
tr question (probably wrong list to ask, but ...)
This is probably the wrong list for this question, but is anyone willing to give me a clue why

$line =~ tr/+/ /;

would clip out the lead bytes of a shift-JIS string in a cgi script? Come to think of it, I think it's being applied while the string is still hex-encoded, so it makes even less sense to me.

(I know, I should be letting the CGI module decode the url-encoded string. But I seem to be misunderstanding something fundamental here. Which is why a newbies list would probably be better for this question.)

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
Re: Locale weirdness
Responding without thinking, but,

On 2007/10/24 (Heisei 19), at 4:44, David Cantrell wrote:

As some of you may know, I'm one of the cpan-testers. I recently sent a test failure for Log-Report-0.11 on OS X. The author is most puzzled about what's happening, and once I gave him a guest account he could play with, he found that ...

What I found out is that locale -a says that nl_NL exists, and /sw/share/locale/nl/glibc.mo is present. However, LANG=nl ls /xx is still in English. Don't know why.

This seems rather odd. Anyone know what's going on?

My memory is that Apple is not using the same locale mechanisms as most of the rest of the *nix world. [clickety-clackety] Hmm. Yeah, the LANG environment variable is not set in the default shell in my family account (Mac OS 10.4), which has Japanese at the top of the language list in the system preferences. I never have yet bothered figuring out why/how Mac OS makes the foreign language stuff work. (Not much interested, any more.)

-- David Cantrell | Cake Smuggler Extraordinaire
Repent through spending

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
Re: Leopard Perl version...
On 2007/10/16 (Heisei 19), at 19:56, David Cantrell wrote:

On Sun, Oct 14, 2007 at 11:32:09AM -0700, Edward Moy wrote:

So software updates are restricted to keep the size down. Because most users do not use the command line or develop software, updates to command-line programs never make the cut (developer software has its own update channel).

This makes perfect sense. Is it possible to add this separate channel to Software Update?

My understanding is that it is what you might call a manual channel. (Which is the way I prefer it, even if it sometimes seems inconvenient.)

Joel Rees (waiting for a 3+GHz ARM processor to come out, to test Steve's willingness to switch again.)
Re: Leopard Perl version...
*ahem* Go back and read Mr Moy's email address. He may be in a position to answer this question definitively. :-)

FWIW, I didn't see Ed saying that it had been 5.8.8 in _10.4_ for quite a while. (His mode of expression was, admittedly, a bit ambiguous.) But unless 10.4 is tracking a different version between uNTEL and PPC, my system perl is 5.8.6 with all the current 10.4 updates installed.

Joel Rees (Yeah, contrary to brags I made when Apple switched, I haven't had the money to move my home server to openbsd yet. First it was that the Japanese input method in Fedora Core wasn't good enough to ask my wife to use the FC box. That's up to snuff now, but I find that the Mac fills some of the gaps that the Linux box doesn't cover, letting me share data with the MSWorld. Now I'm waiting for an ARM processor that runs at 3 GHz to come out, so Jobs will find himself faced with the question of switching again. In my dreams?)
Re: Can't Install DBD::mysql
On 2007/06/18 (Heisei 19), at 6:37, Lola J. Lee Beno wrote:

Enrique Terrazas wrote:

I installed the intel version of mysql (mysql-5.0.41-osx10.4-i686.dmg), which went fine. I then tried to install DBD::mysql with the following parameters:

perl Makefile.PL --testdb=test --testuser=testuser --testpassword=password --libs -L/usr/local/mysql/lib --cflags -I/usr/local/mysql/include

and got the following errors ... help!

Yep . . . had the same problem. Fortunately, someone sent me this info:

http://jayallen.org/journey/2006/04/dbd-mysql-build-problems-on-mac-book-pro

Hope this helps.

I haven't yet gotten this to work because of this - how do you properly terminate a multi-line command like the following in Terminal?

sudo perl Makefile.PL \
--cflags=-I/usr/local/mysql/include -Os -arch i386 -fno-common \
--libs=-L/usr/local/mysql/lib -lmysqlclient -lz -lm

I'm thinking that the lack of a backslash terminates the command line. What's it doing (or not)?

-- Lola J. Lee Beno - ColdFusion Programmer/Web Designer for Hire
http://www.lolajl.net/resume | Blog at http://www.lolajl.net/blog/
In rivers, the water that you touch is the last of what has passed and the first of that which comes; so with present time. - Leonardo da Vinci (1452-1519)
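A sketch of how that invocation is usually written (the quotes are my addition, not from the original post, on the assumption that each multi-word flag value is meant to reach Makefile.PL as a single argument): every continued line ends with a backslash immediately before the newline, and the --cflags/--libs values are quoted so the shell doesn't split them.

```shell
# Illustrative only -- not run here against a real MySQL install.
# The backslash must be the very last character on the line (no
# trailing spaces), and the quotes keep each flag value as one argument.
sudo perl Makefile.PL \
  --cflags="-I/usr/local/mysql/include -Os -arch i386 -fno-common" \
  --libs="-L/usr/local/mysql/lib -lmysqlclient -lz -lm"
```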
Re: Strange problem with @INC
On 2007/01/10 (Heisei 19), at 9:25, Jesse Engel wrote:

hmm, no, just the default /usr/bin/perl. i've thought about installing 5.8.8, but haven't yet. i changed my shell to bash (doesn't everyone?) in both xterm and apple_terminal and i did make a .bashrc in which i changed the default value of $PATH to this:

PATH=$PATH:/sw/bin:/usr/X11R6/bin:/usr/local/bin:/usr/local/bin:/sw/share/doc/man

Some reason for having a man directory in your executable path? Would you perhaps prefer to set a manpath (man man)? Or maybe I'm just busybodying, in which case just ignore me.
Re: Encode-JIS2K-0.02 problem
On 2007/01/03, at 23:52, Nobumi Iyanaga wrote:

Hello,

I downloaded and installed Encode-JIS2K-0.02. The install log says that all tests were successful. But when I do this:

#!/usr/bin/perl
use strict;
use warnings;
use Encode::JIS2K;
use Encode qw/encode decode/;
my $infile = "some_shiftjisx0123.txt";
undef $/;
open (IN, $infile);
$_ = <IN>;
close (IN);
binmode (STDOUT, ":utf8");
$_ = decode ("shiftjisx0123", $_);
print;

I get this error message:

untitled text 4:21: Unknown encoding 'shiftjisx0123'

Is that a typo? What am I doing wrong...??

Maybe 0123 should be 0213? (I've never seen the version number for jis tagged on the end, but ...)

---

And -- if I can solve this problem, I would like to find out from text files in shiftjisx0123 characters which belong only to JIS X 0213, not to JIS X 0212. Is this possible...??

I'm sure it's possible, either by making something like an isprint boolean table for each entire character set, or by slurping the file and scanning it in parallel from memory. I think it should even be possible to open two read-only streams on the same file, read characters out, and throw some message when the one doesn't match the other. Don't know if there are any shortcut tools for it.

Thank you very much in advance.

Best regards,

Nobumi Iyanaga
Tokyo, Japan
Re: Module Aspell::Text does not install
sw/ is specifically chosen as _not_ a common location, so that it doesn't interfere with anything you might have put in the common locations. (Or that Apple might put there.)

Just for the record, I've never seen anything from Apple put under /usr/local .

My observation is that most software is set up by default to install under /usr/local if you get it from the original source, but if you get it downstream, it will tend to get installed elsewhere. If your distro has official distro package versions (customized for the distro), those will tend to go under /usr . In the case of fink, the packages are neither official distro (Apple) nor the original source, and they are almost invariably customized, so it makes sense to put them under a third tree, thus /sw .

Mind you, I don't use fink; I generally tend to go to the source. I'm kind of weird that way. (And then I have to bug Sherm to explain why things don't work for me. Heh.)
Re: Bundle::XML and expat problems
On 2006/11/26, at 0:00, Sherm Pendley wrote:

On Nov 25, 2006, at 2:28 AM, Joel Rees wrote:

Expat must be installed prior to building XML::Parser and I can't find it in the standard library directories. You can download expat from:

Can anyone tell me why cpan can't find the expat libraries in /usr/local/lib?

/usr/local/lib is not in the default search path for Mac OS X.

Hmm. That explains this:

---
beast:/usr/local/share family$ /usr/local/bin/perl -e 'for (@INC) { print $_ . "\n" }'
/usr/local/lib/perl5/5.8.8/darwin-2level
/usr/local/lib/perl5/5.8.8
/usr/local/lib/perl5/site_perl/5.8.8/darwin-2level
/usr/local/lib/perl5/site_perl/5.8.8
/usr/local/lib/perl5/site_perl
.
---

or what variable I must have walked on to make perl forget that it was installed under /usr/local ?

Did you add -L/usr/local/lib to the default linker flags when you built your Perl?

I was assuming that, since it would default to linking the executables under /usr/local/bin, putting /usr/local/lib in the search path would just be common sense. But I see from

beast:/usr/local/share family$ perl -e 'for (@INC) { print $_ . "\n" }'
/System/Library/Perl/5.8.6/darwin-thread-multi-2level
/System/Library/Perl/5.8.6
/Library/Perl/5.8.6/darwin-thread-multi-2level
/Library/Perl/5.8.6
/Library/Perl
/Network/Library/Perl/5.8.6/darwin-thread-multi-2level
/Network/Library/Perl/5.8.6
/Network/Library/Perl
/System/Library/Perl/Extras/5.8.6/darwin-thread-multi-2level
/System/Library/Perl/Extras/5.8.6
/Library/Perl/5.8.1
.

that /usr/lib isn't in the stock perl's library path either.

Okay, so what's the usual thing to do with stuff like expat? Build expat to install under site_perl/5.8.8/ (or site_perl/5.8.8/expat?) or maybe link the already installed libraries in there? Or build everything that depends on it by hand? I hate to admit it, for all the behaving like I know what I'm doing, but I seem to be flying blind here.
(Blame it on my wife for not being willing to put up with being a computer widow, maybe, but that wouldn't really be either very fair or very accurate. I'm just lazy or slow or both.)

Hmm. I'm going to try linking the expat libraries under, ... what? /usr/local/lib/perl5/site_perl, since there's no /usr/local/lib/perl5/Extras? And which should I link in?

Isn't there a way to just tell CPAN to tell the build scripts to add /usr/local/lib to @INC ?
Re: Bundle::XML and expat problems
On 2006/11/24, at 0:54, Sherm Pendley wrote:

On Nov 23, 2006, at 8:26 AM, Joel Rees wrote:

opossum:~/.cpan/build/XML-Parser-2.34 jmr$ make test
PERL_DL_NONLAZY=1 /usr/local/bin/perl -MExtUtils::Command::MM -e "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/astress......FAILED tests 19-20, 24
	Failed 3/27 tests, 88.89% okay
t/cdata........ok
t/decl.........ok
t/defaulted....ok
t/encoding....."my" variable $p masks earlier declaration in same scope at t/encoding.t line 94.
t/encoding.....ok
t/external_ent.ok
t/file.........ok
t/finish.......ok
t/namespaces...ok
t/parament.....ok
t/partial......ok
t/skip.........ok
t/stream.......ok
t/styles.......ok
Failed Test  Stat Wstat Total Fail  List of Failed
--------------------------------------------------
t/astress.t                27    3  19-20 24
Failed 1/14 test scripts. 3/130 subtests failed.
Files=14, Tests=130,  3 wallclock secs ( 1.36 cusr + 0.41 csys = 1.77 CPU)
Failed 1/14 test programs. 3/130 subtests failed.
make: *** [test_dynamic] Error 255

With that high a success rate, I'd just go ahead with the make install step. Obviously the bug - assuming that there is one, and it's not a problem with the tests themselves - is fairly obscure. Otherwise it would be biting every test.

Not exactly what I wanted to hear. I don't like tests that fail, particularly since this is intended to be for CGI or maybe even mod_perl. Probably something wrong in my personality.

Hmm. Maybe I'll go put a parallel install of perl in my Linux box and try loading Bundle::XML there to see what happens. Maybe I need to do something else the rest of today.
Re: Bundle::XML and expat problems
Well, you know, my real question here is not so much about Bundle::XML but,

---from cpan attempt to install Bundle::XML---
Removing previously used /Users/jmr/.cpan/build/XML-Parser-2.34

CPAN.pm: Going to build M/MS/MSERGEANT/XML-Parser-2.34.tar.gz

Note (probably harmless): No library found for -lexpat

Expat must be installed prior to building XML::Parser and I can't find
it in the standard library directories. You can download expat from:
---

fossile:~ family$ ls -l /usr/local/lib
total 1478
-rw-rw-r--   1 root admin    2396 Nov  4 17:01 charset.alias
-rwxr-xr-x   1 root admin  338184 Nov  4 22:39 libexpat.1.5.0.dylib
lrwxr-xr-x   1 jmr  admin      20 Nov  4 22:40 libexpat.1.dylib -> libexpat.1.5.0.dylib
-rw-r--r--   1 root admin  410112 Nov  4 22:39 libexpat.a
lrwxr-xr-x   1 jmr  admin      20 Nov  4 22:40 libexpat.dylib -> libexpat.1.5.0.dylib
-rwxr-xr-x   1 root admin     811 Nov  4 22:39 libexpat.la
drwxrwxr-x   4 root admin    1024 Nov 18 08:57 perl5
fossile:~ family$
---

Can anyone tell me why cpan can't find the expat libraries in /usr/local/lib ? or what variable I must have walked on to make perl forget that it was installed under /usr/local ?
Re: Bundle::XML and expat problems
On 2006/11/23, at 22:20, Joel Rees wrote:

Trying to install Bundle::XML through CPAN on my parallel install of perl in /usr/local, I get this stuff that I recall seeing some talk of on the list, but I can't remember where I saved the text from the build sessions. cpan gave me some notices about not being able to find expat's libs, and the Makefile told me to define some variables on the perl command line, PERLEXPATLIB and PERLEXPATINCLUDE or something like that. I tried invoking Makefile.PL directly, as the error messages suggested, got a compile, and some tests fail, and I'm too sleepy to remember any of this. If anyone can read my mind and can tell me what I'm doing wrong before I dig those messages back up tomorrow, I'd sure appreciate it.

joel, recognizing that trying to install things in his sleep is one of the things he is doing wrong

Okay, here's the snippets from the error messages, if they make things any clearer:

---from cpan attempt to install Bundle::XML---
Removing previously used /Users/jmr/.cpan/build/XML-Parser-2.34

CPAN.pm: Going to build M/MS/MSERGEANT/XML-Parser-2.34.tar.gz

Note (probably harmless): No library found for -lexpat

Expat must be installed prior to building XML::Parser and I can't find
it in the standard library directories. You can download expat from:

http://sourceforge.net/projects/expat/

If expat is installed, but in a non-standard directory, then use the
following options to Makefile.PL:

    EXPATLIBPATH=...  To set the directory in which to find libexpat
    EXPATINCPATH=...  To set the directory in which to find expat.h

For example:

    perl Makefile.PL EXPATLIBPATH=/home/me/lib EXPATINCPATH=/home/me/include

Note that if you build against a shareable library in a non-standard
location you may (on some platforms) also have to set your
LD_LIBRARY_PATH environment variable at run time for perl to find the
library.

Warning: No success on command[/usr/local/bin/perl Makefile.PL]
Running make test
  Make had some problems, won't test
Running make install
  Make had some problems, won't install
---

---from attempt to make by hand---
perl Makefile.PL EXPATLIBPATH=/usr/local/lib EXPATINCPATH=/usr/local/include
Checking if your kit is complete...
Looks good
Writing Makefile for XML::Parser::Expat
Writing Makefile for XML::Parser
opossum:~/.cpan/build/XML-Parser-2.34 jmr$ make
cp Parser/Encodings/x-sjis-cp932.enc blib/lib/XML/Parser/Encodings/x-sjis-cp932.enc
...
opossum:~/.cpan/build/XML-Parser-2.34 jmr$ make test
PERL_DL_NONLAZY=1 /usr/local/bin/perl -MExtUtils::Command::MM -e "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/astress......FAILED tests 19-20, 24
	Failed 3/27 tests, 88.89% okay
t/cdata........ok
t/decl.........ok
t/defaulted....ok
t/encoding....."my" variable $p masks earlier declaration in same scope at t/encoding.t line 94.
t/encoding.....ok
t/external_ent.ok
t/file.........ok
t/finish.......ok
t/namespaces...ok
t/parament.....ok
t/partial......ok
t/skip.........ok
t/stream.......ok
t/styles.......ok
Failed Test  Stat Wstat Total Fail  List of Failed
--------------------------------------------------
t/astress.t                27    3  19-20 24
Failed 1/14 test scripts. 3/130 subtests failed.
Files=14, Tests=130,  3 wallclock secs ( 1.36 cusr + 0.41 csys = 1.77 CPU)
Failed 1/14 test programs. 3/130 subtests failed.
make: *** [test_dynamic] Error 255
Re: 5.8.8 builded but...
On 2006/10/29, at 17:02, kurtz le pirate wrote:

me too, ok, now, with no sdk options, building perl is ok.

#./configure
#make
...
Everything is up to date. Type 'make test' to run test suite.
#make test
...
All tests successful.
u=3.97  s=3.49  cu=216.67  cs=71.70  scripts=931  tests=117291
#

...but I did not pay attention to the installation prefix and, because I was in my download folder, perl was built there :((

#perl -v
This is perl, v5.8.1-RC3 built for darwin-thread-multi-2level
...
#/Users/admin/Downloaded/devel/perl/perl-5.8.8/perl -v
This is perl, v5.8.8 built for darwin-2level
...

shame on me ! any chance to repair that ?

Did you do the make install? The make install step moves your executables where you specify. If you don't specify, the default is /usr/local/bin which, in my opinion, is as good a place as any.

thanks
-- klp
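For what it's worth, a sketch of the usual repair (assuming a stock source tree; -Dprefix is a standard Configure option, and the build-directory path is the one quoted above): re-run Configure with an explicit prefix, then run make install, which is the step that actually copies perl out of the build directory.

```shell
# Illustrative only -- a full rebuild/install sequence.
# -des takes the defaults non-interactively; -Dprefix controls where
# "make install" places the perl executable and libraries.
cd /Users/admin/Downloaded/devel/perl/perl-5.8.8
sh Configure -des -Dprefix=/usr/local
make
make test
make install   # installs into /usr/local/bin, /usr/local/lib/perl5, ...
```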
Uh-oh, the C compiler 'cc' doesn't seem to be working.
PPC Mac Mini, Mac OS X 10.4.8. I'm following along in README.macosx and I've done this:

export SDK=/Developer/SDKs/MacOSX10.4u.sdk

and I call

./Configure -Accflags=-nostdinc -b$SDK/user/include/gcc -B$SDK/usr/lib/gcc -isystem$SDK/usr/include -F$SDK/System/Library/Frameworks -Aldflags=-W1,-syslibroot,$SDK

as suggested. I take defaults until it checks the compiler. Here's what happens there:

--
Use which C compiler? [cc]

powerpc-apple-darwin8-gcc-4.0.1: '-b' must come at the start of the command line

Uh-oh, the C compiler 'cc' doesn't seem to be working.

powerpc-apple-darwin8-gcc-4.0.1: '-b' must come at the start of the command line

Uh-oh, the C compiler 'cc' doesn't seem to be working.

You need to find a working C compiler. Either (purchase and) install the C compiler supplied by your OS vendor, or for a free C compiler try http://gcc.gnu.org/ I cannot continue any further, aborting.
--

The README indicates that the parameters are useful for making perl aware of the SDK, as I understand it, so I'm going to try this without parameters. But I thought I'd go ahead and ask if anyone can confirm that these parameters aren't necessary if I'm not making the parallel install aware of the SDK, or if anyone can give me a hint about what I'm doing wrong. (I've never used such parameters when compiling perl in the past; I haven't done an install in a year or so.)

There is a possibility I erased part of the SDK when I was intending to move an alias to Xcode into the Applications folder and forgot that the actual bundle moves when users that can modify the Applications folder (admin users) drag and drop without the option key.

Also, the notes in README.macosx seem to indicate that shared libraries and threads are now functional. Anyone on the list here using them with (parallel installs of) apache 2 and mod_perl?
Re: Uh-oh, the C compiler 'cc' doesn't seem to be working.
Hi, Sherm,

On 2006/10/28, at 21:36, Sherm Pendley wrote:

On Oct 28, 2006, at 7:00 AM, Joel Rees wrote:

PPC Mac Mini, Mac OS X 10.4.8, I'm following along in README.macosx and I've done this:

export SDK=/Developer/SDKs/MacOSX10.4u.sdk

Well, first things first. You asked if you *really* need to use the SDK. You need to use an SDK if you're cross-compiling, such as:

a. Making a Universal Binary build of Perl, either to distribute or to use for building and distributing UB modules.

b. Building Perl to run on an OS version other than the one you're building with.

Then since this copy of perl is for apache 2, I don't need it. Thanks.

There is a possibility I erased part of the SDK

SDKs are separate sub-packages in the Xcode package, so you can re-install them pretty easily, without installing the whole thing.

Yeah, I figured re-installing Xcode should be sufficient, since the only thing that I moved (and then tried to delete because I thought it was an alias) was Xcode.

Building perl without the SDK options worked. But I'm still wondering whether I wanted the shared libraries and threads. In fact, looking at perl -V on the system perl and comparing it to the parallel perl, I'm thinking maybe I wanted that multiplicity thing, and I probably did want the large files option, and what is PERL_IMPLICIT_CONTEXT, and, oh, it looks like there are some advisories I need to check. Especially I'm thinking the shared libraries and threads are going to be useful with mod_perl.

thanks more.

sherm--

Web Hosting by West Virginians, for West Virginians: http://wv-www.net
Cocoa programming in Perl: http://camelbones.sourceforge.net
Re: Uh-oh, the C compiler 'cc' doesn't seem to be working.
On 2006/10/28, at 23:08, Tommy Nordgren wrote:

On 28 okt 2006, at 13.00, Joel Rees wrote:

PPC Mac Mini, Mac OS X 10.4.8, I'm following along in README.macosx and I've done this:

export SDK=/Developer/SDKs/MacOSX10.4u.sdk

and I call

./Configure -Accflags=-nostdinc -b$SDK/user/include/gcc -B$SDK/usr/lib/gcc -isystem$SDK/usr/include -F$SDK/System/Library/Frameworks -Aldflags=-W1,-syslibroot,$SDK

as suggested. I take defaults until it checks the compiler. Here's what happens there:

--
Use which C compiler? [cc]

powerpc-apple-darwin8-gcc-4.0.1: '-b' must come at the start of the command line

This message is clearly indicative. You should try putting the -b option first in -Accflags:

-Accflags=-b$SDK/user/include/gcc -nostdinc -B$SDK/usr/lib/gcc -isystem$SDK/usr/include -F$SDK/System/Library/Frameworks

I thought about trying that, but cc --help says the -b option is for the architecture. And I realized that maybe I didn't want to be doing something I'd have to specify the architecture for. Thanks.

Uh-oh, the C compiler 'cc' doesn't seem to be working.

powerpc-apple-darwin8-gcc-4.0.1: '-b' must come at the start of the command line

Uh-oh, the C compiler 'cc' doesn't seem to be working.

You need to find a working C compiler. Either (purchase and) install the C compiler supplied by your OS vendor, or for a free C compiler try http://gcc.gnu.org/ I cannot continue any further, aborting.
--
CPANPLUS?
The perldoc on CPAN suggests CPANPLUS but doesn't describe how to access it or read the documentation. I tried perldoc CPANPLUS and perldoc cpanplus and perl -MCPANPLUS -e shell, but perl says it can't find any such thing. I guess, if we want to use it, we have to install it ourselves; even though the CPAN perldoc seems to strongly urge its use, it is not yet distributed in the source of perl as CPAN is?
Re: CPANPLUS?
On 2006/10/29, at 7:06, Joel Rees wrote: The perldoc on CPAN suggests CPANPLUS but doesn't describe how to access it or read the documentation. I tried perldoc CPANPLUS and perldoc cpanplus and perl -MCPANPLUS -e shell, but perl says it can't find any such thing. I guess, if we want to use it, we have to install it ourselves; even though the CPAN perldoc seems to strongly urge its use, it is not yet distributed in the source of perl as CPAN is? So I StheFriendlyW as I should have done first, and I find http://www.perl.com/pub/a/2002/03/26/cpanplus.html http://search.cpan.org/search?query=cpanplus and http://cpanplus.sourceforge.net/ which either auto-redirects or is aliased (don't want to take time to figure out which) to http://cpanplus.dwim.org/ where it says that cpan++ will become part of the core in perl 5.10 under a different, yet to be determined, name. So, yeah, if I decide to use it, I guess I have to either download and install it by hand or install it from CPAN :-o. (Have I asked about this before and forgotten?) I see there is a Bundle::CPANPLUS on CPAN. (Version 0.01? developer level? or just so straightforward there isn't anything to fix?) Sorry about the noise.
Re: Perl, MySQl and Airport
On Sep 27, 2006, at 3:43 AM, brian d foy wrote: In article [EMAIL PROTECTED], Ray Zimmerman [EMAIL PROTECTED] wrote: On Sep 26, 2006, at 12:45 PM, Joseph Alotta wrote: $host = 'localhost'; ... to connect to the MySQL database. When run from your wife's computer, you'll have to change the 'localhost' to the IP address Just use the zero-conf Bonjour stuff. Find your server's name and append .local to it. Look in the Sharing control panel for the right name. $host = 'albook.local'; Ouch. Okay, looking around the 'net for zero-conf answers the old nagging question of why my machine refers to itself as something.local when I have a valid dns name set for it. It makes me a little queasy, since .local was supposed to be reserved for a slightly different administrative purpose, but, on the other hand, it kind of makes sense. (Thanks, Wikipedia.) (And the flame war echoing in my head from a certain final call in 2005 on LLMNR might keep me from getting my sleep tonight, too. Lousy M$lop.) It's not just Mac, either. You can get stuff for the various other unices and even Windows to do this. http://www.apple.com/macosx/features/bonjour/
Re: Upgrading Perl 5.8.8
On Sep 26, 2006, at 10:20 PM, Ray Zimmerman wrote: On Sep 26, 2006, at 8:34 AM, John Delacour wrote: Apple's installation is in /usr/bin. There is no need either to replace it or to use any fink, darwinport etc. Just install it in /usr/local/bin, which is the default anyway. Read the install file. This is what I've been doing for years. Then I replace /usr/bin/perl with a symlink to /usr/local/bin/perl. This leaves me with a default Perl install whose @INC does not include Apple's libraries, only those in /usr/local/perl-5.8.x. I never noticed any issues with this until recently I had occasion to boot my PowerBook into single-user mode and at one point (I believe it was when shutting down) I saw the following in an error message ... /System/CoreServices/RemoteManagement/ARDAget.app/Contents/Resources/kickstart line 277 Can't locate Foundation.pm Apparently a script related to Apple Remote Desktop Agent. And I did find Foundation.pm at /System/Library/Perl/Extras/5.8.6/darwin-thread-multi-2level/Foundation.pm (which of course is not in my @INC). So my question is ... what is the best way to make sure my new install (in /usr/local/) has everything the OS expects? Can I just install a few extra CPAN modules and make the OS happy, or do other apps install things in the Library/Perl dirs too? What do the rest of you do? For my part, I leave the symbolic link at /usr/bin/perl as it is. If the system perl needs to be upgraded for some reason, Apple's system update can do it. I keep my hands off the system perl. To use the parallel install of perl, I just put /usr/local/bin/perl on the shebang, and/or edit the path in the .bash_profile script so that /usr/local/bin comes first in the path.
Re: GnuPG::Interface module on OS X
On 2006/09/20, at 2:45, Dennis Putnam wrote: Although I don't think this is an OS X specific issue I can't find any place to seek help (there seems to be a GnuPG list but it is defunct or inactive). If someone knows of a better resource please let me know. I have installed GnuPG on a Tiger (10.4.7) server and it seems to be working fine. I then installed GnuPG::Interface in perl and wrote a script that tries to decrypt a file. Everything seems to be working fine and the file gets decrypted. My problem occurs when I try to run the script in background (cron or nohup). I get an error pointing to the line that calls the 'decrypt' method. It says fh is not defined. I don't have a variable by that name so I don't have a clue what it is referring to other than it must be in the decrypt method somewhere. I tried setting $gnupg->options->batch(1); but that did not help. Can someone help me figure out what is wrong? Thanks. I can't answer your question, but have you looked at http://www.gnupg.org/documentation/mailing-lists.html You will see there that it suggests the archives at marc.theaimsgroup.com . (Great guys at 10east. Drop them a fiver or even a C spot or so if you have the chance.) A slow scan through the Information Security section will turn up the gnupg-whatever lists. The archives show the lists are active, and marc is searchable.
Re: Perl Module Installation in $HOME
I have never run the CPAN shell as root I beg to disagree ... and I don't see what problems you're referring to. I just do 'sudo cpan' unless, of course, you actually do it as something like sudo -u myuser cpan
Re: install problems: XML::LibXML::XPathContext
I couldn't find any options in 'man ld' to *remove* a directory from the library search path. I hope I simply missed it - it would seem a rather odd omission otherwise. Well, how exactly do you expect to tell ld which libraries to remove from the search path? (Canonicalization, anyone?) IIRC, you use the order in which you add libraries to the search path to override any potential library references you don't want.
Re: How to get a pid
One way of seeing to it that the grandfather process can terminate grandchild processes is to have the child process catch the signal and, as part of its clean-up code before it dies, kill its own child processes in turn. This means you have to kill the child process with a signal it can catch and recover from.
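The clean-up approach described above can be sketched in plain Perl. Everything here (the process layout, the handler, the use of SIGTERM) is illustrative scaffolding, not code from the thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: the middle (child) process catches SIGTERM and forwards it
# to its own children before exiting, so the grandparent can tear down
# the whole tree by signalling only its direct child. SIGTERM is used
# because it can be caught; SIGKILL cannot.
my $child = fork();
die "fork: $!" unless defined $child;

if ($child == 0) {
    # child: spawn a grandchild, then idle until told to die
    my $gc = fork();
    die "fork: $!" unless defined $gc;
    if ($gc == 0) { sleep 60; exit 0 }    # grandchild just idles

    $SIG{TERM} = sub {
        kill 'TERM', $gc;                 # forward to our own child
        waitpid $gc, 0;
        exit 0;
    };
    sleep 60;
    exit 0;
}

sleep 1;                  # crude: give the child time to install its handler
kill 'TERM', $child;      # grandparent signals only the middle process
waitpid $child, 0;
print "tree torn down, child exit status: ", $? >> 8, "\n";
```

The one-second sleep before signalling is the weak point of the sketch; a real program would use a pipe or other handshake to know the handler is installed.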
Re: How to get a pid
On 2006/07/29, at 4:12, Ted Zeng wrote: ... I have used pid+1 for quite a few days now and it seems to work without any problem. But I still feel it is not the right thing to do. Have you ever heard of a race condition?
Re: Writing utf 8 files
If you open the file as utf-8 you will see ö and if you open it as MacRoman you will see √∂. You could also open it as Traditional Chinese or Simplified Chinese or many other things and see other things. UTF-8 byte order is always the same, so there is no need for a BOM, though some editors might use it as a hint. Given that his editor seems to have interpreted the file as utf-8 with the BOM in place and as something else without the BOM, we might guess that his editor recognizes the BOM. We could also, of course, guess that his login account is set to default to something other than utf-8, which is also in keeping with my experience with Mac OS X when the user has not deliberately messed around with things.
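The byte-level picture behind this is easy to check from Perl. A small sketch (the sample string is mine, not from the thread): UTF-8 encodes o-umlaut as the two bytes C3 B6, and those two bytes read as MacRoman are exactly the √∂ the poster saw; the BOM, U+FEFF, becomes the three bytes EF BB BF in UTF-8.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(encode);

# o-umlaut (U+00F6) with and without a leading BOM, shown as hex bytes.
my $text   = "\x{F6}";
my $plain  = encode('UTF-8', $text);
my $bommed = encode('UTF-8', "\x{FEFF}" . $text);

my $hex = sub { join ' ', map { sprintf '%02X', ord } split //, $_[0] };
print "without BOM: ", $hex->($plain),  "\n";   # C3 B6
print "with BOM:    ", $hex->($bommed), "\n";   # EF BB BF C3 B6
```

In MacRoman, 0xC3 is the radical sign and 0xB6 is the partial-derivative sign, which is why a UTF-8 file misread as MacRoman shows √∂.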
Re: Odd 'head' problem
I don't recall those questions at all, however it is not at all obvious that 'HEAD' is going to replace 'head'. I'm not sure I understand the earlier comment about case insensitive filesystems. Certainly, OS X is not case insensitive at the CLI level, although 'Finder' is. That's complete nonsense, as the simplest test will show: ~ sherm$ echo Hello world > head ~ sherm$ cat HEAD Hello world HFS+ is a case-insensitive file system. Finder has nothing whatsoever to do with it - it's just a user-level file manager. base:~ me$ echo hello Mac OS X mailing list > head base:~ me$ cat HEAD cat: HEAD: No such file or directory base:~ me$ cat head hello Mac OS X mailing list base:~ me$ ;-) Yes, my boot volume is HFS+, and I have not moved the user directories off of it on this machine. Explanation: Case sensitivity is a property of the file system, which is separate from the shells (both CLI and GUI). Current versions of Mac OS X (from at least 10.2) allow you to specify case sensitivity on both UFS and HFS+ volumes when you partition a drive. I always format all my volumes case sensitive, except for the volume I keep classic on. I'm not sure if it's possible to change the case sensitivity when re-formatting existing partitions, and it would take more time than I want to take right now to check. As a side note, I prefer to put /usr/local, /www, and similar stuff on separate volumes formatted UFS as much as possible. Those are also case sensitive, of course. An even further off-topic complaint, it would be nice to be able to make an even finer cut, and put /var/log, /tmp, /var/tmp, etc. on separate partitions as damage-limiting measures, but, one, I run out of partitions when I do things like dual-boot openbsd, and, two, I won't trust Mac OS X's handling of hard or soft links to that level until /etc/fstab is respected before autodiskmount or whatever it's called kicks in.
I've got swap on a separate partition on one of my old machines with limited hard disk space, and it definitely speeds that old machine up, but I don't recommend playing those tricks on a machine that I want to load arbitrary applications on. (Any Apple people reading the list, please note that there are good reasons for allowing /etc/fstab to do its job.)
Re: Waiting until Acrobat closes file
Will it work to: a) Wait until Acrobat Reader is running b) Sleep long enough ... Instead of depending on timing, which will always get messed up when you can least afford it, get a list of the open files from the system. man lsof There should be better ways to get the file handle, but you should at least be able to look through the list of open files given by lsof for an instance of your file opened by the process whose id is the acrobat reader you asked to open it.
Re: Spreadsheet::WriteExcel and ODBC
Up front caveat, I don't have a lot of experience with this kind of thing, but, ... On 2006.6.8, at 02:36 AM, Mike Schienle wrote: Celeste Suliin Burris wrote: I'm a bit confused as to why you need to use ODBC. I just connect to the remote MySQL server via the DBI when I'm using Perl. I have my Linux machine running MySQL 5.0. Hi Celeste - I'm working on a program that will update a database throughout the day and provide the customer with a spreadsheet of the results. I'd rather have the customer be able to open the spreadsheet any time and get the latest data rather than having to create a new spreadsheet each time the data updates. I should be able to accomplish that by putting the ODBC connections directly in the spreadsheet. Why? Would it make more sense, perchance, to use a web browser as your front-end instead of MSOffice? I have no problem doing the DBI connection from Perl, which is what I've been doing for quite some time. What I'm after is a way to do the connection from an Excel spreadsheet that has been written by WriteExcel. How often does this spreadsheet need to be (mechanically, I assume) rebuilt? (That's the only reason I can think of for building such a spreadsheet from a perl script.) The program is for multiple customers, so I want/need to be able to write a spreadsheet that is for the particular customer. I might be able to do this as an Excel template, also. That would be Plan B or C. Oh. That's another reason, I suppose. My guess is you're going to be flying by the seat of your pants on this project with no radio. I think I'd try to sell the customers on pushing the interface to MSOffice back a ways, doing the tables in HTML on a local-access-only server, and only dumping the relatively static results to Excel at the stage where things go to the archive. (Nursing the customers off of MSOffice as an archive format is also something I'd recommend, but one thing at a time.) 
From: Mike Schienle [EMAIL PROTECTED] Organization: Custom Visuals, LLC Date: Wed, 07 Jun 2006 16:51:36 +0100 To: macosx@perl.org Subject: Spreadsheet::WriteExcel and ODBC Hi all - I need to write some Excel files with ODBC access to a MySQL database. I've used Spreadsheet::WriteExcel in the past for writing formulas, formatting, etc., with no problems, but the ODBC connection is new to me. I tried it manually (just opening a blank spreadsheet and connecting to a remote MySQL server) last night using Actual's ODBC client demo and that worked fine. The WriteExcel docs are a bit vague [to me] on connecting to an ODBC source, though. Has anyone done this via Spreadsheet::WriteExcel? Do you mind passing along a couple hints? Would this require any customer/client to have an ODBC client on their system for this to work? If so, I may just go with static data and have the customer pick up a new spreadsheet each time. Also, as mentioned above I'm using Actual's ODBC client. Is there a better client out there? Or an Open Source one that's competitive? Thanks. -- Mike Schienle
Re: file creator id, etc
I am trying to read a CSV data file of names and addresses into Now Contact. However the import feature does not see this file as it is ghosted. My conclusion is that it is looking at the file creator information. How do I see this information? Apple-i, Get Info, does not show this. Not a perl topic, but isn't there a Finder setting that determines whether Get Info allows access to this or not?
Re: Waiting until Acrobat closes file
if instead you're doing something like ... system('open', '/Applications/Acrobat.app'); then you'll need to: wait around until Acrobat appears in the process table; wait around until that PID disappears; Really?? In my experience, the `open` command immediately returns control to the controlling process (the shell, or whatever else invoked it (pine etc)) without waiting for the `open`ed application to finish, or for that matter even to finish launching. If you're going to use acroread, then [a] you have to install it, and [b] you have to view the document in X11. Yuck. Surely that isn't really the best way to approach this, is it? I'd have thought that the `open` command was the perfect answer to this question... system('open', '/Applications/Preview.app'); Well, I think the original problem was waiting until the viewer closed the document to delete the file. So, whichever viewer you use (X11 acrobat reader, Mac OS X acrobat reader, preview app, something else) you're still stuck with the problem of how to avoid deleting a file out from under the viewer. Except, of course, this is Unix, and we have i-nodes, and the system knows how to hold onto a file until all links are gone, as I recall. Ergo, just delete it once you know the viewer has it open, IIRC.
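That i-node behavior is easy to demonstrate in a few lines of core Perl; the temp file here is my own scaffolding, not anything from the thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Open a file, unlink its path, and show the open handle still reads
# the data: the kernel frees the blocks only when the last handle closes.
my ($out, $path) = tempfile(UNLINK => 0);
print $out "still readable after unlink\n";
close $out or die "close: $!";

open my $in, '<', $path or die "open: $!";
unlink $path or die "unlink: $!";               # the name is gone...
print "path exists? ", (-e $path ? "yes" : "no"), "\n";
print "read: ", scalar <$in>;                   # ...but the data is not
close $in;
```

This is exactly why deleting a PDF after the viewer has opened it is safe on Unix filesystems, though a viewer that re-opens the file by path (rather than keeping its handle) would still lose it.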
Re: RAM disk options for perl cron job
On 2006.5.13, at 01:20 PM, Joseph Alotta wrote: I have my personal web site on my old clamshell iBook, and it runs a dynamic DNS client every ten minutes via cron. That basically keeps the disk spinning constantly. Burned out a drive last year, and I'm worried it will burn out a drive this year. So I'm thinking of putting the client on a RAM disk, although, since I wrote the client in perl, I suspect that I'd then have to copy perl itself to the RAM disk as well. RAM disks are so cheap now. I saw a 64MB USB on google for $8.97. That's a flash RAM device, not a RAM disk. Different thing. Hi Chris, Why wouldn't it work to put the client code and perl on the USB keydrive and then every ten minutes, your system will get it from there instead of from your hard drive? I realize the USB keydrive is slower to load, but does that matter here? Definitely a thought. I only write the log and the file that keeps the actual IP, when the IP actually gets updated, so that wouldn't mess with anything. Unfortunately, this is one of the models that only has one USB port and no firewire, so the USB port is also where I hang the drive I back up to. If I'm going to buy a USB hub for this rig, I think I'd rather splurge and pick up a full complement of RAM. Powering the USB hub would add another wall wart, three more physical points of failure. Japanese apartments at rent I can afford do not offer much space that is protected, so I'm not just being paranoid. On the other hand, personal web servers can go off line for long enough to slip a hub in and then off line again to slip the hub out when the backup is done, without the world coming to a stop. Definitely a thought.
Re: Storable problem on Intel Mac Mini
On 2006.5.12, at 08:54 PM, Mike Schienle wrote: On Fri, May 12, 2006 7:40 am, Mike Schienle wrote: On Fri, May 12, 2006 7:05 am, Joel Rees wrote: On 2006.5.12, at 10:01 AM, Mike Schienle wrote: Hi all - I just installed an Intel Mac Mini as a replacement for a dual 1.8 GHz G5 at my colocation place a couple days ago. Can I ask a silly question in public, or would off-list be more appropriate? I thought that would raise an eyebrow :-) Heh. Some trolls, at any rate, are not evil. The G5 began having stability problems. It would stop authenticating users (email, ftp, logins, etc. would fail, but web server would continue) after anywhere from 2 to 24 hours. My instant reaction to that would have been putting a stripped-down whitebox running OpenBSD as a logging firewall between the G5 and the 'net, to check for attacks on the mail and ftp subsystems. Attempts to install a trojan for the wrong processor might have been causing DOS on the attacked services? (I should check whether OpenBSD has drivers for IP over firewire or for the USB to ethernet converters. If so, a Mini might do as well as a whitebox with two NICs.) I have two internal disks, 80 GB for OS and 250 GB for DB and web sites, and two external disks that are essentially backups for the internals (all 7200 RPM). I ran disk checks on all of them. I cloned internals to externals one at a time and had the exact same issues. After many attempts to hunt down the root cause (from launchd to securityd and a few other areas), it was time for stabs in the dark. I replaced the OS on the external, which was an upgrade from 10.3.x to a fresh install of 10.4. That actually made things worse, it went from failing to authenticate to complete lockups within a few minutes, so back to the original setup thanks to being able to shuffle things around with the spare disks. Could be an unrelated problem? 
The next attempt was to start swapping/rotating RAM modules (I had recently gone from 1.25 GB to 4 GB), to see if one was flaky. No change. The current guess is a problem with the power supply. At this point I just needed to put something stable in place and fix the problem behind the scenes. As soon as that happens I expect to put the G5 back. And here I was thinking that someone in your organization had requested the G5 for the art department. ;:-/ Another swag might be the CPU or the heat sink? I'm definitely happy with the Intel (dual core) Mac Mini so far. Database access is about 15% slower for a couple long queries (20+ minutes), which I'm assuming is because they went from a FW800 attachment to FW400, though it might be because of the internal Mac Mini disk being a 5400 RPM laptop drive. The mysql executable is on the internal drive, but the data lives on the external drive. Yeah, the notebook grade disk will slow the thing down, about 15% would not be unexpected. RAM reduction will tend to have more drastic effects when it has effects. I'm not familiar enough with FW400 to hazard a guess, although I have a gut feeling the difference is not that big unless there's constant bulk (100MB) raw data motion. 20 minutes is a long query, indeed. I'd be tempted, once you have the G5 back up, to keep the Mini on-line and run the DB on the one and the web server on the other. Actually, the data was on the internal 250 GB disk on the G5, rather than the external FW800 drive. Now it's on the external 250 GB disk attached via FW400. The query uses about 30 fields from 8 joined tables to pull out 200 KB from 19 GB. Mike Schienle One reason I was interested, if you were planning to keep the Mini on line, I'd be interested in what strategies you have for wear and tear, my experience being that notebook-grade disks running full-time tend to burn out after about a year. 
I have my personal web site on my old clamshell iBook, and it runs a dynamic DNS client every ten minutes via cron. That basically keeps the disk spinning constantly. Burned out a drive last year, and I'm worried it will burn out a drive this year. So I'm thinking of putting the client on a RAM disk, although, since I wrote the client in perl, I suspect that I'd then have to copy perl itself to the RAM disk as well. Another thought would be, since I really don't need it to run exactly at a specific time, to simply keep the client process running in a sleep loop.
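The sleep-loop idea mentioned above can be sketched like this. The `update_dns` sub and the bounded loop are placeholders of mine so the sketch terminates; a real client would call its actual update code in a `while (1)` loop with a 600-second sleep:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of a long-running updater: one process, woken periodically,
# instead of a cron entry that spins up the disk every ten minutes.
use constant INTERVAL => 1;    # a real client would use 600 (ten minutes)

sub update_dns {
    # placeholder for the actual dynamic-DNS update
    print "update at ", time(), "\n";
}

# bounded here so the sketch terminates; a daemon would loop forever
for my $run (1 .. 3) {
    update_dns();
    sleep INTERVAL if $run < 3;
}
print "done\n";
```

The win over cron is that nothing has to be loaded from disk on each wakeup; perl and the script stay resident, so the drive can spin down between updates.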
Re: PerlTK's -clipboardAppend in OSX...?
On 2006.5.6, at 07:07 AM, Michael wrote: There are lots of solutions here, but none of them are built into TK, nor can they be. If you want your program to interface with an environment other than the one TK is running in, it's up to you to make the links. I believe you are asserting here that PerlTK can not (or should not) concern itself with individual platform issues to which it is ported. I respectfully do not agree. So, do you have some code to start the subproject with? Are you volunteering to lead it? Or maybe to administer the mail list if someone else leads the project? All that aside, the current status is as Jay described, and if you are in a hurry to get something running, CamelBones is available now. Tcl/Tk is also available in a package tuned for Aqua, but I do not know how well the paste buffers are integrated there. But that's also for another mail list. [...]
Re: OT: WebDav won't allow put...
What this has to do with perl on Mac OS X, I'm not sure, but I decided I want to muck around with webdav on my Mac 10.4.6 client Apache 1.3x stock install. I enabled the loading of the mod_dav module in the httpd.conf and added: DAVLockDB /Library/WebServer/DAVlock <Directory /Library/WebServer/Documents/webdav> DAV On AuthType basic AuthName WEBDAV AuthUserFile /etc/httpd/passwd Require valid-user </Directory> The lock file has rw perms for www/www I can authenticate and access the webdav page and from my linux box do ls and gets but the PUT operation always fails. If I do a connect to server locally with the finder Connect to server... any attempt to drag a file to the webdav window fails with a bizarre message about file name possibly too long or invalid characters. Any PUT operation leaves the following in error_log [Thu Apr 13 23:34:46 2006] [notice] Apache/1.3.33 (Darwin) DAV/1.0.3 configured -- resuming normal operations [Thu Apr 13 23:34:46 2006] [notice] Accept mutex: flock (Default: flock) [Thu Apr 13 23:37:20 2006] [error] [client 192.168.1.70] The locks could not be queried for verification against a possible If: header. [500, #0] [Thu Apr 13 23:37:20 2006] [error] [client 192.168.1.70] Could not open the lock database. [500, #400] [Thu Apr 13 23:37:20 2006] [error] [client 192.168.1.70] (13)Permission denied: I/O error occurred. [500, #1] Judging from the lack of info out there I am either the first person to try this or the most unlucky... Nobody really seems to care in our world. SMB or AppleTalk over IP does well enough, I think. Has anyone been successful in getting the webdav stuff to work with the stock apache? I have a vague memory of getting it to work on my last job, but I don't remember how. I do remember playing lots of strange games with the parameters until I found something that clicked. Didn't end up using it, so when I wiped the box I didn't think to record the configurations. Sorry.
Re: When does a hash of lists get defined?
'defined' will autovivify, 'exists' will not. I'll leave it up to Doug to decide if knowing that helps. The Camel book, page 710 in the third edition, is very clear that exists goes the same way as defined. But perl has gone through a couple of new versions since it was written. Thanks to Stewart who did get me on the right track by mentioning exists. That's where I found autovivification mentioned in the Bible. It also says there: This behavior is likely to be fixed in a future release. In 5.8.1-RC3 it hasn't changed. I wonder if the comment refers to perl6? Do those fellows know about it? I'm going to shoot myself in the foot before I check the docs, but I look at the look-ahead and look-behind perl has to take with if ( funxyz( $a_hash{ D } ) ) and if ( funxyz( $a_hash_ref{ D }[ $x ] ) ) and I think I don't want perl trying to differentiate what exists() and defined() do in any but the most simple case. I think the English semantics of defined vs. exists is plenty to deal with, anyway. The solution for my problem was to test just the hash element without looking at the underlying list item. Exists and defined both work that way without behaving like the Creator. Checking for the list item also works if the vivification has occurred. if ( defined( $a_hash_ref ) && defined( $a_hash_ref{ D } ) && defined( $a_hash_ref{ D }[ $x ] ) ) although we usually know which of those tests we can leave out if we stop to think.
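The behavior under discussion is quick to demonstrate with core Perl. The hash names here are mine: probing one level down autovivifies the intermediate entry, while a plain exists() on the top level does not.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Testing a nested element springs the intermediate entry into being:
# $phash{D}[3] implicitly means $phash{D}->[3], so perl creates an
# empty array ref in $phash{D} just to finish evaluating the test.
my %phash;
my $probe = defined $phash{D}[3];
print exists $phash{D}
    ? "probing D[3] autovivified \$phash{D}\n"
    : "\$phash{D} still does not exist\n";

# A top-level exists() has nothing to dereference, so nothing is created.
my %qhash;
my $top = exists $qhash{D};
print exists $qhash{D}
    ? "exists() autovivified \$qhash{D}\n"
    : "exists() alone did not autovivify \$qhash{D}\n";
```

This matches the advice in the thread: test the hash element itself first, and only then look at the underlying list item.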
Re: problems with intel architecture
Are there OS functions that rely on perl? What sorts of things? Yes. Not many, though. You can see what's there if you type $ locate *.pl in a terminal window. That will only show the files ending in .pl. Scripts use the #! line to determine the interpreter to run them with, not the filename extension. I was thinking, let's write a script to check the first lines. But I'm lazy. file /*bin/* | grep perl file /usr/*bin/* | grep perl gets everything in the usual places for system executables. Of course it misses utility scripts in odd places, including all those found by the locate *.pl command. Since I'm a little weak with one-liners and with File::Find, I should try to work up a one-liner that would do a recursive descent, and log out and back in as a user that can sudo so I can descend all the places my working user can't. But I'm lazy. ;-/
Re: problems with intel architecture
Not a one-liner and not even pretty, but since I needed the practice: - #! /usr/bin/perl use File::Find; @l = ( "/" ); sub w { if ( -d $_ ) { my $dir = $File::Find::dir; if ( system( "file * | grep perl" ) == 0 ) { print "*** from: $dir ***\n"; } } } find( \&w, @l ); - Overkill. Gets a lot of can't cd errors, of course, since I'm not running it as root, and it takes a while to run, too. Finds a bit more than scripts, too, so it really doesn't serve the original question. Heh. A bunch of people wrote Are there OS functions that rely on perl? What sorts of things? Yes. Not many, though. You can see what's there if you type $ locate *.pl in a terminal window. That will only show the files ending in .pl. Scripts use the #! line to determine the interpreter to run them with, not the filename extension. I was thinking, let's write a script to check the first lines. But I'm lazy. file /*bin/* | grep perl file /usr/*bin/* | grep perl gets everything in the usual places for system executables. Of course it misses utility scripts in odd places, including all those found by the locate *.pl command. Since I'm a little weak with one-liners and with File::Find, I should try to work up a one-liner that would do a recursive descent, and log out and back in as a user that can sudo so I can descend all the places my working user can't. But I'm lazy. ;-/
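An alternative sketch that avoids shelling out to `file` in every directory: read each file's first line and look for a perl shebang. The demo directory and file names below are invented so the example is self-contained; point @ARGV-style roots at / or /usr/bin for the real hunt.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Build a tiny demo tree: one perl script, one plain text file.
my $dir = tempdir(CLEANUP => 1);
open my $s, '>', "$dir/hello" or die "open: $!";
print $s "#!/usr/bin/perl\nprint \"hi\\n\";\n";
close $s;
open my $t, '>', "$dir/notes.txt" or die "open: $!";
print $t "just text\n";
close $t;

# Collect files whose first line is a perl shebang. find() chdir()s
# into each directory, so $_ is a bare file name we can open directly.
my @hits;
find(sub {
    return unless -f $_ && -r _;
    open my $fh, '<', $_ or return;
    my $first = <$fh>;
    close $fh;
    push @hits, $File::Find::name
        if defined $first && $first =~ /^#!.*\bperl\b/;
}, $dir);

print scalar(@hits), " perl script(s) found\n";
```

Reading only the first line is much cheaper than `file`, and it answers the actual question (which scripts would run under perl) rather than everything `file` happens to label perl-related.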
Re: When does a hash of lists get defined?
On 2006.4.5, at 09:36 AM, Doug McNutt wrote: While messing with CGI POSTed data I got trapped by this one. Version 5.8.1-RC3 for Mac OS 10.3.9 It appears that the hash element D gets defined in the process of testing to see if an element in the associated string is defined. The last if below takes the else route. Is that normal? Does it somehow make sense? %phash = (); foreach $jill ("A", "B", "C") { for ($lynn = 0; $lynn < 3; $lynn++) { $phash{$jill}[$lynn] = "$jill$lynn"; print "\$phash{$jill}[$lynn] = $phash{$jill}[$lynn]\n"; } } if (! defined $phash{D}) { print "\$phash{D} is undefined, We expected that.\n"; } I should run the code before hazarding guesses, but I'm guessing you find it is not defined above, if (! defined $phash{D}[3]) { print "\$phash{D}[3] is undefined. We expected that too.\n"; } ... but defined after this one is done, So that the above two would show is undefined, but the next would show got defined. The reason I would guess this is that perl does automagically define things in a lot of cases where other languages (like java) would throw fits, I mean, throw exceptions. Perl is designed from the point of view that the programmer thought he knew what he was doing, which, from a purely mathematical point of view, is dangerous, but in the real world is often the desired path. Leaving the deep philosophy aside, if $phash{D} is not defined, there are three or four ways to parse $phash{D}[3]. The java way would be (I think) to throw the conniption, I mean, exception: Can't access an array off a null pointer. Another way might be to short-circuit the test: if $phash{D} is not defined, no way can anything referenced off it be defined. But that path requires intelligence which we have not yet been able to give programming languages and expect them to completely parse any program in anything approaching deterministic time. {Oh. Wait. One more path I gotta check before I call this either valid or invalid.
Oh, wait, ...} So perl simply defines the thing so that the rest of the test can complete, as I understand it. if (! defined $phash{D}) { print "\$phash{D} is undefined\n"; } else { print "\$phash{D} got defined - why?\n"; } __END__ -- Applescript syntax is like English spelling: Roughly, but not thoroughly, thought through. Yeah, and Applescript does a bit more than perl on the trying to read the programmer's mind, but it was a really bizarre programmer's mind it was trained, I mean programmed, to read. 8-*
File:Find-ing perl stuff on a mac (Re: problems with intel architecture)
Hmm. On 2006.4.5, at 08:48 AM, Joel Rees wrote: Not a one-liner and not even pretty, but since I needed the practice: - #! /usr/bin/perl use File::Find; @l = ( "/" ); sub w { if ( -d $_ ) { my $dir = $File::Find::dir; if ( system( "file * | grep perl" ) == 0 ) { print "*** from: $dir ***\n"; } } } find( \&w, @l ); - After about two hours of running (on an old clamshell iBook) and dredging up some really interesting stuff, it seems to have completed without descending into /usr/bin. Anyone have any ideas why?
Re: problems with intel architecture
On 2006.4.4, at 07:37 AM, Cheryl Chase wrote: On Apr 2, 2006, at 3:32 PM, Edward Moy wrote: A native intel program can't load a ppc binary (like Expat.bundle). Similarly, a ppc program running in Rosetta can't load an intel binary. In the native or Rosetta environments, there can be no mixing of binaries. You should probably move aside (or remove) the stuff in /Library/Perl/5.8.6 (leaving the AppendToPath file), or at least the ones that have .bundle files. Then you'll have to reinstall those CPAN modules. Thanks Ed. That helped get me going. For others who may be investigating the same problem, let me add that CPAN was not totally successful at rebuilding some modules, for reasons I don't completely understand, but which probably included needing some stuff like LWP and HTTP in order to fetch things. But I was able to download them manually from http://www.cpan.org, move them to /var/root/.cpan/build, and manually build and install them. Even though it's not as necessary as it was when the system perl was at v5.6 and we all wanted the Unicode stuff in v5.8, I'm still inclined to build a separate install of perl for application use. That way I don't have to worry as much about fine-tuning what gets installed, and I find it's easier to get cpan to behave, as well. (Not that you can ever ignore what gets installed, but it's easier to protect the server if you keep the OS clean.)
Re: openning file...
On 2006.4.2, at 10:34 PM, kurtz le pirate wrote: In article [EMAIL PROTECTED], [EMAIL PROTECTED] (Sherm Pendley) wrote: The wanted function only gets the file name of the file, which is not enough to open the file with, if it's in a subdirectory. Try calling open() with the full path to the file, not just the file name alone. Sorry, but you're wrong. The File::Find doc says: ...you are chdir()'d to $File::Find::dir when the function is called, unless no_chdir was specified. thanks -- klp mind-boggling
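To make the point above concrete, here is a small sketch (my own, not from the thread) showing that inside wanted() the bare basename in $_ is openable precisely because File::Find has chdir()'d into $File::Find::dir, while $File::Find::name carries the full path you would need if no_chdir were set:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Temp qw(tempdir);

# Build a throwaway tree: $root/sub/note.txt
my $root = tempdir( CLEANUP => 1 );
mkdir "$root/sub" or die $!;
open my $out, '>', "$root/sub/note.txt" or die $!;
print $out "hello\n";
close $out;

my (@basenames, @fullpaths);
find( sub {
    return unless -f $_;
    push @basenames, $_;                 # just the basename: we are chdir'd here
    push @fullpaths, $File::Find::name;  # the full path
    open my $in, '<', $_ or die "open by basename failed: $!";  # works
    close $in;
}, $root );

print "basename: @basenames\n";
print "fullpath: @fullpaths\n";
```

With no_chdir => 1 in the find() options, the open-by-$_ line would fail and you would open $File::Find::name instead, which is what Sherm's advice amounts to.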
Re: Problems installing PDL
On 2006.1.12, at 06:57 AM, John Delacour wrote: (for Joel Rees' instruction ... is an ellipsis ) So, you have perl interpreting your httpd.conf, or you elided the actual path in the sections you posted?
Re: Perl web server
On 2006.1.11, at 10:30 AM, John Delacour wrote:

# "/usr/local/apache2/cgi-bin" should be changed to whatever your ScriptAliased
# CGI directory exists, if you have that configured.
#
# <Directory "/usr/local/apache2/cgi-bin">
<Directory "/users/jd/sites/cgi-bin">

This one I think I understand. (I assume your file system is not set up case sensitive?) Except

AllowOverride None
Options None

This

Order allow,deny
Allow from all

and this look kind of weird together, in reference to the directory name. Am I missing something?

</Directory>
<Directory "/users/jd/sites/.../.../.../cgi-bin">

That path looks kind of unusual. I don't often see directories named "...". Is it something Apache does that I've either never seen before or forgotten?

AllowOverride None
Options None ExecCGI
Order allow,deny
Allow from all
</Directory>
Re: @INC
On 2006.1.6, at 05:41 AM, The Ghost wrote: The other perl was installed by darwinports. It doesn't ask questions. If it did, I wouldn't install it. I'm not concerned about incompatibilities as the perl versions are so close. I could reconfigure darwinports, but I don't want to. So this isn't a solution to my issue. I suggest you configure your Darwinports anyway. Thanks though. Ryan On Jan 5, 2006, at 2:01 PM, John Delacour wrote: At 12:15 pm -0600 5/1/06, The Ghost wrote: ...I have 2 versions of perl installed and only use one of them. The reason for 2 versions is a port system that refuses to rely on the already installed perl. So I have: /System/Library/Perl/5.8.6/darwin-thread-multi-2level AND /opt/local/lib/perl5/5.8.7/darwin-2level /opt/local/lib/perl5/5.8.7 ... If you download 5.8.7 and let it install itself in the default location without bothering even to look at the difficult questions, I guess you will solve all your problems and end up with this:

@INC:
/usr/local/lib/perl5/5.8.7/darwin-2level
/usr/local/lib/perl5/5.8.7
/usr/local/lib/perl5/site_perl/5.8.7/darwin-2level
/usr/local/lib/perl5/site_perl/5.8.7
/usr/local/lib/perl5/site_perl/5.8.6/darwin-2level
/usr/local/lib/perl5/site_perl/5.8.6
/usr/local/lib/perl5/site_perl

Whatever you have in /opt or whatever other non-standard directory can then probably be safely consigned to the trash. JD
Re: CPAN modules not included with OS X
On 2005.12.31, at 02:01 AM, Joseph Alotta wrote: On Dec 29, 2005, at 8:06 PM, Joel Rees wrote: Maybe it would help to tell you it ain't that simple? To mention openssl again, it can be installed in a variety of places, and it depends in part on where other things you may have installed might have wanted to put the packages they depend on. That's another reason for using a sandbox. (Using the separate perl also helps me avoid building a sandbox for my personal server, where I don't have resources for doing things the ideal way.) Hi Joel, What's a sandbox? A place where you can play with impunity. grin / If you start with the live chicken (to recklessly mix metaphors), you replicate your server machine/cluster on the internal net, where only the dev, test, evaluation, etc., crew can access it. This is the sandbox. You fix bugs and add functionality on the sandbox, then when you've tested the sandbox sufficiently, you take a backup of the sandbox for both archival and to use as the base of the next version, and mirror the sandbox back to the live server/cluster. If you start with the egg, you set up the sandbox before you set up anything live on the external network, and the initial server/cluster is basically built by taking a backup of the sandbox and restoring the backup to the hardware that will be the live system. One of the things that having the sandbox helps with is that you can take a diff of the sandbox and the current base system and use the diff to figure out what doesn't need to be copied when mirroring back to the live system. (The diff also helps with security analysis of the new version.) Using jails, virtual servers, separate installs of perl and other components, careful partitioning and the like, you can often put the sandbox on workstations, but only if you are willing to trust the employees whose workstations you use.
Re: CPAN modules not included with OS X
Get used to CPAN. You aren't going to find a vendor that provides a full CPAN install -- new ones appear daily, so keeping up is impossible anyway. Hm. I really do not want to install the Dev Tools on my Mac OS X Server boxes. Why not? I'm not suggesting you install the dev tools, but if your goal is to reduce the profile available to cracking, you should not be wanting a full CPAN install anyway. A full CPAN install would be in many ways like having Dev Tools installed, and in fact would not be very meaningful without the Dev Tools. I have been getting around this by installing the files on a client machine and copying them to the servers, but I don't believe this is ideal. That actually is the ideal, after a manner of speaking. Or it could be. You need a backup and you need a sandbox to test things you want to change before you change them. The installed server should be a mirror of the sandbox, except for the databases. The sandbox can be kept on a hard disk that is kept off-line during normal operations, freeing the machine that actually runs the sandbox to be used as a normal administrator's dev box. Does anyone know what problems I could be causing? Only your hairdresser knows for sure. ;-)
Re: CPAN modules ...
[CP_ERROR] [Mon Dec 26 14:07:55 2005] Fetching of 'ftp://ftp.cpan.org/pub/CPAN/authors/id/G/GA/GAAS/CHECKSUMS' failed: Command failed: [...] This hand installation usually works, but it would be very convenient if I could make CPANPLUS or CPAN work. Any suggestions? Choose a less busy mirror?
Re: CPAN modules not included with OS X
On 2005.12.30, at 10:03 AM, James Reynolds wrote: Grumble. That is exactly what I wanted to know! Thanks! Does CPAN install C libraries to /usr/local/lib or somewhere else? Maybe it would help to tell you it ain't that simple? To mention openssl again, it can be installed in a variety of places, and it depends in part on where other things you may have installed might have wanted to put the packages they depend on. That's another reason for using a sandbox. It reduces the number of places you have to look for things to copy, and, more importantly, reduces the necessity of trying to determine what not to copy. For what it's worth, I don't use the perl interpreter installed by Apple to do my server stuff with. I probably could if I wanted to learn an awful lot about how it's set up, but I find it easier to leave the system alone and install a separate perl for the server, use the #! line to point to the one to use, and set the environment variables appropriately in the users I do my dev work under so the shell I'm using finds the right perldoc for my login user. (Using the separate perl also helps me avoid building a sandbox for my personal server, where I don't have resources for doing things the ideal way.)
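When two perls are installed side by side like this, a quick sanity check that the #! line and PATH are picking the one you meant is to ask the running interpreter to identify itself. A small sketch of my own, not from the thread:

```perl
#!/usr/bin/perl
# Report which interpreter is actually executing this script and what
# library path it got -- useful when a system perl and a /usr/local (or
# /opt) perl coexist and the #! line or shell PATH decides which runs.
use strict;
use warnings;
use Config;

print "running:  $^X\n";               # path of the executing interpreter
print "version:  $Config{version}\n";
print "arch:     $Config{archname}\n";
print "INC:\n";
print "  $_\n" for @INC;
```

Running it via each #! line (or via `/usr/bin/perl` and `/usr/local/bin/perl` explicitly) makes any @INC mismatch obvious at a glance.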
Re: incantation for uploading shift-jis files
Do you know of a way to tell perl, or, rather, the CGI module to open the file handle as shift-JIS?

open F, '<:encoding(shift_jis)', $f;

Okay, okay, I see I have shown my laziness. However, I have just dug into chapter 8 of the Cookbook (Second edition) and found a couple of relevant bits, and

use open IO => ':raw :encoding(Shift_JIS)';

at the top of the script, before the use encoding( 'Shift_JIS' );, and

binmode( $fh, ':raw :encoding(Shift_JIS)' );

immediately after grabbing the file handle with

$fh = $query->upload( 'file-to-send' );

do not help. They do seem to alter the error messages in apache's error_log. So, here's the error messages, and the output where I should be getting

空欄を埋めたり、完全な文書で質問に答えたり、一番適切に思う解答を〇で記したりする。

Error messages with neither of the above attempts to push the io layers around:

---errors--
Wide character in print at (eval 10) line 85.
\x{fffd} does not map to shiftjis at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 281, <fh1Sample_questions.text> line 38.
- with the second line repeated six times.

---output--
遨コ谺\x{fffd}繧貞沂繧√◆繧翫\x{fffd}∝ョ悟\x{fffd}ィ縺ェ譁\x{fffd}譖ク縺ァ雉ェ蝠上↓遲斐∴縺溘j縲∽ク \x{fffd}逡ェ驕ゥ蛻\x{fffd}縺ォ諤昴≧隗」遲斐r縲\x{fffd}縺ァ險倥@縺溘j縺吶k縲
-

With the use open IO => ':raw :encoding(Shift_JIS)'; uncommented, the result:

---errors--
Wide character in print at (eval 11) line 85.
\x{fffd} does not map to shiftjis at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 281, <fh1Sample_questions.text> line 38.
- again, with the second line repeated six times.

---output--
遨コ谺\x{fffd}繧貞沂繧√◆繧翫\x{fffd}∝ョ悟\x{fffd}ィ縺ェ譁\x{fffd}譖ク縺ァ雉ェ蝠上↓遲斐∴縺溘j縲∽ク \x{fffd}逡ェ驕ゥ蛻\x{fffd}縺ォ諤昴≧隗」遲斐r縲\x{fffd}縺ァ險倥@縺溘j縺吶k縲
-

Almost exactly the same. This is also the results when I use binmode raw on the file handle for the upload. With the use open IO ... commented out and the binmode( $fh, ':raw :encoding(Shift_JIS)' ); uncommented, the results are

---errors--
Wide character in print at (eval 10) line 85.
shiftjis \x84 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x80 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x85 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x87 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x80 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x87 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x87 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x82 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. - ---output-- 遨コ谺\x84繧貞沂繧√◆繧翫\x80∝ョ悟\x85ィ縺ェ譁\x87譖ク縺ァ雉ェ蝠上↓遲斐∴縺溘j縲∽ク\x80逡ェ驕ゥ蛻\x87縺ォ諤昴≧ 隗」遲斐r縲\x87縺ァ險倥@縺溘j縺吶k縲\x82 - And, with both uncommented, the results are ---errors-- Wide character in print at (eval 11) line 85. shiftjis \x84 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x80 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x85 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x87 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x80 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x87 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x87 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. shiftjis \x82 does not map to Unicode at /Volumes/web/userweb/joel/cgi/cs/ranbunhyou/withfile2 line 212. 
- ---output-- 遨コ谺\x84繧貞沂繧√◆繧翫\x80∝ョ悟\x85ィ縺ェ譁\x87譖ク縺ァ雉ェ蝠上↓遲斐∴縺溘j縲∽ク\x80逡ェ驕ゥ蛻\x87縺ォ諤昴≧ 隗」遲斐r縲\x87縺ァ險倥@縺溘j縺吶k縲\x82 - Looks like I'll need to get a hexdump from perl to get a better handle on this, or perhaps try the script on a Linux box, but I've got to get some sleep tonight. Taking the kids to the doc and to piano practice didn't leave much time to work on it today. I'm suspecting the problem lies in the incomplete implementation of locales in Jaguar, or perhaps in my choice of locales when I compiled perl on this box.
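One way to take the IO-layer machinery out of the picture while debugging this is to read the raw bytes and decode them explicitly with Encode, rather than stacking :raw/:encoding layers on a handle that may already have layers applied. A minimal sketch, mine and not from the thread; the Shift_JIS bytes for the string 日本 are spelled out so the demo needs no external file or upload:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode encode);

# Shift_JIS bytes for "日本": 0x93 0xFA (日), 0x96 0x7B (本).
my $sjis_bytes = "\x93\xfa\x96\x7b";

# In-memory handle standing in for CGI's upload filehandle.
open my $fh, '<', \$sjis_bytes or die $!;
binmode $fh;                         # be sure no text layers interfere
local $/;                            # slurp
my $raw = <$fh>;
close $fh;

# Decode the bytes by hand: $text is now a 2-character Perl string.
my $text = decode( 'Shift_JIS', $raw );
print length($text), " characters\n";        # 2, not 4
print encode( 'UTF-8', $text ), "\n";        # re-encode for a UTF-8 terminal
```

If decode() complains here with real upload data, the bytes genuinely are not Shift_JIS by the time the script sees them, which would point the finger at the browser or the multipart handling rather than the IO layers.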
Re: incantation for uploading shift-jis files
The script below reduces the problem to its simplest. Notice the deadly caveats. In my experience (and I have war stories too) the harder one tries with Perl/Unicode the worse the mess you get into. You can probably forget about locale -- try “use encoding (:locale)” in the script below and see what you get! -- and lots of other things. It's certainly a jungle, and it's growing, but it's getting tidier.

#!/usr/bin/perl
#
# In BBEdit/TextWrangler set this document's
# encoding to Japanese (Shift JIS); always open/reopen
# as Japanese (Shift JIS).
#
# In BBEdit/TextWrangler Preferences/Unix Scripting
# check “use UTF-8” for Unix Script I/O.
#
# When running in Terminal set Window Settings...
# [Display] [Character Set Encoding] to “Unicode (UTF-8)”.
#
### use utf8;                   # NO !!
# no encoding;                  # OK, optional
# binmode STDOUT, "UTF-8";      # OK, optional
### binmode STDOUT, ":utf8";    ### NO !! Quite different !!
use Encode qw~from_to~;
while (<DATA>) {
    /^#/ and next;
    from_to( $_, "Shift_JIS", "utf8" );
    print;
}
__DATA__
# Must not contain non-Shift_JIS characters
空欄を埋めたり、完全な文書で質問に答えたり、
一番適切に思う解答を〇で記したりする。
##

That's a nice little script to have on the list, for reference. Now, as far as my little problem goes, I was able to get some success with the following:

--snippet--
use encoding( 'Shift_JIS' );
...
my $query = new CGI;
...
my $fileToSend = $query->param( 'file-to-send' );
my $FileSent   = $query->param( 'FileSent' );
...
elsif ( $FileSent )
{
    my $fh;
    if ( !defined( $fileToSend ) || length( $fileToSend ) < 1
        || !( $fh = $query->upload( 'file-to-send' ) ) )
    {
        print $query->header( -status => $error ),
            $query->start_html( 'Bad request' ),
            $query->h2( 'Failed to find or open file, maybe bad file name selected.' ),
            $query->strong( "Upload request for $fileToSend not processed." );
        exit 0;
    }
    my $type = $query->uploadInfo( $fileToSend )->{ 'Content-Type' };
    if ( $type ne 'text/plain' )
    {
        print $query->header( -status => $error ),
            $query->start_html( 'Bad file type' ),
            $query->h2( 'File type must be plain text.' ),
            $query->strong( 'Request not processed.' );
        exit 0;
    }
    # One line at a time is STILL not safe if length not already checked.
    # Doing this one line at a time to handle the shift JIS problem, somehow.
    my @fileLines = ();
    my $line = '';
    # binmode( $fh, ':raw :encoding(Shift_JIS)' );
    binmode( $fh, ':raw :utf8' );   # As best as I understand, this should be wrong.
    # binmode( $fh, ':raw' );
    while ( $line = <$fh> )
    {
        my @hexdump = unpack( 'C256', $line );           # debug
        my $hexdumpstring = '';                          # debug
        foreach my $byte ( @hexdump )                    # debug
        {
            $hexdumpstring .= sprintf( '%02x ', $byte ); # debug YUCK!
        }                                                # debug
        push( @fileLines, $line );
        push( @fileLines, $hexdumpstring . "\n" );       # debug
    }
    @words = @fileLines;
...
--end-snippet--

This is in spite of the headers, the XML declaration, and the HTML header meta declaration all declaring the document to be shift-JIS, and the source itself declaring use encoding( 'Shift_JIS' );. I should probably expect that I muffed it when I compiled perl, but I'll need to push the whole thing onto my Linux/BSD box, bring up apache over there, and compare notes to have a decent idea what's going on. In the meantime, Firefox on Linux is no longer uploading the file at all. Joel
Re: incantation for uploading shift-jis files
erm I think I forgot to point out what I changed --

Now, as far as my little problem goes, I was able to get some success with the following:

--snippet--
use encoding( 'Shift_JIS' );
...
my $query = new CGI;
...
my $fileToSend = $query->param( 'file-to-send' );
my $FileSent   = $query->param( 'FileSent' );
...
elsif ( $FileSent )
{
    my $fh;
    if ( !defined( $fileToSend ) || length( $fileToSend ) < 1
        || !( $fh = $query->upload( 'file-to-send' ) ) )
    {
        print $query->header( -status => $error ),
            $query->start_html( 'Bad request' ),
            $query->h2( 'Failed to find or open file, maybe bad file name selected.' ),
            $query->strong( "Upload request for $fileToSend not processed." );
        exit 0;
    }
    my $type = $query->uploadInfo( $fileToSend )->{ 'Content-Type' };
    if ( $type ne 'text/plain' )
    {
        print $query->header( -status => $error ),
            $query->start_html( 'Bad file type' ),
            $query->h2( 'File type must be plain text.' ),
            $query->strong( 'Request not processed.' );
        exit 0;
    }
    # One line at a time is STILL not safe if length not already checked.
    # Doing this one line at a time to handle the shift JIS problem, somehow.
    my @fileLines = ();
    my $line = '';
    # binmode( $fh, ':raw :encoding(Shift_JIS)' );

This is what seems to get it to upload from the iBook to itself:

    binmode( $fh, ':raw :utf8' );   # As best as I understand, this should be wrong.

The debug stuff below didn't really tell me much beyond that it was already not shift-jis by the time the script was reading it from the CGI upload function.

    # binmode( $fh, ':raw' );
    while ( $line = <$fh> )
    {
        my @hexdump = unpack( 'C256', $line );           # debug
        my $hexdumpstring = '';                          # debug
        foreach my $byte ( @hexdump )                    # debug
        {
            $hexdumpstring .= sprintf( '%02x ', $byte ); # debug YUCK!
        }                                                # debug
        push( @fileLines, $line );
        push( @fileLines, $hexdumpstring . "\n" );       # debug
    }
    @words = @fileLines;
...
---end-snippet This is in spite of the headers, the XML declaration, and the HTML header meta declaration all declaring the document to be shift-JIS, and the source itself declaring use encoding( 'Shift_JIS' );. I should probably expect that I muffed it when I compiled perl, but I'll need to push the whole thing onto my Linux/BSD box, bring up apache over there, and compare notes to have a decent idea what's going on. In the meantime, Firefox on Linux is no longer uploading the file at all. Joel
Re: incantation for uploading shift-jis files
On 2005.12.25, at 10:58 AM, John Delacour wrote: At 1:45 am + 25/12/05, John Delacour wrote: The script is here, warts and all: http://bd8.com/temp/upload.pl.txt I note you are running both the script and the HTML in Unicode UTF-8. There is wisdom in that, of course, and I may rethink my choice of running these scripts in shift-JIS. (I like to avoid conversions that require tables and context decisions as much as possible.) I also notice you are saving the file back to disk so you can re-open it as shift-JIS. I want to avoid that, since perl is already saving it once to a temporary directory anyway, and since I hope the guy who wrote that code covered more corners than I have time to think of. (Muttering to self -- can perl open strings as streams like Java?) NB. Safari doesn't treat it as a text file. Didn't mention this before, but if you move the header here-doc down far enough (put it in a sub, declare the sub after all the rest of the script), you can usually get Safari to behave itself. It's disappointing to discover this old bug in Safari, and it's disappointing that it's still there in Safari 2. (Didn't really intend that comment to go off-list, particularly since it actually is on topic for this list.) FireFox, Opera, Omniweb display it properly as text. If Omniweb isn't doing this, the problem would not seem to be in the KHTML base. Of course, KDE just passes the view off to the selected text editor, so the code to have the problem probably wouldn't be in the base anyway. (I like Omniweb, but I do have good reasons for using Safari on Jaguar, mostly financial -- no room to put it since I had to put the old 5.6G drive back in, no money for new hardware.) Joel Rees
Re: incantation for uploading shift-jis files
I think that the problem is that I have set the encoding (multi-part) for the post, but not for the file part, and I can't figure out how to set the encoding for the file part. I'm worried that I'm going to have to force a re-conversion within perl from some sort of automatic conversion done when writing to the temporary file. I do not think there is any way to tell the web browser what the encoding of the uploaded file should be. That only works for form fields such as text areas. So it's not just me that thinks so. Thanks. You will get the file in the same encoding (if applicable, this could be a binary file, such as an image, too) that it has on the user's hard disk. That doesn't seem to match some run-time error messages about codes that could not be converted. I'll have to check those messages again, there's probably a clue in there. So you will have to auto-detect the encoding on the server-side or give the user a pulldown to select the file encoding (or support only Shift-JIS, which you might get away with in your case). Do you know of a way to tell perl, or, rather, the CGI module to open the file handle as shift-JIS? Here's where I get the file handle: if ( !defined( $fileToSend ) || length( $fileToSend ) < 1 || !( $fh = $query->upload( 'file-to-send' ) ) ) I'm not seeing any room in the syntax here: $fh = $query->upload( 'file-to-send' ), but I'm SUCH a newb. (Seriously. If I weren't bald I should be blonde.) Merry Christmas, Merry late Christmas and happy holidays to all! Joel Thilo
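For reference, pushing an encoding layer onto a handle with binmode does work on an ordinary filehandle. The sketch below is mine, not from the thread, and uses an in-memory handle as a stand-in for the CGI upload handle; it shows the intended effect of the binmode call when the incoming bytes really are Shift_JIS:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# "日本\n" in Shift_JIS bytes, spelled out so the demo is self-contained.
my $sjis_bytes = "\x93\xfa\x96\x7b\x0a";

open my $fh, '<', \$sjis_bytes or die $!;   # in-memory stand-in for the upload fh
binmode( $fh, ':encoding(Shift_JIS)' );     # layer decodes bytes to characters on read

my $line = <$fh>;
chomp $line;

binmode( STDOUT, ':encoding(UTF-8)' );      # encode characters back out for display
print length($line), " characters\n";       # 2 characters, not 4 bytes
print "$line\n";
```

If the same binmode on the real upload handle still produces replacement characters, that again suggests the bytes were no longer Shift_JIS by the time CGI.pm handed them over.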
incantation for uploading shift-jis files
Anyone know? I've looked around on the web, and it looks like I'm playing with edge-of-the-world stuff and rather OS and browser dependent. The source I'm working with: http://reiisi.homedns.org/~joel/cs/ranbunhyou/withfile2.text http://reiisi.homedns.org/~joel/cs/ranbunhyou/requester2.html Both have shift-JIS mixed in, but everything in Japanese is duplicated in English. The form tag:

<form action="http://reiisi.homedns.org/~joel/cs/ranbunhyou/cgi/withfile2" method="post" enctype="multipart/form-data; charset=Shift_JIS" accept-charset="Shift_JIS">

The charset in the enctype seems to make the textarea field work intermittently, the accept-charset makes the textarea field work reliably independent of the charset in the enctype. The charset reported by the debug print always comes out ISO-8859-1 even when it works, so I think that's a red-herring. The test file that contains Japanese to show the problem is Sample_questions.text. The linked file works fine. The contents of the linked file pasted into the textarea works fine. It's just the file upload that does not. (The reason I'm using shift-jis is that the bulk of Japanese documents remain in shift-jis, and most people operate by default in the encoding that the bulk of their files are stored in.) My partial explanation of what the code is doing is at http://reiisi.homedns.org/~joel/cs/ranbunhyou/ (And, yeah, I know I'm using some archaic techniques. It's what I know or can find in my old references, and/or what I hope will run with modules in the standard distribution. Criticism welcome.) joel rees
Re: incantation for uploading shift-jis files
On 2005.12.24, at 09:24 PM, John Delacour wrote: At 7:10 pm +0900 24/12/05, Joel Rees wrote: I've looked around on the web, and it looks like I'm playing with edge-of-the-world stuff and rather OS and browser dependent. The source I'm working with: http://reiisi.homedns.org/~joel/cs/ranbunhyou/withfile2.text http://reiisi.homedns.org/~joel/cs/ranbunhyou/requester2.html Can you reduce the problem to the bare minimum rather than requiring us to plough through the whole thing? I'll see if I can work something simple up. (Now that you mention it, that's what I should have done before posting, since simplification generally reveals simple problems.) But I'm going to take a break for Sunday, try to spend time with the family. When I try to Send File, I get : Failed to find or open file, maybe bad file name selected. : Upload request not processed. Did you select a file before hitting send? I'm new to file uploading, so I have no idea if I'm tripping over something basic here. Maybe something as simple as telling the web browser to send the thing as binary instead of text. (If I can dig that back up.) I think that the problem is that I have set the encoding (multi-part) for the post, but not for the file part, and I can't figure out how to set the encoding for the file part. I'm worried that I'm going to have to force a re-conversion within perl from some sort of automatic conversion done when writing to the temporary file. I don't think it is edge-of-the-world stuff but I like simplified problems without noise. One can always hope. ;-/
Re: Delay in BBEdit/TextWrangler
On 2005.12.7, at 04:10 AM, John Delacour wrote: The script below prints a list of 34 Burmese characters. I happen to have a font for these but I'm not sure that matters. If I run the script in BBEdit or TextWrangler just after launching the apps, there is a huge delay before the output is printed (up to 15 seconds) but subsequent runs produce no special delay. First guess is font caching, which is mostly the time to find and load glyphs. It looks like you might be also implicitly invoking the relevant parsing attribute tables, which will also take some time to find and load. #!/usr/bin/perl binmode STDOUT, q~:utf8~; for (4096..4129) { $c = chr(); $text .= qq~$_\t$c$/~; } print $text; I get the same sort of behaviour if I run the script in Script Editor or Smile as a shell script, but there is no delay running it in Terminal. Perhaps this reflects that terminal has advanced quite a bit since jaguar, to the point of pre-loading many of the relevant tables? Perhaps Script Editor and Smile are also loading some sort of run-time interpreter which has to do its own caching of font and parsing tables? Perhaps those two are using Java, maybe? Java does a lot of pre-flight checking both syntax and rudimentary semantics for security purposes. Perhaps they are loading a separate copy of the perl interpreter? Can anyone explain what causes this delay? Good question. JD
Re: Multi-lingual ShuX
On 2005.10.31, at 08:18 PM, Sherm Pendley wrote: They say a picture is worth a thousand words - so here's a picture: http://camelbones.sourceforge.net/screenshots.html Cool. Taken from the latest CVS version of ShuX, available as a snapshot from the CVS Downloads page. The snapshot is pretty rough around the edges in other respects, but I thought this was worthy of note. sherm-- Cocoa programming in Perl: http://camelbones.sourceforge.net Hire me! My resume: http://www.dot-app.org
Re: Problem installing XML::Parser
I'm pretty sure I successfully installed expat with Fink There you go then. Expat is installed, but in a non-standard directory. My impression is that Fink is a little too user-friendly for many perl purposes?
Re: Using Perl in Cocoa
On 2005.10.27, at 12:27 PM, Manfred Bergmann wrote: Ok, that worked. Thanks. Hmm, how come that I couldn't find any documentation about this? All I found was a little example code on a Japanese internet site where you couldn't read anything except the code snippet itself. :) This was, as you said, an old example with CBPerlObject but it gave me a hint how to begin at all with that. Or maybe I am just incapable of searching the internet. ;) I figured CamelBones is pretty nice, not only for doing complete Perl-Cocoa applications for which a lot of examples exist. Too many of us who would like to use it don't have enough time to, which is a shame. [...]
Re: How to find out if an application is running
On 平成 17/10/13, at 9:06, Sherm Pendley wrote: On Oct 12, 2005, at 7:47 PM, Ted Zeng wrote: BTW, Apple just reported yesterday that Mac units shipped increased by 48% last quarter. Sounds like the Mac is gaining market share. Yeah - and AAPL immediately dropped 10%. I'll *never* understand Wall Street. People who make money out of money have strange superstitions. Good news is bad news. (Or, if you want to play with conspiracy theories, ... ) I suspect what's happening is that the expected drop before the shift to intel is not happening. Rather than seeing the steady sales as a confirmation of Mac OS X, they are worried that Steve might decide to keep both CPU lines, messing up their misguided plans to capitalize on the next big monopoly. (My delusions, of course, do not necessarily reflect the views of my employer.) Joel Rees [EMAIL PROTECTED] digitcom, inc. 株式会社デジコム Kobe, Japan +81-78-672-8800 ** http://www.ddcom.co.jp **
Re: build problems with metadata...
(Sorry about the double, David. Either AppleMail needs a reply-to-list button or I need to wake up before I post.) On 2005.10.8, at 03:03 AM, David Cantrell wrote: On Fri, Oct 07, 2005 at 06:25:10PM +0100, William Ross wrote: (in which Tiger's newly metadata-enabled command line apps, such as tar, silently add files like ._Makefile.PL to get around apple's complete failure to put together a coherent strategy for managing file information, which MakeMaker sensibly tries to run, and fails. See eg http://caseywest.com/journal/archives/004393.html) Death to Apple! heh. I'm reluctant to install Gnu tar. I imagine that Apple is about to do something that relies on tar unpacking metadata. Is that silly of me? Yes and no. It's a potential problem until the industry establishes some standard to deal with metadata when flattening files for transmission over (metadata ignoring) media. And that means we need to understand the problem. Hate to ask the obvious, but did you try man tar? (If I were at work I could check to see if it would be worthwhile, but I have no 10.4 boxes at home.) I scrolled past some of the SPAM and noticed a message that seems to indicate to me that the problem might not exist on a case sensitive volume. If you leave the system tar(1) alone, and install GNU tar elsewhere (eg in /sw if you use fink) *and* if you don't edit any system config stuff like $PATHs, just edit your own $PATH in your .bash_whatever files to put GNU tar first in your $PATH, then you *should* be OK. Treat it as you would perl, where you would install your own perl build and leave Apple's own perl well alone. This is the correct approach if you must have a standard tar, as opposed to what the blog you linked (minus the trailing paren) suggests.
Re: Speaking of CPAN
Oh. I should say thanks, Bill. I tried it again and it worked this time. On 平成 17/09/01, at 1:54, Bill Stephenson wrote: On Aug 31, 2005, at 9:49 AM, Joel Rees wrote: anyone besides me having trouble accessing the mirrors? Yeah, I had something very similar happen to me last week. I can't remember for sure how I got it working but I think it prompted me to edit or add a valid cpan mirror to download modules from. I went to http://cpan.org/SITES.html and copy and pasted a url from the list of sites and it worked after that. Used ping and the traceroute tool in the GUI network utility to figure out that all the Japanese mirrors were up except the one with the official sounding name. Then I put all five of the mirrors that were up in the mirror list using o conf, and tried again. Just let it run, and it finally seemed to take hold on the fifth mirror. Seemed like it kept dropping connections and changing mirrors on every download, but I just left it mostly to itself, and it loaded the CPAN bundle in about a half hour or so. Had to hit the enter key a couple of times to confirm on loading dependencies, but that was about it. I even got LWP and Crypt::SSLeay loaded before I went to bed. Sorry I couldn't be more help... It did the trick. Kindest Regards, -- Bill Stephenson 417-546-8390 Joel Rees [EMAIL PROTECTED] digitcom, inc. 株式会社デジコム Kobe, Japan +81-78-672-8800 ** http://www.ddcom.co.jp **
Speaking of CPAN
anyone besides me having trouble accessing the mirrors? I've been rebuilding my iBook when I can spare ten minutes here and a half hour there. I installed perl 5.8.7 several weeks back under /usr/local and started CPAN with the path set to look at /usr/local/bin before /usr/bin, then remembered I wanted to install gnupg first, so I aborted and installed gnupg today and tried to o conf init. Got this at the end, about two hours ago: terminal output You have no /usr/local/share/perl/.cpan/sources/MIRRORED.BY I'm trying to fetch one LWP not available CPAN: Net::FTP loaded ok Fetching with Net::FTP: ftp://ftp.perl.org/pub/CPAN/MIRRORED.BY Couldn't cwd pub/CPAN at /usr/local/lib/perl5/5.8.7/CPAN.pm line 2260, STDIN line 28. Fetching with Net::FTP ftp://ftp.perl.org/pub/CPAN/MIRRORED.BY.gz Couldn't cwd pub/CPAN at /usr/local/lib/perl5/5.8.7/CPAN.pm line 2260, STDIN line 28. Issuing /usr/bin/ftp -n Trying 209.120.136.27... Connected to ftp.cpan.ddns.develooper.com. 220-=(*)=-.:. (( Welcome to PureFTPd 1.0.11 )) .:.-=(*)=- 220-You are user number 1 of 500 allowed 220-Local time is now 08:29 and the load is 0.03. Server port: 21. 220-Only anonymous FTP is allowed here 220 You will be disconnected after 15 minutes of inactivity. 331 Any password will work 230 Any password will work Remote system type is UNIX. Using binary mode to transfer files. Local directory now /usr/local/share/perl/.cpan/sources 250 OK. Current directory is / 250 OK. Current directory is /pub 550 Can't change directory to CPAN: No such file or directory 200 TYPE is now 8-bit binary local: MIRRORED.BY remote: MIRRORED.BY 500 Unknown command 227 Entering Passive Mode (209,120,136,27,156,117) 550 Can't open MIRRORED.BY: No such file or directory 221-Goodbye. You uploaded 0 and downloaded 0 kbytes. 221 Logout - CPU time spent: 0.000 seconds. Bad luck... Still failed! Can't access URL ftp://ftp.perl.org/pub/CPAN/MIRRORED.BY. Please check, if the URLs I found in your configuration file () are valid. 
The urllist can be edited. E.g. with 'o conf urllist push ftp://myurl/'
Could not fetch MIRRORED.BY
CPAN.pm needs at least one URL where it can fetch CPAN files from.
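For what it's worth, the recovery CPAN.pm is hinting at looks roughly like the session below. This is a sketch, not a transcript from the failing machine; the mirror URL is an example, and any current mirror from the CPAN sites list would do:

```
% perl -MCPAN -e shell
cpan> o conf urllist push http://www.cpan.org/
cpan> o conf commit
cpan> reload index
```

With at least one working URL in urllist, CPAN.pm can fetch MIRRORED.BY itself instead of falling back through Net::FTP and /usr/bin/ftp.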
Re: Sendkeys
http://www.apple.com/macosx/features/applescript/ You'll find links to most of what you need there. The rest will be on Adobe's site, or on AppleScript mail lists. As has been mentioned, there is probably no reason to resort to perl and the Carbon API. On 2005.8.30, at 01:45 PM, Vince wrote: I had posted this message earlier to the Perl Beginners group: ** Here's the situation: Some dude working in my company needs a program that does this: -There are over 100 PDF files in some folder on a Mac machine -The program to be written is supposed to invoke Adobe Acrobat Professional 7 (Mac version) and open each file from the directory -Once each file is opened in Acrobat 7, it is supposed to push Alt + E (or similar) and then 'n' (or similar) in order to enable commenting on the PDF file. -The program is then supposed to push Alt + F and then 'S' to save the file -This is to happen for all the files in that folder. Someone supposedly told the dude that this script in Perl was easy to write! But, un/fortunately, I have no idea where to begin. All I know is that, if I were doing it on a PC, I would need to add the Win32 package and then study sample programs which should get me going some place. But I don't know if Win32 would work on Mac, or if there is an easier way to approach this! I would be grateful for suggestions / directions on how to proceed. ** I was pointed to this group and this is what I think I need to do: a) Find the equivalent of the Win32 package for Mac b) Write the program But, here are my other concerns: a) I've pretty much never worked on a Mac and don't possess one. In order to develop programs I use a PC. I was wondering if MacPerl could be simulated on a PC? Thanks for any advice. Vince
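To make the AppleScript suggestion concrete: a Perl wrapper can walk the folder and hand each file to Acrobat via osascript. This is a hypothetical sketch only; the application name, the folder path, and the idea that opening alone is enough are all assumptions, and the actual menu commands would need to come from Acrobat's AppleScript dictionary (open it in Script Editor to check):

```perl
#!/usr/bin/perl
# Hypothetical sketch: drive Acrobat from Perl via AppleScript (osascript).
# The application name and directory are placeholders -- verify both, and
# look up the enable-commenting and save commands in Acrobat's dictionary.
use strict;
use warnings;

my $dir = "/path/to/pdfs";    # assumed location of the PDF folder
opendir my $dh, $dir or die "Cannot open $dir: $!";
for my $pdf ( grep { /\.pdf$/i } readdir $dh ) {
    my $script = qq{tell application "Adobe Acrobat 7.0 Professional"\n}
               . qq{  open POSIX file "$dir/$pdf"\n}
               . qq{end tell};
    system( 'osascript', '-e', $script ) == 0
        or warn "osascript failed for $pdf\n";
}
closedir $dh;
```

The advantage over simulated keystrokes is that scriptable commands don't depend on which window has focus while the batch runs.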
Re: Authentication woes...
The program is about 600 lines of C code. That's not all that difficult. I kind of wish you hadn't done that, but since I didn't warn you off earlier, I guess I shouldn't complain. The usual way to attack this sort of problem is to update the module that has fallen behind and reset all the passwords by hand. Good OS design intentionally puts roadblocks in the way of direct password recovery.
extras.make -- ignored
I searched the web and found that these are normal errors on Cygwin, but what does that have to do with us? Is this just tracks from the errors in BDB (which I also have)?

--
### Bourne-style shells, like bash, ksh, and zsh, respectively.
u=31 s=0 cu=1009.62 cs=206.79 scripts=878 tests=99899
make[2]: *** [_test_tty] Error 1
make[1]: *** [_test] Error 2
make: *** [test] Error 2
--
Making Errno (nonxs)
make[1]: [extras.make] Error 1 (ignored)
Everything is up to date. Type 'make test' to run test suite.
if [ -n ]; \
then \
  cd utils; make compile; \
  cd ../x2p; make compile; \
  cd ../pod; make compile; \
else :; \
fi
./perl installperl --destdir=
WARNING: You've never run 'make test' or some tests failed! (Installing anyway.)
--
make[2]: [extras.install] Error 1 (ignored)
--
Re: backup strategy questions
On 2005/08/10, at 1:29, Joseph Alotta wrote: Greetings Everyone, I have been using psync to back up my entire disk (about 20GB) to a local hard drive (300GB). Previously, I had been updating the backup drive with changes, but I was concerned that all history was unrecoverable. So I wrote a little program that creates a new directory each time (i.e., 2005-08-09) and does a full backup using psync to the directory. My question: In the event of a hard disk failure, will I be able to boot from a full copy in a directory? How would I be able to recover? A full copy of your boot volume? My impression is that it's very difficult to successfully make a bootable image of a bootable volume. (I've booted an image, but have not done any really rigorous testing, and that was v 10.0.) There are ways to make images that can be restored, and there are ways to replicate, but my understanding is that even the command-line ditto (and CpMac) may not succeed perfectly. Joel Rees [EMAIL PROTECTED] digitcom, inc. 株式会社デジコム Kobe, Japan +81-78-672-8800 ** http://www.ddcom.co.jp **
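The dated-directory scheme described above can be sketched in a few lines. This is an illustration only, not the poster's actual program; the backup volume path and the bare psync invocation are assumptions to check against psync's own documentation:

```perl
#!/usr/bin/perl
# Sketch of a dated full-backup run: make a YYYY-MM-DD directory on the
# backup volume, then psync the source into it. Paths are placeholders.
use strict;
use warnings;
use POSIX qw(strftime);

my $source      = "/";                   # assumed: whole boot volume
my $backup_root = "/Volumes/Backup";     # assumed mount point
my $target      = "$backup_root/" . strftime( "%Y-%m-%d", localtime );

mkdir $target or die "mkdir $target: $!" unless -d $target;
system( "psync", $source, $target ) == 0
    or die "psync failed: $?";
```

Keeping each run in its own directory preserves history, which is exactly the property the in-place update was losing; it just trades that for disk space.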
Re: problems with WWW::Curl, perl 5.8.7, Mac Mini, 10.4.2
On 2005/07/22, at 13:06, Joel Rees wrote: I installed 5.8.7 on a Mac Mini running 10.4.2 under /usr/local, executables in /usr/local/bin, used .bash_profile to add /usr/local/bin to the front of my path (bleaugh). (Hesitated about installing gpg, but trying to build it gave me a huge bunch of integer precision errors with the sign bit, so I decided to just move ahead. Does cpan use openssl if it doesn't find gpg? Or does it just not bother to check the checksums/signatures?) Ran cpan under sudo, with the .cpan file under /usr/local/share/perl. (Is that reasonable?) Tried to install WWW::Curl from the cpan prompt. At the make test stage, I get about twenty occurrences of Bareword "CURLOPT_MUTE" not allowed while "strict subs" in different files, and cpan kindly refuses to complete the install. 105 out of 108 failed. I don't see any other errors. I'm debating doing a parallel install of the latest curl into /usr/local/bin. Anyone have a better suggestion? Just for the record, I did try a parallel install of curl, with the same results. Tried contacting the author, as Ken suggested, no response yet. -- Joel Rees [EMAIL PROTECTED] digitcom, inc. 株式会社デジコム Kobe, Japan +81-78-672-8800 ** http://www.ddcom.co.jp **
Re: building 5.8.7 on 10.4
On 2005/07/07, at 22:39, Dominic Dunlop wrote: On 7 Jul 2005, at 12:57, Joel Rees wrote: lib/locale FAILED at test 99 This is perfectly normal. Unfortunately. The problem is that Mac OS X 10.4 ships with more locale definitions than previous versions, and eu_ES, one of the new locales, has a really weird (read, buggy) value for the decimal separator -- \' . This confuses perl into believing that a number using this string in place of a decimal point is two numbers, and a test which runs through various features of every installed locale fails. It's in the perl bug database as #35895, and I reported it to Apple as their bug ID# 4139653. But I have not had any feedback from Apple so far. Okay, I tried the make test VERBOSE, as Ken suggested, and it reports the file you mention. I suppose I could simply fix that myself, if I knew the decimal separator should be the comma. But I'm not using Spanish, so I'll forgo learning where Apple hid the locales today. (Hmm. I see /System/Library/LocalePlugins, but that only has something apparently for Thai text breaks.) Install completes without complaint. Thanks. joel
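As a way to see for yourself what a given locale claims its decimal separator is, here is a small sketch. The locale names are assumptions: availability varies by system, and eu_ES in particular may not exist outside 10.4:

```perl
#!/usr/bin/perl
# Ask each locale what it uses for a decimal point, via localeconv().
# Locale names are placeholders; missing locales are simply skipped.
use strict;
use warnings;
use POSIX qw(locale_h);

for my $loc ( 'C', 'en_US', 'eu_ES' ) {
    defined setlocale( LC_NUMERIC, $loc ) or next;    # skip if not installed
    my $lconv = localeconv();
    printf "%-6s decimal_point = '%s'\n", $loc, $lconv->{decimal_point};
}
setlocale( LC_NUMERIC, 'C' );    # put things back
```

On a system with the bug described above, the eu_ES line would show the odd separator value directly.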
Re: building 5.8.7 on 10.4
On 2005-07-07 (Thu), at 10:56 -0500, Ken Williams wrote: On Jul 7, 2005, at 5:57 AM, Joel Rees wrote: Just accepted all the defaults. sudo make gave one error: [...] cc -L/usr/local/lib -force_flat_namespace -o miniperl \ miniperlmain.o opmini.o libperl.a -ldl -lm -lc ./miniperl -w -Ilib -MExporter -e '?' || make minitest make: [extra.pods] Error 1 (ignored) [...] and make test You shouldn't run sudo make and then make test without sudo. Heh. I was in a hurry last night, didn't watch what I was copying from the saved text. I did indeed try to make test without the sudo first, but I figured that out immediately. The first step may create items that the second step can't deal with. In general, the only step that you should use sudo for is sudo make install. All previous steps should be done as your regular non-privileged user. Well, I have the thing in the admin user's local directory, where I would expect there should be even less problem, and make still wouldn't run without sudo. I wonder why. -Ken Anyway, I should be able to just install it, then? Thanks, Joel
Re: question on Perl's ssh
On 2005.6.25, at 08:35 AM, Ted Zeng wrote: My problem is it provides much more than I need. Are you sure? I just need to log in to another Mac to execute commands there. I don't care about security, or anything. rlogin is good enough for me. Well, there is a company taking contracts from the credit card companies that just needed some real-world data to test something with and wasn't going to have that data where it could be accessed improperly. And how many hundreds of thousands of credit card numbers worldwide have been stolen? I don't want to copy a public key over to that machine (I don't have to in Panther. But I have to in Tiger) This doesn't really make sense. Are you sure that you didn't connect by hand in Panther once, and answer the query about whether you would accept the certificate or not? Once the certificate was accepted, perl in Panther might have been accessing the user's .ssh? Are there any environment files that didn't get transferred or rebuilt, or, if you just upgraded, were there some settings files that got erased? before I could talk to that machine. This is a nightmare to me. Eventually, this kind of thing will get straightened out a bit. But we who write programs still need to be aware of when we need to identify and when we need to encrypt and when we can just spit data. (If we don't, who will?) -- Joel Rees msn.com and hotmail.com users take note -- Microsoft wants to refuse my mail if I don't use SenderID starting November. SenderID was refused as an internet standard and does not stop SPAM, and it contains a Microsoft patented algorithm. Draw your own conclusions.
Re: wildly off topic, sorry, was Re: question on Perl's ssh
On 2005.6.25, at 09:55 AM, Chris Devers wrote: On Sat, 25 Jun 2005, Joel Rees wrote: msn.com and hotmail.com users take note -- Microsoft wants to refuse my mail if I don't use SenderID starting November. SenderID was refused as an internet standard and does not stop SPAM, I suppose I should twist my mouth properly when I say that. Maybe, "SenderID was put on a non-standards track by the internet task force and does not stop SPAM"? and it contains a Microsoft patented algorithm. Draw your own conclusions. Not that I'm convinced it's a good idea, but... IETF Approves SPF and Sender-ID Yeah, that's the typical slashdot leader. Reader beware, do your own research. As noted early in the threads there, Experimental is not standards-track. Watch out for the spin. Posted by Zonk on 2005.06.24 15:57 from the protocols-forward dept. NW writes: "According to the records in the IETF's database (here and here), both the SPF and Sender-ID anti-spam proposals were tentatively approved by the IESG (the approval board of the IETF) as experimental standards. It remains to be seen whether any of them will actually put a dent into spam. At the same time, the FTC has opened a central site about email authentication." http://slashdot.org/article.pl?sid=05/06/24/1921210&threshold=5 https://datatracker.ietf.org/public/pidtracker.cgi?command=view_id&dTag=12662&rfc_flag=0 https://datatracker.ietf.org/public/pidtracker.cgi?command=view_id&dTag=12542&rfc_flag=0 http://spf.pobox.com/ http://www.microsoft.com/mscorp/safety/technologies/senderid/default.mspx http://www.ietf.org/iesg.html http://www.dmnews.com/cgi-bin/artprevbot.cgi?article_id=33190 And the last one is the one I thank you for. I'll be sure to share my results with the Federal Trade Commission. -- Joel Rees Getting involved in the neighbor's family squabbles is dangerous. But if the abusive partner has a habit of shooting through his/her roof, the guy who lives upstairs is in a bit of a catch-22.
Re: question on Perl's ssh
[...] This doesn't really make sense. Are you sure that you didn't connect by hand in Panther once, and answer the query about whether you would accept the certificate or not? Once the certificate was accepted, perl in Panther might have been accessing the user's .ssh? I reboot the machines and map the disk image to the drive every time. In Panther, the disk image doesn't have .ssh. Hmm. So maybe the ssh libraries on Panther and Tiger default to different behavior when the user's settings are missing? I do set up an account and password in the disk image. I use password authorization to access Panther. Meaning, the script takes the password (passphrase?) query and answers it? But I could not do the same with Tiger. The perl script always asks me to type in the password by hand. Is it possibly sending the query on STDERR instead of STDOUT? I think perl would allow you to redirect STDERR, if that's the case. But my goal is automation. To avoid this, I have to put a public key on the target machine and use identity key authorization to access it. Yeah, keys and certificates can be a pain to deal with in their current form. Would a self-certificate help? Openssl can do self-certificates as well as keys. (Maybe you've already tried that?) [...] Eventually, this kind of thing will get straightened out a bit. But we who write programs still need to be aware of when we need to identify and when we need to encrypt and when we can just spit data. (If we don't, who will?) The problem is I know I just need to spit data, but I couldn't. Well, you are the best person to know whether the project will live long enough to escape its cage, etc. I'd suggest compiling and installing rsh or some such, but I really shouldn't. I think installing keys for ssh will work out simpler in the end. -- Joel Rees If Microsoft is effectively taxing the internet, should we call it a sin tax?
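Once a public key is installed on the target, the automation side is small. A sketch, with placeholder host and command; ssh's BatchMode option makes a script fail cleanly instead of hanging on a password prompt, which is a useful way to confirm key-based login is actually being used:

```perl
#!/usr/bin/perl
# Sketch: run a remote command over ssh without interactive prompts.
# Host and command are placeholders. BatchMode=yes means ssh errors out
# rather than asking for a password, so failures surface immediately.
use strict;
use warnings;

my $host = 'user@target.example';    # placeholder
my @cmd  = ( 'ssh', '-o', 'BatchMode=yes', $host, 'uptime' );
system(@cmd) == 0 or die "remote command failed: $?";
```

If this dies, the key setup (or known_hosts entry) is what needs fixing, not the Perl.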
Re: Parsing UTF8 files with wide characters
On 2005.6.16, at 05:13 AM, John Delacour wrote:

At 4:26 am +0900 16/6/05, Robin wrote:

I went back to look at perluniintro because I was sure I could remember reading that the "use utf8" pragma was no longer needed; right under where it says this it continues "Only one case remains where an explicit "use utf8" is needed: if your Perl script itself is encoded in UTF-8".

Nevertheless (Perl 5.8.6) if you simply comment out

#binmode (DATA,":utf8");
#binmode (STDOUT,":utf8");

provided your script is UTF-8 encoded, there is no need for 'use utf8'. The script you posted works fine in that case,

Not a good idea. For the time being, and until UTF-8 is established as the default encoding for perl (should that ever happen), when your source code includes multibyte characters, tell perl so.

I suppose, in a context where you have automatic encoding conversion taking place whenever you move code from one environment to another, this rule of thumb would not be a rule of thumb. But otherwise, you want to do what you can to tell the various things that interpret your code what the encoding is. (And blind automatic conversion has its own set of problems.)

as does

$f = "$ENV{HOME}/junk.txt";
open F, ">$f";
print F "月";
close F;
open F, $f;
for (<F>) {/月/ and print}

JD

--
Joel Rees
I've already left the building. You don't really see me here.
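A minimal sketch of the combination being discussed, assuming the script file itself is saved as UTF-8:

```perl
#!/usr/bin/perl
# `use utf8` declares that this source file is UTF-8 encoded, so the
# multibyte literal below is read as one character; the :utf8 layer on
# STDOUT keeps the print from triggering "wide character" warnings.
use strict;
use warnings;
use utf8;

binmode STDOUT, ':utf8';
my $month = "月";    # multibyte literal in the source
print "$month\n";
```

Dropping either piece can appear to work on a given system, which is exactly why making both explicit is the safer rule of thumb.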
Re: OT-good DEDICATED hosting service
Just to clarify, On 2005.6.12, at 03:17 AM, Andy Holyer wrote: On 11 Jun 2005, at 18:57, Joseph Alotta wrote: Hi Joel, What does colocation mean? Colocation is when you set up a server on your own machine, and then pay an ISP or similar connectivity provider a rental to place your machine on their network, at their physical location (thus, co-location), connected to the Internet. Included in this is usually guaranteed 24/7 power and connectivity, security, and occasionally backups. And, in many cases, they can do some other aspects of the actual setup and maintenance, for a fee, of course. Installing an OS usually has a set price (when they do that). Just inserting the CD and getting it started while you do the rest remotely, and being on standby in case you need someone to read a message that went to the console instead of to the remote admin's terminal, will usually be a bit cheaper. Many places will let you just bring the servers in once you've set them up; others may offer to purchase the servers for you (in which case colocation looks more like dedicated hosting). The two biggest providers of these in the UK are Telehouse and TeleCity, both in London's docklands. In these cases you rent a 19-inch rack cabinet in which you can place as many servers as you want. That really costs big, though. Some places also rent by the shelf/slot, and the places offering special services for the Mac Mini will rent a partial shelf. That's where this becomes relevant to this thread. A pair of Mac Minis is usually plenty for your average Mom and Pop retail or one-man consultant shop (and even a little bit larger than that), and the prices for that are usually better than the cost of dedicated hosting, maybe even as good as the better shared hosting.
Of course, all of that may require factoring in using an OS other than Mac OS X server, and I don't recommend Mac OS X client or Darwin for servers unless you have the server under your physical control and are willing to do a lot of hand customizing, and know how to nurse a server yourself when necessary. -- Joel Rees even though much of what I do is not sensible it does make sense if you know why ...
Re: OT-good DEDICATED hosting service
On 2005.6.11, at 07:58 AM, Mark Wheeler wrote: Sorry if this is off topic, but I need to move a shared account to a dedicated account, and picking a new host provider is a needle in a haystack. So I ask, where is a good place to go for a DEDICATED server hosting package? What are your good experiences? Have you considered colocation, or even running the servers yourself? The mac mini alters the economics of both doing it yourself and colocation significantly, for an awfully large number of applications. -- Joel Rees even though much of what I do is not sensible it does make sense if you know why ...
Re: ActiveState is announcing support for Mac OS X
IIRC, I think I actually used CPAN on ActiveState's perl on MSWindows when I was doing perl on MSWindows several years back. I think I dug an old version of nmake up in Microsoft's support site or something, didn't have anything that needed to be compiled. I'm not a big fan of ActiveState partly because of PPM and partly because of their license, but it did the job. -- Joel Rees Opinions are like armpits. We all have two, and they all smell, but we really don't want the other guy to get rid of his.
Re: ActiveState is announcing support for Mac OS X
On 2005.6.8, at 11:59 PM, Chris Devers wrote: On Wed, 8 Jun 2005, Sherm Pendley wrote: On Jun 8, 2005, at 9:41 AM, Janet Goldstein wrote: People would use ActivePerl for OS X for the same reason Windows users use ActivePerl Windows users use ActivePerl because Windows doesn't ship with Perl. FWIW, ActiveState Perl is also available for Solaris; they also make software available for AIX, HP-UX, etc. I'm not sure if these systems tend to ship with Perl, but I know that Perl often runs on them. In that light, I'm actually a little surprised that they didn't have a version for Mac OS X sooner than this. They have in the past. The timing of the announcement seems curious to me... WWDC? -- Chris Devers
[way OT] ... Intel? Maybe not.
On 2005.6.8, at 01:57 PM, [EMAIL PROTECTED] wrote: Hi Sherm. For those who don't know me, I'm the perl maintainer at Apple, and I admit I keep a low profile on this list. But I wanted to clear up a few things: Well, Ed, I'm not Sherm, and I don't have any claim to fame, but I wish you could clear up why Steve would do something as insane as inserting Apple into the x86 monoculture. I'd have no complaints if Apple were offering Mac OS X86 boxes as a second line. I don't buy the megahertz myth, so I have no problem paying a little higher price for the PowerPC Mac Mini compared with an x86 of similar clock, even with the FSB rate a tenth of the CPU clock instead of a half. On the contrary, low average power on the Mac Mini fits it into the Japanese power budget just fine. The most frustrating part of Mac OS X is the lack of product range. For instance, I'd love a PPC box the size of the Mac Mini at half the spec and loaded only with Darwin, but with an extra NIC, for $300. (I'd buy three at $200 each, but I'm trying to make a point here.) The current speed/power is only a serious detriment to a bunch of critics who won't be buying Macs anyway. (And, just between you and me, I don't see why Steve is so enamored of Pentium M, especially without seeing whether iNTEL can actually push that piece of junk up to 64 bits.) Anyway, if you by any chance have a communication path up high enough to reach whoever decided that PowerPC had to be dropped, I'd appreciate it if you could be so kind as to pass on a request to keep the PowerPC line going as long as it doesn't just totally bleed red ink across multiple quarters. -- Joel Rees The master plan in open source is simple: The user figures out what he or she needs and does it.
[not really so way OT] ... Intel? Maybe not.
Sorry to catch you between my irritations and Steve. This isn't aimed at you, this is aimed at the decision makers at Apple. I'm just hoping someone upstairs will see this in this archive. On 2005.6.9, at 02:36 AM, Edward Moy wrote: I'm just a lowly engineer, so such decisions are way above me. I can only hope that the decision makers know what they are doing. From where I stand, they seem not to see the forest for the trees. Maybe Dvorak should be banned as reading material on the Apple campus. One thing is guaranteed, he is always wrong. And when he is right, he is dead wrong. Giving in to the monoculture mindset is the last thing Apple should do. If you believe that Apple can create products at the same price as a pc knockoff company down the street, you are going to be constantly disappointed. Apple does not build hardware; it builds systems. Two nics on a Mac Mini screams, Systems! Tweak the Mac Mini a little and it would be the perfect platform for any number of intelligent routers, and, yes, Apple is selling a router right now, so we know routers are on Apple's roadmap. Routers are a key point in any real systems solution, and routers that the customer can tweak would be a huge plus. Intelligent router means things like perl built in, by the way, so it isn't that far off topic. And, no, a wonderful OS is not a systems solution unless Apple can turn the corner here. You guys seemed to be turning straight into monoculture's defensive line, and those guys are huge and are going to tear you to pieces. That includes the software. Our overhead (such as my paycheck ;-) is always going to be higher because we have to pay for all the development costs. Not all, not by any means. Apple needs to learn to use their user community more effectively, and one thing that is not effective is suddenly saying, "Hey, all you guys that were trying to avoid the monoculture by working with us, sorry, but you have to join us in the monoculture now."
And because our systems require unique parts, created at a much lower volume than in the pc world, our hardware costs are also going to be higher. Fine. But Apple has a nice capital reserve, and that reserve has not been shrinking. Nor has Apple been losing position in the market, for all the weeping, wailing, and gnashing of teeth on the part of the pundits. We hope that the additional price our customers pay is justified by the fit-n-finish that we put into the systems. You can't add fit-n-finish without help from the customers. (That is one way of describing the entire meaning of the open source community.) As you say, this is OT, so I should not comment further on this. And neither should I have, but sometimes etiquette has to go by the board. Apple seems to be going backwards from the listen-to-the-customer attitude that brought them this far. IBM may be paying too much attention to the game console market right now, and that may hurt Apple temporarily, but moving all the eggs to the iNTEL basket is a serious strategic error. Edward Moy Apple On Jun 8, 2005, at 8:48 AM, Joel Rees wrote: [...] -- Joel Rees Getting involved in the neighbor's family squabbles
Re: CamelBones on Intel? Maybe not.
On 2005.6.7, at 11:13 PM, Robert wrote: Wiggins d'Anconia [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Ian Ragsdale wrote: On Jun 6, 2005, at 5:18 PM, Joel Rees wrote: Jobs is insane. I'm not so sure about that. IBM seems unwilling or unable to produce mobile G5s, which is a market that Apple considers very important. They also are 2 years behind schedule on 3.0GHz G5s, and appear to be focusing on video game processors instead of desktop and mobile processors. Apple might be OK in a speed comparison right now (on desktops; they are clearly losing in laptop comparisons), but how about in two years? Perhaps IBM has told Apple that they won't attempt a laptop chip, since the volume is way higher for video game consoles? What should Apple do? They should have released Mac OS X for Intel as soon as they had it ready. Why wait? It seems Apple is too caught up in their own keynotes to understand volume sales. One thing M$ was definitely *always* better at. IBM will probably laugh this one to the bank; not exactly going to put a dent in that $99 billion in revenue... Because it wasn't ready. Five years and it still isn't ready? That's exactly why they shouldn't have kept it hidden in the lab if they were going to be doing it. And obviously, after watching the keynote, they are still working on some things. They are trying (and it looks good so far) to make the transition as painless as possible. I think it is a good move. If they were just saying, "okay, we have had so many people begging for Mac OS X on iNTEL, we're going to give it to them and charge them double for running it on non-Apple hardware," that would be a good move. Moving everything to the monoculture is not a good move. Personally, it looks like it will be a bit painful for a few years, but a far better move in the long run. Unless they become just another cheap clone maker with a pretty software interface. (Did I hear someone say Sun?) Apple is not Sun in any sane comparison. You think?
Ian http://danconia.org
Re: CGI script running as a given user?
On 2005.6.7, at 11:51 PM, Rich Morin wrote: I've got a Perl CGI script that acts as a system browser. For example, it can look at files and directories and say interesting things about them. This works fine, as long as the files are world-readable, but fails (because Apache runs as 'www') as soon as the user wanders into private areas. One answer to this is to launch a small-footprint web server that runs as the current user. The CGI would run under that server and all would be nifty and cool (well, not really, but OK :-). I'm wondering if I've overlooked a way to get Personal Web Sharing (aka Apache) to handle this for me. Something like have the user authenticate via https, then launch a given CGI script with that user's uid. There's an apache module that does exactly that. I think it's called suexec or something like that. But you want to read the documentation carefully, because it has a lot of security issues that you have to understand.
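For reference, suexec is wired up in Apache configuration roughly as below. This is a hypothetical fragment for Apache 2.x with suexec support compiled in; the user, paths, and hostname are placeholders, and suexec enforces strict ownership and permission checks on the script directory that are spelled out in its documentation:

```
# Hypothetical httpd.conf fragment (Apache 2.x, suexec compiled in).
# User, group, paths, and hostname below are placeholders.
<VirtualHost *:80>
    ServerName example.test
    # CGI scripts under this vhost run as this user/group via suexec
    SuexecUserGroup rich staff
    ScriptAlias /cgi-bin/ /Users/rich/Sites/cgi-bin/
</VirtualHost>
```

Note that suexec grants a per-vhost (or per-userdir) identity, not a per-authenticated-user one, so the "authenticate via https, then run as that uid" idea would still need the vhost-per-user arrangement or a setuid helper on top of it.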
OT: no shine (Re: CamelBones on Intel? Maybe not.)
On 2005.6.7, at 05:47 PM, Sherm Pendley wrote: On Jun 6, 2005, at 6:18 PM, Joel Rees wrote: For me, the computer industry just lost its last little bit of shine. For me, it lost that shine years ago. When I began learning to program, everything was new. Every week, it seemed, someone was finding a new use for these gadgets. Games could be written by one person in two months. My heroes were people like Jobs, Wozniak, Nolan Bushnell, Eugene Jarvis, Richard Garriott, Sid Meier, and Roberta Williams - pioneers in every sense of the word. Shigeru Miyamoto deserves a place on that list too, but I didn't know his name back then, even though I greatly admired his work, without having a clue whose it was. These days, there's very little true innovation going on. I hit that point with MSW3. The first tarnish was in realizing how few other people saw the magic I saw in FORTH. But it was MSW3 that opened my eyes to the fact that there really were a lot of people who really did want Bill Gates or somebody to do their thinking for them. Most of the effort is put into squeezing a few more pennies from the bottom line. Games are designed and produced by the same committee-driven process that has reduced Hollywood and the music industry to mockeries of their former selves. Things have changed, and the Almighty Buck is king now. Pragmatically, that's a good thing; it's a sign of progress towards a mature, stable industry. In another way, I can't help feeling that something valuable has been lost along the way. Any general purpose computers I buy will run AMD since I doubt I'll be able to afford PPC hardware, and I'll be scratching Mac OS X from this old iBook this weekend. Not sure if I'll load Linux or openBSD on it, since it's my server. Jobs is insane. I'm not sure I'd go quite that far. Monoculture. The only successful alternative OSes that run on x86 yet are entirely free (as in speech) and run on multiple platforms. Even FreeBSD is not just x86.
I would not be going rabid if Steve had said, "Okay, due to popular request, we're going to add an architecture," or something similar. Apple has the resources to sell to multiple architectures, although it would likely mean that they would need to open up quite a bit of the userland beyond the command line. There's a good business case to be made for switching, from Apple's perspective. Only if they have blinders on and don't notice anything wrong with the picture being dangled in front of their face. It will help the supply-side problems they've been having, and broaden the appeal of their products. Oh, sure. What is this thing about iNTEL having some sort of appeal? That's a strawman, and the people who have been arguing it will not be buying it. IBM made a few too many forward looking statements without knowing how much the fancy non-RISC address modes (etc.) were going to cost in heat and timing. But, except for certain server software where the context switch overhead (FreeBSD's giant lock, the way I read it) drags the system down, the speed is close enough when you put Macs side-by-side with x86 boxes. The server speed problems will not be fixed with iNTEL, because they come from the OS's context-switching overhead. Pentium D looks good in the lab, but I'm not going to let it eat _my_ lunch in the real world. And I do not want monoculture buffer overflows killing my servers. And Cell should not be a bad option, particularly if Apple's looking at a re-compile anyway. To most developers using Cocoa or Carbon, building a fat binary is painless - it's a matter of checking the right box in Xcode. The problem I'm facing is that for CamelBones, because of the way Perl builds its modules, the transition will be far more painful than it will be for most apps. It's going to be painful basically for everybody who isn't already compiling cross-platform, and, as you point out about Python, painful even with some that are compiling cross-platform.
I'm not seriously considering a switch to Windows or Linux, or anything along those lines. I doubt I'll ever truly and completely abandon CamelBones, either. Basically what I'm considering right now is whether I can continue making CamelBones my primary focus, or whether I should shift it to the back burner for a while and focus on something more likely to help me either find a job or make a living on my own. Well, after all the rant, I have to admit that I hope you can get CamelBones moved onto the new platform okay. Just because I'm convinced it's going to crash and burn doesn't mean everybody should give up on it. -- Joel Rees (A FORTH dreamer imprisoned in a Java world)
[OT] Re: FORTH
I was just wondering what the magic was that you saw in FORTH. My understanding is that it is a very low level language.

Have you ever played with LISP? Think of FORTH as LISP without parentheses underneath everything, except that it never developed enough of a following to develop its own versions of Scheme or Dylan or ... Or perhaps it would make more sense to talk about integrating yacc into C's basic syntax and standard libraries. (Except that doesn't work at all.)

FORTH needed a lot of work, and the current standard misses a lot of points, leaves you stuck with a reverse-Polish C and not-quite-unix libraries, and still no standard object format. Java tries to do what FORTH could have done and almost gets there, but as we all know, that last 20% is where schedules slip and budgets balloon. Whether FORTH could have answered the problems that you run into when you start trying to implement true context-sensitive grammars is something I can't prove without fixing the problems nobody ever fixed in FORTH, but it should work better than languages that can only undo one dimension of context.

On Jun 7, 2005, at 10:15 AM, Joel Rees wrote:

These days, there's very little true innovation going on. I hit that point with MSW3. The first tarnish was in realizing how few other people saw the magic I saw in FORTH. But it was MSW3 that opened my eyes to the fact that there really were a lot of people who really did want Bill Gates or somebody to do their thinking for them.

-- Joel Rees (A FORTH dreamer imprisoned in a Java world)
Re: ActiveState is announcing support for Mac OS X
On 2005.6.8, at 04:24 AM, Lola Lee wrote:

John Delacour wrote:

Very nice and most welcome, though still not as easy as the Windows installation. May I suggest that you include at least the configuration notes in the distribution. Once I had returned to the AS site and found the necessary link, I was able to get 5.8.7 working without any trouble: http://ASPN.ActiveState.com/ASPN/docs/ActivePerl/5.8/install.html#os%20x%20configuration

Well, this info is in the ReadMe note as well as in the installer, I believe. I ran it a couple hours ago and saw the note.

Would it not be possible also to allow the user an option to adopt his current CPAN configuration?

That would be nice, but maybe there's a reason why ActiveState did it this way?

They're pushing their own alternative to CPAN.

-- Joel Rees
If God had meant for us to not tweak our source code, He'd've given us Microsoft.
Re: CamelBones on Intel? Maybe not.
I know what you mean, Sherm. Wish I could send you something to push into the iNTEL Mac world with, but I'm in the same position as you. Hope you can find a place that can see the value in understanding perl from the inside. If Perl 6 moves ahead, perl might go into the embedded world the way java hasn't yet really gone. For me, the computer industry just lost its last little bit of shine. I'm looking for a new career. Any general purpose computers I buy will run AMD since I doubt I'll be able to afford PPC hardware, and I'll be scratching Mac OS X from this old iBook this weekend. Not sure if I'll load Linux or openBSD on it, since it's my server. Jobs is insane. -- Joel Rees Nothing to say today so I'll say nothing: Nothing.
Re: Installing WebService::GoogleHack
On 2005.5.18, at 09:53 PM, Lola Lee wrote:

[...] Now, $! . . . what does this do? I looked in perldebtut and it says that ! means "redo a previous command", but what is the purpose of the $? And, where should I be putting this in, again?

Just so this doesn't get lost in the wash: $! is a special variable. (It has nothing to do with the ! command explained in perldebtut.) The special variable $! contains the text of the error message from the last system error that occurred.

There's a whole swarm of these special variables that use the $ sign with punctuation, including $_ . There are even some that are hashes or arrays, rather than scalars, and begin with % or @ instead of $. For more information, type perldoc perlvar at the command line.
Re: Macperl list false advertising?
On 2005.5.3, at 11:04 AM, Marc Manthey wrote:

On May 3, 2005, at 2:17 AM, Joel Rees wrote:

Mac Perl - OS 7-9 and X discussion

joel, have you heard from apple? it's a small company from cupertino that builds some cool machines. They went from System 7.0 to 10.4, and there was a major change from 9 to osx ;)

Thank you, marc. I do appreciate it when people let me know I have assumed something is more obvious than it is:

http://www.perl.org/community.html
==
... <a href="http://lists.perl.org/showlist.cgi?name=macperl">Mac Perl</a> - OS 7-9 and X discussion ...
==

The macperl Mailing List
Name: macperl
Summary: Main discussion list for MacPerl, the port of perl on Mac OS (Classic)

The reason this comes up is a recent post to the macperl-forum and macperl-modules lists looking for some help compiling MP3::Info, or for someone willing to share a pre-compiled module. I may be wrong, but I'm pretty sure he wanted this list instead of that.

-- Joel Rees
even though much of what I do is not sensible
it does make sense
if you know why ...
Macperl list false advertising?
Anyone know why http://www.perl.org/community.html describes the macperl list as Mac Perl - OS 7-9 and X discussion
Re: keychain
On 2005.4.21, at 09:15 PM, Ken Williams wrote:

Hi Joseph,

In my address book, I've got several of those too. I believe they're certificates from people who have signed their messages. If you don't know them, they're probably on a list you're on.

That's definitely a possibility. It bugs me that Apple lumps things together like this, because there's another possibility as well. If spam comes with a certificate, what do you suppose might happen?

It's a bit of a pain, but I would prefer that the keychain, in its default settings, prompted the user before storing any certificates. I'd also like to be able to set it to prompt before storing addresses as well, but that's something I can live with. When it stores certificates I don't know anything about, the chain of trust tends to have even less to do with me. Some things simply can't be mechanized.

-Ken

On Apr 19, 2005, at 11:31 PM, Joseph Alotta wrote:

Hi Everyone,

I looked at man security, just for kicks, and I dumped the keychains. I was surprised to find email addresses for people who I do not know. I am a single-user powerbook with dial-up 56k access. Is this normal, to have email keychain data for people I do not know? I could post those emails, but in case they're legitimate, I don't want them to get spammed. Any suggestions?

Joe.

On Apr 19, 2005, at 7:47 PM, Ken Williams wrote:

Yeah, check out the 'security' command-line program. I use it in conjunction with Module::Release so that I don't have to type my PAUSE password every time I upload something to CPAN - it just fetches the password from my keychain.

-Ken

On Apr 19, 2005, at 5:01 PM, Larry Landrum wrote:

I need to authenticate users in a perl CGI and was hoping to use the Keychain but can't find a perl way to do that. Has anybody done that before?
Re: Tiger version
As far as perl goes, I always install a parallel perl for my dev stuff, so I don't have to mess with the system install.

If you are anxious for Tiger, you can pay USD 500 for the privilege of testing pre-release versions. If you do a lot of development for the Mac, it's worth the price. I've done it once; I just haven't been able to get involved enough in Mac development to pay for the next one. Found myself wondering what to do with all the CDs, and also found myself wishing I could afford more hardware for running the pre-release software on.
Re: dealing with UTF8 text
On 2005.3.31, at 10:18 AM, Avi Rappoport wrote:

Hi old friends (and new). I'm quite enjoying getting back to scripting, and like Perl a lot, especially with Affrus. While I'm probably inefficient, it's nice to have a language actually designed for text processing (search engine logs, in my case). However, I've got some Unicode issues, and that seems to be platform-specific, so I thought I'd ask here.

Have you done perldoc perlunicode and used that as a lullaby for several afternoon naps in a row? Used the stuff referred to there for a few more afternoon naps? (perldoc always seems to put me to sleep, but if I don't open it up and stare at it in spite of the soporific effect, nothing seeps in at all.)

Have you gone to unicode.org and scanned what they have to offer relevant to the character ranges (languages) you need to be parsing? Have you looked up the traditional encodings for your language/locale, particularly the Microsoft (bleaugh) code pages? (Google or your other favorite search engines can help.)

I've done enough research to know that I should avoid hardcoded counting with positions and use the perl functions which will automatically handle utf8 characters properly.

That's cool.

I'm pretty sure I'm reading in utf8 and comparisons seem to work.

Comparisons can seem to work when the encoding is all off, as long as the input is being munged the same way in all inputs. That doesn't mean it will work for all valid input, however.

What I can't do is generate readable cross-platform output to show my clients.

Nothing necessarily surprising there. It takes quite a bit of tuning your brain to get the code right. (I speak from experience with Japanese encodings. ;)

Even opening the output in BBEdit as UTF8 doesn't convert the codes into properly rendered extended characters, and by the time it gets into Excel on their Windows workstation, all hope is pretty much gone.

BBEdit, IIRC, handles some of the traditional encodings fairly well.
(Does quite well with the Japanese encodings, at any rate.) So if you are opening UTF-8 and it isn't looking right, your output is probably not UTF-8. If you check the options in the file opening dialogs, you may find a way to convert from the actual encoding you're writing out. And/or you should be able to adjust your perl, but we can't help you with that unless we see some code and have some idea what encoding/language/locale you're trying to write out.

Incidentally, in many of the traditional encodings, the basic Latin will be in the same positions (same code points) as UTF-8 Unicode basic Latin.

The stuff that looks like HTML entities is fine when viewed in a browser: &#1575;&#1604;&#1578;&#1593;&#1575;&#1585;&#1601; s&#305;emens And if necessary, I can deliver in HTML. But my logs have characters like this in them: (from BBEdit as UTF8:) atualizao carreo (from BBEdit as Mac Roman) atualizao torunn tmmervold lschen

I can tell they mean something, but I can't figure out how to make them readable. Help?

TIA, Avi
Re: First CGI Setup
Should not try to give people advice at two in the morning.

I said I've set each user's web-facing directories and files to be owned by the user, with the apache user's group. The directories that serve the domain root are owned by the apache user. Directory permissions are read/write/search (rwx) for owner, read/search (r-x) for group, no permissions (---) for others.

And I failed to mention the permissions on the files. Putting the files in the apache user's group allows you to remove the read (static html) and execute (cgi) permissions for others if you want, which shores things up a bit. I don't remember if Apple gives you an apache group, but that's easy enough to add with NetInfo if they don't.

And I said I personally am a bit of a bigot about file extensions. I don't use them except for perl because I don't have to, and because I prefer to have all my cgi in one place. But I should have said I don't use extensions with perl, only with php, because that's the way php is built. (But I don't use php at home, which is kind of ironic. :-/)

My reason for confining executables to a specific set of directories is somewhat related to my reason for not using HFS on web-facing partitions. It allows me a greater level of confidence that I know which files the server is going to expose to the web and how. Less to keep track of. Less chance of mistakenly treating a non-cgi file as an executable, and less chance of spilling source code through some slip in the configuration. (And how's that for trying to keep this on-topic?)
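The scheme above can be sketched like this (using throwaway files from mktemp; in real use you'd first chown each file to the user and the apache group - conventionally "www" on Apple's builds, but check your httpd.conf - so the group bits below apply to the web server):

```shell
# Hedged sketch: demo on temp files standing in for real web content.
# In practice: chown <user>:<apache-group> on each file first.
page=$(mktemp)   # stands in for a static .html file
cgi=$(mktemp)    # stands in for a CGI script

chmod 640 "$page"   # rw-r----- : owner edits, apache group reads, others nothing
chmod 750 "$cgi"    # rwxr-x--- : owner edits/runs, apache group can execute

ls -l "$page" "$cgi"
```

The same idea applies to the directories: 750 gives the apache group the search (x) bit it needs to traverse into them, while others get nothing.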