Re: how to can measure my level of knowledge?

2024-05-31 Thread Rob Coops
Hi William,

Here are my two cents after nearly 30 years of writing code in many
different languages.

Writing in any programming language is much the same as writing in any
other. The easiest way to learn and improve is to learn to break your
problem down into manageable steps. The simplest analogy I can offer is the
old question and answer: how do you eat an elephant? One bite at a time.

Writing code is the process of taking a complex problem and breaking it down
into smaller chunks. Let's say I am asked to take an arbitrary git repository
and list all .txt and .png files in it. I could just sit down and begin
writing, but how would I do all that? I would first do it by hand and record
or memorize the steps I am taking: I would check out the repository, go
through each directory, record all files that have either a .txt or a .png
extension, and provide the list of these files and their path within the
directory, maybe adding their file size as well for good measure.
The next stage is writing down, maybe in pseudo code or just plain text, how
each step will work. Let's take the crawling of the repo as an example of
what this would look like:

   - Put the full path of the directory where the repo was checked out on a
     list of directories to be processed
   - For each directory on the list of directories to be processed:
      - Read the contents of the directory
         - For each file with a .txt or .png extension, record the directory
           name, a '/' and the filename
         - For each directory found, add it to the list of directories we
           need to process
      - Remove the now processed directory from the list of directories to
        be processed
      - Process the next directory on the list of directories to be processed
   - When no more directories are left to process we have collected all
     files with the .txt and .png extension
   - Now we process the list of .txt and .png files and grab whatever
     additional information we need for each file
   - We now should have a structure with the full path of each .txt or .png
     file in the repository as the keys, and the information associated with
     each file as the values
   - All that is left is a simple matter of printing out the structure from
     the previous step, one key at a time

Based on the list of steps above you can see where the loops would be; I
have indented those to make it easy to see which bits would go in a loop.
You now have the general structure of your code figured out, you know where
you will need loops, and you have an idea of what structures you will need
to store your data, etc.

The above is completely language agnostic: you can implement this in any
programming language, all you need to do is figure out how to do each step
one at a time. You might not know how to list the contents of a directory
in your language of choice (in this case Perl), but that is easy enough to
look up. The same goes for finding the attributes associated with a file;
again, this is easy enough to look up. Since you have this step-by-step plan
you can easily figure out how to do each bit and then just string them
together. I often add those steps to my code as comments and add the actual
code below each step, so I can see exactly what my goal is with each step.

Once you have a very first version of the code that produces the desired
results, you stop, walk away, and have another look at it the next day or so
to clean things up. Not a single person writes perfect code from the start;
there is always something missing or something that you would like to add
(comments, for instance). When done you have a working bit of code that you
can explain step by step, as you have set down and understood each step of
it. Having comments in the code is very helpful for future you: when you
look back at that code a year or two from now you will be very happy to see
the comments, so you can understand what you were thinking or how you broke
down the problem back when you were still learning how to code in this
language.

Keeping this code, or even just the portion that does the walking through
the directory structure, in some code repository for future reference will
save you from having to reinvent that wheel in the future. It is highly
likely that at some point a project will need something similar, but this
time it is .pdf files you are looking for, or maybe a user-defined
extension, or all directories, etc.

The most important thing about being a good developer in any language is
your ability to take a complex problem and break it down into parts small
enough that any monkey with a little bit of programming knowledge and a
search engine can figure out how to code each step.
Now as you get more experienced you might find that for certain things you
just know how to do them and you do not need to write it all down, as you
can just do it in your head. But having worked with people that wrote
amazingly complex code from one of the developers

Re: [getting PAUSE account]

2024-01-02 Thread Rob Coops
Just my two cents on this.

There is no sense in putting one language over another if it is for
personal use. In those cases use what you want and what you feel is best
suited to your needs. You might choose Perl because you are familiar with it
and you need to process lots and lots of text for your project. On the other
hand you might want to use PHP because the thing you are trying to do is
extend an existing open source project with some cool function and it
happens to be written in PHP. Maybe Python is your poison of choice because
you want to learn the language a bit better, or you feel that the modules
related to AI are more useful and mature in Python than they are in PHP and
Perl. From that perspective it does not matter what you pick: there is no
one that will tell you you were wrong, and even if you end up telling
yourself that, it is only your own time that you spent going down a path
less fruitful than you had initially expected.

But when you are looking at a business context you are most often forced
into a certain language for reasons well outside of your control. The
project is already written in Ruby, Python or Java. The new project must be
written in Rust because someone upstairs has heard it is the best language
ever and has decided all new projects will be written in Rust, etc. Even if
you are the one to make the choice of language, you are often forced into a
particular direction because there are only 3 people in the organization
that know Smalltalk and there are 300 that are good with Java and C#.
Making the wrong decision can also be very expensive: there is the ongoing
maintenance cost and the cost of hiring people with the appropriate skills,
but also the cost of interfacing with newer technologies in the future. A
language like Perl, which has seen its usage shrink an awful lot over the
years, will be far less likely to have well written and highly performant
solutions for interfacing with the latest and greatest technology, simply
because the number of companies needing this, and the number of individuals
with the required knowledge and time to write those solutions, is far
smaller, making it less likely that this will be done quickly.

As for practicing Perl, the biggest problem that I always found with Perl
when learning the language (long before GitHub and such were around) is that
it was hard to find projects that I could contribute to. But these days,
with GitHub and GitLab for instance, it is so much easier to find an
interesting project to work on. A good place to start would be
https://github.com/topics/perl: these are open source projects that will
welcome contributions. Not all of them will have an issue tracker, I guess,
but those that do make it very easy to see what kinds of things actual users
are asking for or running into. I would suggest not looking at bugs but at
feature requests and seeing if there is anything that you feel makes sense
or could be fun. Pay attention to what the core developers comment on the
request, as they might have reasons for not wanting to implement it; but
usually it is just a lack of time that makes feature requests stay open.
Initially it will seem very daunting to just pick up some random project
and other people's code, and it can be a challenge, but also good fun.

You might feel that you are nowhere near good enough to work on this or
that just yet, but with a little patience and a bit (or a lot) of effort you
will find that there is no problem that you cannot tackle. It is all about
having the interest in, and the time for, completing the task you set
yourself. You might very well find that someone else has implemented a
solution for the same problem by the time you are done with it. That is
great, as you can see how someone else solved the problem and maybe learn
from their efforts, or find that your solution is better because theirs will
not work in this or that case, for instance.

The main thing is to just get stuck in and to have fun doing it. If it is
not fun you picked the wrong project, the wrong task, or maybe even the
wrong language to work in; some languages are far less fun to work in than
others. As a final hint, have a look at the documentation, if there is any,
that describes coding standards: there are enough people with strong
opinions about things like "unless" or bracket placement that even a great
solution might end up getting rejected for not meeting the coding standards
of the project you are working on.



On Mon, Jan 1, 2024 at 5:09 PM Mike  wrote:

>
> That is true.  I don't see the appeal with Python, but
> I have barely dabbled in it.  I code for me and I know
> Perl.  I don't have the motivation to learn a new language.
> Perl works well, so I use it.
>
>
> Mike
>
>
> On 12/25/23 22:05, William Torrez Corea wrote:
>
>
> I am a beginner, I am learning of the book Beginning Perl by Curtis "Ovid"
> Poe. I am learning subroutines.
>
> I want to develop a program or work in a project where the 

Re: Preference

2023-10-30 Thread Rob Coops
In this day and age Kotlin is probably your best bet, but if you take on
Java you are 90% of the way there with regards to Kotlin. As for Perl:
unless you are in a very specific niche, Perl is pretty much dead as a
commercially viable option. There are too few people that are halfway decent
at it, so no company would be silly enough to build a large project on Perl
these days. It lacks a lot of the modern conveniences that Java, for
instance, offers. The chances of finding an up to date library for
interacting with any modern software package are virtually zero, and a lot
of people that are not very good at Perl have written a lot of very ugly
code that is near unreadable and certainly not maintainable, giving the
language a very poor reputation in the overall software development
community.

Now that is not to say Perl cannot still be a great language to learn and
have fun with. There are still areas, albeit niche ones, where you will find
Perl used; certainly in the EDI (Electronic Data Interchange) space, for
instance, Perl is still a commonly sought after skill. And as time
progresses people with these skills will get more and more rare, allowing
those that possess this arcane knowledge to charge an arm and a leg for
their services, much like COBOL developers and mainframe specialists do now.
Keep in mind that Perl was written by a linguist and as such (especially for
English speakers) has a very natural syntax, making it easy to learn. At the
same time, and this is where the ugly code syndrome comes in, it is
extremely flexible with regards to syntax, making it very easy to construct
pretty ugly code. Just like in natural language there are many ways to say
the same thing, and though they are all syntactically correct, most are ugly
or, worse, quite confusing.

As with any language, Perl can be great and quite nicely written, but even
in its heyday those who wrote beautiful Perl code were rare. These days the
majority of people writing Perl for commercial purposes come from other
languages and are forced to write Perl out of necessity rather than desire,
so the quality of their code is often less than they would find acceptable
in their language of choice. So if you do go down the Perl route, expect to
have to deal with ugly code an awful lot and, unfortunately, quite a few
"real" developers that look down on someone who still practices the dark
arts, as they believe Perl to be slow, near impossible to maintain and not a
real programming language.

So my advice is to start with Java if you are planning on working in the IT
world (way more job opportunities, even for very inexperienced people). Do
Perl on the side, and when you have proven yourself as a developer and like
to write in Perl you can always look for a job where you get to practice
the dark arts. Doing it the other way around you will find far fewer
opportunities, often with a demand for quite senior people, as the existing
code is ancient, there is no one else that can read it, and it is used in
some quite critical operation.

On Sun, Oct 29, 2023 at 2:14 AM Claude Brown via beginners <
beginners@perl.org> wrote:

> I’d go with Java, in order:
>
>
>
>- Popularity – there is just more stuff being written in Java.  This
>would indicate the employment options are greater.
>   - https://www.tiobe.com/tiobe-index/
>   - Java currently rates 4th vs Perl at 27th
>
>
>
>- Over the long arc, Java probably has a faster learning curve.  As
>much as I love Perl, it has some rather arcane syntax at times.
>
>
>
> I’m sure you will get many opinions to the contrary on a Perl mailing list
> 
>
>
>
> FWIW, I’d wildly guess the following for my coding time:
>
>- 90% using Perl
>- 9% using PHP/JavaScript for web-pages
>- 0.9% in, say, C++ or C
>- 0.09% in Java or C#
>
>
>
> The rest of the time I slack off.
>
>
>
> Cheers,
>
>
> Claude.
>
>
>
>
>
>
>
> *From:* William Torrez Corea 
> *Sent:* Sunday, October 29, 2023 8:46 AM
> *To:* Perl Beginners 
> *Subject:* Preference
>
>
>
> *CAUTION:* This email originated from outside of the organization. Do not
> click links or open attachments unless you recognize the sender and know
> the content is safe.
>
> What is preferable to learn?
>
>
>
> Java or Perl
>
>
>
> What is the learning curve of this language of programming?
>
>
> --
>
>
> With kindest regards, William.
>
> ⢀⣴⠾⠻⢶⣦⠀
> ⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
> ⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
> ⠈⠳⣄
>
>


Re: Please help: perl run out of memory

2022-04-18 Thread Rob Coops
Hi Wilson,

Looking at the script I see some room for improvement. You currently
declare %hash as a global variable and keep it around forever. With tens of
millions of rows that is quite a large structure to just have sitting around
after you have built the %stat hash. So I would start by limiting the scope
of %hash from global to only until %stat has been filled. Then there is also
the possibility of swapping the key and the value in your stats hash, making
the value an array of item ids or even a string of item ids (which might
make sense if you do not expect to ever print all of them in a list, but
only a top 1000 or so).
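
A minimal sketch of what limiting that scope could look like, based on the
script quoted further down in this mail (the file name and column positions
are taken from there):

use strict;
use warnings;

my %stat;
{
    # %hash only exists inside this block; once %stat has been filled the
    # memory it used can be reclaimed.
    my %hash;
    open my $hd, '<', 'rate.csv' or die $!;
    while (<$hd>) {
        my ( $item, $rate ) = ( split /,/ )[ 1, 2 ];
        $hash{$item}{total} += $rate;
        $hash{$item}{count} += 1;
    }
    close $hd;

    $stat{$_} = $hash{$_}{total} / $hash{$_}{count} for keys %hash;
}   # %hash goes out of scope here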

As for sorting a huge hash in memory, there comes a point where one has to
decide if Perl is the right tool for the job. It can probably do the job,
but much like you can hammer a nail into a wall with a Swiss army knife,
Perl is likely not the best tool for the job here. Looking around I see
mostly people advising to use a DB solution of some description, be that
Berkeley DB, SQLite or a similar relatively simple database solution; you
are likely going to find this a much faster option than using pure Perl for
sorting such large hashes.
The reason why Spark is so much faster at this, and more memory efficient at
the same time, is that it has been designed to handle huge datasets like
this, which very often need to be sorted or counted in some way. Though you
can definitely get the same output using Perl, you will likely find that you
are looking at a nail while holding a Swiss army knife.
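
Purely as an illustration of the database route, a rough sketch using DBI
with SQLite (this assumes DBD::SQLite is installed; the database file,
table and column names are made up):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=rate.db', '', '',
                        { RaiseError => 1, AutoCommit => 0 } );
$dbh->do('CREATE TABLE IF NOT EXISTS rate (item TEXT, rate REAL)');

# Load the CSV once, inside a single transaction.
my $ins = $dbh->prepare('INSERT INTO rate (item, rate) VALUES (?, ?)');
open my $hd, '<', 'rate.csv' or die $!;
while (<$hd>) {
    my ( $item, $rate ) = ( split /,/ )[ 1, 2 ];
    $ins->execute( $item, $rate );
}
close $hd;
$dbh->commit;

# Let the database do the aggregating, sorting and limiting.
my $rows = $dbh->selectall_arrayref(
    'SELECT item, AVG(rate) AS avg_rate FROM rate
     GROUP BY item ORDER BY avg_rate DESC LIMIT 100'
);
print "$_->[0]: $_->[1]\n" for @$rows;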

Just generally speaking, having any variable stick around for any length of
time after you are done with it is bad practice in all languages. In an
interpreted language such as Perl there is no way for an optimization step
to know that you are never going to use this variable again, so Perl is
likely going to hold on to it until the program exits. In a compiled
language the compiler might be able to see that this is the last time the
variable is used and thus evict it from memory on your behalf, but that is
only possible in some cases; in many others such optimisations are not going
to kick in, as the compiler cannot be certain that the function that uses
the variable is never going to be called again, for instance. This is why
global variables are generally a bad idea in all languages, but in
interpreted languages even more so.

Hope that helps a bit,

Rob

On Sun, Apr 17, 2022 at 8:00 PM David Mertens 
wrote:

> I see nothing glaringly inefficient in the Perl. This would be fine on
> your system if you were dealing with 1 million items, but you could easily
> be pushing up against your system's limits with the generic data structures
> that Perl uses, especially since Perl is probably using 64-bit floats and
> ints, and storing the hash keys twice (because you have to hashes).
>
> You could try to use the Perl Data Language, PDL, to create large typed
> arrays with minimal overhead. However, I think a more Perlish approach
> would be to use a single hash to store the data, as you do (or maybe using
> pack/unpack to store the data using 32-bit floats and integers). Then
> instead of using sort, run through the whole collection and build your own
> top-20 list (or 50 or whatever) by hand. This way the final process of
> picking out the top 20 doesn't allocate new storage for all 80 million
> items.
>
> Does that make sense? I could bang out some code illustrating what I mean
> if that would help.
>
> David
>
> On Sun, Apr 17, 2022, 5:33 AM wilson  wrote:
>
>> hello the experts,
>>
>> can you help check my script for how to optimize it?
>> currently it was going as "run out of memory".
>>
>> $ perl count.pl
>> Out of memory!
>> Killed
>>
>>
>> My script:
>> use strict;
>>
>> my %hash;
>> my %stat;
>>
>> # dataset: userId, itemId, rate, time
>> # AV056ETQ5RXLN,031887,1.0,1397692800
>>
>> open HD,"rate.csv" or die $!;
>> while(<HD>) {
>>  my ($item,$rate) = (split /\,/)[1,2];
>>  $hash{$item}{total} += $rate;
>>  $hash{$item}{count} +=1;
>> }
>> close HD;
>>
>> for my $key (keys %hash) {
>>  $stat{$key} = $hash{$key}{total} / $hash{$key}{count};
>> }
>>
>> my $i = 0;
>> for (sort { $stat{$b} <=> $stat{$a}} keys %stat) {
>>  print "$_: $stat{$_}\n";
>>  last if $i == 99;
>>  $i ++;
>> }
>>
>> The purpose is to aggregate and average the itemId's scores, and print
>> the result after sorting.
>>
>> The dataset has 80+ million items:
>>
>> $ wc -l rate.csv
>> 82677131 rate.csv
>>
>> And my memory is somewhat limited:
>>
>> $ free -m
>>totalusedfree  shared  buff/cache
>> available
>> Mem:   1992 152  76   01763
>>1700
>> Swap:  1023 802 221
>>
>>
>>
>> What confused me is that Apache Spark can make this job done with this
>> limited memory. It got the statistics done within 2 minutes. But I want
>> to give perl a try since it's not that convenient to run a spark job
>> always.
>>
>> 

Re: question about perl script

2019-10-30 Thread Rob Coops
This should do:

#!/usr/bin/perl

use strict;
use warnings;

# Remember the first field of every line in file 'a', then print only those
# lines of file 'b' whose first field was not seen in 'a'.
open my $a, '<:encoding(UTF-8)', 'a' or die "Unable to open a: $!";
open my $b, '<:encoding(UTF-8)', 'b' or die "Unable to open b: $!";

my %pair = ();

while ( my $line = <$a> ) {
  my @line = split(" ", $line);
  $pair{$line[0]} = 1;        # record the key from file 'a'
}

while ( my $line = <$b> ) {
  my @line = split(" ", $line);
  if ( $pair{$line[0]} ) {
    next;                     # key was present in 'a', so skip this line
  } else {
    print $line;
  }
}

close $a;
close $b;

A bit simplified of course, and using strict, because as your code grows
it's the only way to stay sane.
Now there are a few issues with this, the main one being that the lookup
table is all kept in memory, so as your files grow you might run into
trouble with memory usage, and you might want to read chunks at a time
rather than the whole file. Also keep in mind that every action (like chomp)
takes time, time that in this case is totally not needed, so stripping those
pointless steps out will help make things go faster, which certainly as the
file sizes grow will make a difference.

Lastly I would suggest adding comments to the code so you can much more
easily hand it over to the next person that might want to understand what
you are doing or how you are doing it, even though you are no longer there
to ask those questions (or after a few years you no longer remember).

Regards,

Rob

On Wed, Oct 30, 2019 at 7:04 AM Uri Guttman  wrote:

> On 10/29/19 10:48 PM, 刘东 wrote:
>
> Dear every one:
> I try to write a perl script to delet the content of file
> carp01_1_both.txt as same as from another file
> carp-carp01_TKD181002053-1_1_sg.txt, so to get a new file from file
> carp-carp01_TKD181002053-1_1_sg.txt but excluding file carp01_1_both.txt.
> However, when I run this scrip, it does not work, and display the
> information as follows:
> ...
> Semicolon seems to be missing at carp01_1_both.txt line 44993.
> Number found where operator expected at carp01_1_both.txt line 44994, near
> "55659 1"
> (Missing operator before  1?)
> Number found where operator expected at carp01_1_both.txt line 44994, near
> "ATCACG55"
> (Do you need to predeclare ATCACG?)
> Number found where operator expected at carp01_1_both.txt line 44994, near
> "55116"
> (Missing operator before 116?)
> syntax error at carp01_1_both.txt line 1, near "979:"
>
>
> it appears that perl is trying to compile one of your data files. show the
> command line where you run your script
>
>
> perl script:
> #!/usr/bin/perl -w
>
> better to use warnings than -w. also use strict is important
>
> open(NAME,"<$ARGV[0]")|| die;
> open(SECON,"<$ARGV[1]")|| die;
> open(SELEC,">$ARGV[2]")|| die;
>
> your die lines should say which file failed to open
>
> uri
>
>


Looking for a lightweight usable websocket server

2015-05-18 Thread Rob Coops
Hi all,

I'm working on a simple (or so I thought) project to build a websocket
server that broadcasts data the server receives via a telnet connection.

The telnet connection is easily set up and messages are simple to relay to
any file handle.

But now comes the harder part.
A websocket server in Perl seems generally to be built to do one thing and
one thing only: respond to incoming data, running in an infinite loop and
attaching event handlers to the data that is received.
At least Net::WebSocket::Server works that way.
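
For illustration, this is roughly the shape I mean (an untested sketch of an
echo/broadcast server written from memory against Net::WebSocket::Server,
so double-check the docs; the port number is just a placeholder):

use strict;
use warnings;
use Net::WebSocket::Server;

# The module runs its own infinite loop and only reacts to incoming events,
# which is exactly the limitation described above.
Net::WebSocket::Server->new(
    listen     => 8080,                # placeholder port
    on_connect => sub {
        my ( $serv, $conn ) = @_;
        $conn->on(
            utf8 => sub {
                my ( $conn, $msg ) = @_;
                # broadcast whatever came in to every connected client
                $_->send_utf8($msg) for $serv->connections;
            },
        );
    },
)->start;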

Could any of you advise me on a better library? I don't mind having to mess
about with Protocol::WebSocket myself, but if there is a good lightweight
library out there (one not forcing me to install tons and tons of other
libraries would be good) I would much rather use that.

Thanks for any help, pointers or tips.


Re: obfuscating code

2013-02-13 Thread Rob Coops
Are we really spiraling into a discussion on the merits of open versus
closed source?

Perl is a scripting language. It was created by a linguist, which is why it
allows for so many different ways to get the code to do the same thing; like
a natural language it is very flexible in many ways. Obfuscating code is not
a good idea for the niche market that Perl inhabits, it is as simple as that.
If you want to do something like, let's say, quantum superpositioning, then
sure, that could be done in Perl. But even the author of that module stated
that it was a pretty silly thing to do in Perl, as the language was simply
not meant to do such things.

Other scripting languages try (wrongly or rightly, that's up to you) to be
everything to all people and for a large part are beginning to lose sight of
the scripting language that they are at their core. More and more these
languages are attempting to compete with the Javas of this world, which in
its own right is trying to take the place of C/C++. Personally I am of the
opinion that most programming languages have a good reason for existing:
there is a problem domain in which they are simply the best tool in the box.
But the idea that a lot of modern languages seem to pursue is to be the only
tool in the box, which in my opinion is totally wrong.

The reason why most Perl people will tell you obfuscation in Perl is simply
not right and should not be done is that the problem domain that Perl
covers is not one in which obfuscation is the way you should go.

If you are inventing something so extraordinary or exceptional that you
absolutely need to protect your code from prying eyes, then you should
wonder whether you should be using Perl for this at all. I know learning
another language is not the most fun or exciting part of writing code, but
sometimes that other language is simply much better suited to the problem
you are trying to tackle.

Regardless of where you stand in the open source vs. closed source
discussion, I think we can all agree that code obfuscation is simply not
something that Perl, the Perl authors, or a rather substantial portion of
the Perl community support. You might therefore want to try and build some
tool that obfuscates your code for you, if you really need it and cannot
change the language you are working in for whatever reason, but I seriously
doubt you will see a large portion of the Perl community supporting your
efforts. Not unlike the tools that turn Perl scripts into binary files ready
to be executed on a platform of your choice (at compile time of course), an
obfuscation tool would be used for sure, but there is no widespread belief
within the Perl community that this is the right thing to do with Perl. The
main argument being that there are better languages to do that in.


On Wed, Feb 13, 2013 at 8:49 PM, Bob McConnell r...@cbord.com wrote:

 People have been selling both Open Source and Free Software for years.
 Both IBM and RedHat are doing very well at it. But they don't always
 require cash or monetary profit as their selling price. You might also want
 to consider this article about the open source economic model.

 http://lxer.com/module/newswire/ext_link.php?rid=180777

 bm

  -Original Message-
  From: Octavian Rasnita [mailto:orasn...@gmail.com]
  Sent: Wednesday, February 13, 2013 12:53 PM
  To: Bob McConnell; Perl Beginners
  Subject: Re: obfuscating code
 
  From: Bob McConnell r...@cbord.com
 
   You cannot obfuscate the input to an interpreter. It has to be in a
 format
   that the interpreter will recognize, which necessarily means that
 people
   can
   also read it. If you really need to hide your source code, you have to
   switch to a compiled language with an actively optimizing compiler.
 
 
  I don't think that a Perl programmer can't hide his source code well
 enough,
  and if he wants to do that, he needs to switch to another language.
 
  If he created a Windows executable nice packaged in a setup.exe installer
  and wants to sell it for $10 - $20, then hiding the source code might
 help.
  If he just says that the users should pay $10 for using that program
  provided as source code, somebody who knows a little Perl could pay for
 it,
  then change the name of the program, eventually do some cosmetic changes
  in
  the source code, package it using ActiveState PDK and sell it as a new
  program that competes with the original one.
  Some may even like to do this to show that they are great programmers and
  that they created an application.
 
  If getting the source code is complicated enough, than those who may want
  to
  duplicate the program may get bored and abandon the idea.
 
  Apple, Microsoft, Oracle, IBM, SAP use to sell proprietary applications
 and
  their financial situation is not too bad. :-)
  I think that if providing everything as open source would have been such
 a
  good idea from the financial point of view, they would have provided all
  their applications as open source for a long time.
 
  Octavian


 --
 To 

Re: obfuscating code

2013-02-12 Thread Rob Coops
Hi Bob,

The problem with obfuscation is that it does not work. No matter how far you
go (all database tables called Tnumber with every column being Cnumber, all
variables single letter things like $a, @a and %b), someone who wants to
will always be able to read it. The only thing that you are going to achieve
is that the people who are unfortunate enough to have to maintain your
product will curse you, reverse engineer the code and write a maintainable
version to replace your code.

I have been supporting various systems for various companies in several
different branches of industry for over 13 years now. At all of these
companies I have seen the same thing: some Perl script built by a Perl
"guru" (often self proclaimed) that is near impossible to read and/or
enormously big, often very difficult to work with in terms of environment
requirements, command line options that have to be in a certain order, and
undocumented command line parameters that might or might not do something.
Every single time I ended up having to replace that code with something much
simpler, cleaner and easier to maintain, because the old code was simply not
able to grow with the company's needs and the previous Perl guy had left a
long time ago.

Obfuscation is not security, it is just a way to get remembered as the guy
that left the code that no one could maintain.

Even switching to a different language will not help; after all, code that
gets executed can be reverse engineered. When you slap enough copyrights
and other wonderful protection on your code, the mere input and resulting
output can still be emulated in any language of your choice. In the end the
beautiful thing about computers is that given a certain program and a
certain input, the result of that program combined with that input will
always be the same, which means there is no way to hide what you are doing,
just to make it harder to figure out, which as I explained above is never a
good idea, and your code will normally outlive you by many years. And the
next person is not going to be singing your praises for making their life
miserable thanks to code that no one can figure out.

Of course other languages that are compiled, for instance, make it a little
harder to read the code because of that compilation step, but this does not
stop people from reading the code, it just means they have to take an extra
step. Which they will if needed, simply because copying a working solution
is always better than inventing the wheel all over again.

The only semi-obfuscation that works a little bit is today's buzzword,
'cloud computing': it allows you to keep your code in your own hands, away
from the customer, and thus better hidden than ever before. Of course this
does not mean that you or I could not, for instance, build our own Facebook
or Google. It will just be a whole lot harder to do, as we are missing one
important bit of information: the input, at least the bit of input that is
running on their servers...
But all in all, when you end up working for Google or for Facebook you can
be certain that the code is not obfuscated, as it would not be feasible to
maintain a code base of that size with so many people if it were obfuscated.

Or, as the Perl help more or less says, obfuscation only works if you simply
delete the code.
All other solutions will simply frustrate those with good intentions and
just make it more interesting for those with bad intentions.

On Tue, Feb 12, 2013 at 7:39 PM, jbiskofski jbiskof...@gmail.com wrote:

 I understand that obfuscating code is not a real detriment to a seriously
 motivated knowledgeable hacker. Yet I still think some security is
 preferable to no security at all. Also I wish this problem could be
 attacked somehow other than suggesting to switch to a different language.


 On Tue, Feb 12, 2013 at 12:32 PM, Bob McConnell r...@cbord.com wrote:

  You cannot obfuscate the input to an interpreter. It has to be in a
 format
  that the interpreter will recognize, which necessarily means that people
  can also read it. If you really need to hide your source code, you have
 to
  switch to a compiled language with an actively optimizing compiler. Then
  only distribute the output from the compiler. Even then there may be
  de-compilers or disassemblers that can reconstruct much of your source in
  readable form.
 
  Bob McConnell
 
   -Original Message-
   From: jbiskofski [mailto:jbiskof...@gmail.com]
   Sent: Tuesday, February 12, 2013 1:30 PM
   To: timothy adigun
   Cc: John SJ Anderson; Perl Beginners
   Subject: Re: obfuscating code
  
   I see everyone is eager to judge this as a terrible idea, its the exact
   same response Ive gotten to this question on mailing lists on IRC.
  
   HOWEVER, I think this can be a valid concern. We are always talking
 about
   how the best way to shine good light on Perl is writing cool stuff in
 it.
  
   Well Ive actually gone out a built a company that does a HUGE LMS in
  Perl,
   its used by over 300K students in Mexico ( 

Re: Date::Manip question

2012-11-08 Thread Rob Coops
On Thu, Nov 8, 2012 at 11:56 AM, Marco van Kammen mvankam...@mirabeau.nlwrote:

  Hi List,


 For a logrotation and cleanup script I want to fill the following
 variables.


 my $current_month =(should be Nov)

 my $current_mont_num =   (should be 11)

 my $previous_month = (should be Oct)

 my $previous_month_num = (should be 10)


 I’ve been looking at the module Date::Manip to get this going, but I can’t
 seem to get it working.


 Any help in the right direction would be appreciated.


 With Kind Regards,

*Marco van Kammen*  Applicatiebeheerder   *Mirabeau | Managed Services*
 Dr. C.J.K. van Aalstweg 8F 301, 1625 NV Hoorn
 +31(0)20-5950550  -  www.mirabeau.nl
 Please consider the environment before printing this email


Hi Marco,

For this I would not bother with Date::Manip but just use time.

Something like the below would do perfectly fine...

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);

my @months = ( 'JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG',
'SEP', 'OCT', 'NOV', 'DEC' );

my $current_month = $months[$mon];
my $current_mont_num = $mon;
my $previous_month = $months[$mon -1];
my $previous_month_num = $mon -1;

print
"$current_month\n$current_mont_num\n$previous_month\n$previous_month_num\n";

Easy enough, and without a module needed it is a lot more portable, without
a lot of work.
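
As a small aside, if all you need is the abbreviated month name, the core
POSIX module can also produce it directly (a one-line sketch; note the
result is locale dependent):

use POSIX qw(strftime);

my $current_month = uc strftime( '%b', localtime );   # e.g. NOV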

Regards,

Rob Coops

Re: Date::Manip question

2012-11-08 Thread Rob Coops
On Thu, Nov 8, 2012 at 12:53 PM, Rob Coops rco...@gmail.com wrote:


 On Thu, Nov 8, 2012 at 11:56 AM, Marco van Kammen 
 mvankam...@mirabeau.nlwrote:

  Hi List,


 For a logrotation and cleanup script I want to fill the following
 variables.


 my $current_month =(should be Nov)

 my $current_mont_num =   (should be 11)

 my $previous_month = (should be Oct)

 my $previous_month_num = (should be 10)


 I’ve been looking at the module Date::Manip to get this going, but I
 can’t seem to get it working.


 Any help in the right direction would be appreciated.


 With Kind Regards,

*Marco van Kammen*  Applicatiebeheerder   *Mirabeau | Managed Services*
 Dr. C.J.K. van Aalstweg 8F 301, 1625 NV Hoorn
 +31(0)20-5950550  -  www.mirabeau.nl
 Please consider the environment before printing this email


 Hi Marco,

 For this I would not bother with Date::Manip but just use time.

 Something like the below would do perfectly fine...

 my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);

 my @months = ( 'JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG',
 'SEP', 'OCT', 'NOV', 'DEC' );

 my $current_month = $months[$mon];
 my $current_mont_num = $mon;
 my $previous_month = $months[$mon -1];
 my $previous_month_num = $mon -1;

 print
 "$current_month\n$current_mont_num\n$previous_month\n$previous_month_num\n";

 Easy enough, and without a module needed it is a lot more portable, without
 a lot of work.

 Regards,

 Rob Coops


Yeah, a minor update... $mon starts counting at 0, so for the current month
number add 1 and for the previous month number do not subtract 1. :-)

my $current_month = $months[$mon];
my $current_mont_num = $mon + 1;
my $previous_month = $months[$mon -1];
my $previous_month_num = $mon;

Re: Fast XML parser?

2012-10-31 Thread Rob Coops
On Wed, Oct 31, 2012 at 5:39 PM, Jenda Krynicky je...@krynicky.cz wrote:

 From: Octavian Rasnita orasn...@gmail.com
  I forgot to say that the script I previously sent to the list also
 crashed Perl and it popped an error window with:
 
  perl.exe - Application Error
  The instruction at 0x7c910f20 referenced memory at 0x0004. The
 memory could not be read.  Click on OK to terminate the program
 
  I have created a smaller XML file with only ~ 100 lines and I ran agan
 that script, and it worked fine.
 
  But it doesn't work with the entire xml file which has more than 200 MB,
 because it crashes Perl and I don't know why.
 
  And strange, but I've seen that now it just crashes Perl, but it doesn't
 return that Free to wrong pool error.
 
  Octavian

 That must be something either within your perl or the
 XML::Parser::Expat. What versions of those two do you have? Any
 chance you could update?


 Jenda
 = je...@krynicky.cz === http://Jenda.Krynicky.cz =
 When it comes to wine, women and song, wizards are allowed
 to get drunk and croon as much as they like.
 -- Terry Pratchett in Sourcery


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



The memory issue is really an issue of the module itself; I have had those
problems as well. The more complex the XML structure, the more memory it
takes up and the faster you will run out. I simply moved on to other
modules, as I could not afford to spend my time trying to figure out a
workaround.

Regards,

Rob Coops


Re: Fast XML parser?

2012-10-29 Thread Rob Coops
On Mon, Oct 29, 2012 at 9:18 AM, Shlomi Fish shlo...@shlomifish.org wrote:

 On Mon, 29 Oct 2012 10:09:53 +0200
 Shlomi Fish shlo...@shlomifish.org wrote:

  Hi Octavian,
 
  On Sun, 28 Oct 2012 17:45:15 +0200
  Octavian Rasnita orasn...@gmail.com wrote:
 
   From: Shlomi Fish shlo...@shlomifish.org
  
   Hi Octavian,
  
  
  
   Hi Shlomi,
  
   I tried to use XML::LibXML::Reader which uses the pool parser, and I
 read
   that:
  
   
   However, it is also possible to mix Reader with DOM. At every point the
   user may copy the current node (optionally expanded into a complete
   sub-tree) from the processed document to another DOM tree, or to
   instruct the Reader to collect sub-document in form of a DOM tree
   
  
   So I tried:
  
   use XML::LibXML::Reader;
  
   my $xml = 'path/to/xml/file.xml';
  
   my $reader = XML::LibXML::Reader-new( location = $xml ) or die
 cannot
   read $xml;
  
   while ( $reader-nextElement( 'Lexem' ) ) {
   my $id = $reader-getAttribute( 'id' ); #works fine
  
   my $doc = $reader-document;
  
   my $timestamp = $doc-getElementsByTagName( 'Timestamp' ); #Doesn't
   work well
   my @lexem_text = $doc-getElementsByTagName( 'Form' ); #Doesn't
 work
   fine
  
   }
  
 
  I'm not sure you should do -document. I cannot tell you off-hand how to
 do it
  right, but I can try to investigate when I have some spare cycles.
 

 OK, after a short amount of investigation, I found that this program works:

 [CODE]

 use strict;
 use warnings;

 use XML::LibXML::Reader;

 my $xml = 'Lexems.xml';

 my $reader = XML::LibXML::Reader->new( location => $xml )
     or die "cannot read $xml";

 while ( $reader->nextElement( 'Lexem' ) ) {
 my $id = $reader->getAttribute( 'id' ); #works fine

 my $doc = $reader->copyCurrentNode(1);
 my $timestamp = $doc->getElementsByTagName( 'Timestamp' );
 my @lexem_text = $doc->getElementsByTagName( 'Form' );
 }

 [/CODE]

 Note that you can also use XPath for looking up XML information.

 Regards,

 Shlomi Fish


 --
 -
 Shlomi Fish   http://www.shlomifish.org/
 List of Text Processing Tools - http://shlom.in/text-proc

 Sophie: Let’s suppose you have a table with 2^n cups…
 Jack: Wait a second! Is ‘n’ a natural number?

 Please reply to list if it's a mailing list post - http://shlom.in/reply .

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



A little late I know but still...

Last year I was asked to process a large number of XML files, 2x 1.6M files,
that needed to be compared on an element by element level and, with some
fuzzy logic, needed to be the same. Things like floating point precision
could change (1.00 = 1) and in some cases data could show up in a different
order (repeating elements for multiple items on an order). The whole idea
was that system A, which took flat text output from a mainframe and
translated this to XML for consumption by a web service, was being replaced
by system B that did the same thing but on an entirely different software
stack.

Of course this needed to go as fast as possible, as we simply could not sit
around for a few days while the computer did its thing. LibXML was my
saviour, and using XPath was the fastest solution. Though it is possible to
do the DOM thing, you end up with the DOM being translated to XPath under
the hood (at least the performance seemed to indicate that). After a lot of
testing, and using pretty much every XML parser I could find, LibXML with
XPath was really the fastest.

If you are going for speed then you will want to avoid any copy operations
you can, and you will want to use references as much as possible. Even
though a memory copy of some 100 bytes is a very fast operation, over a few
million files the little time it takes adds up to a lot longer than you
would like it to.

When you are looking at speed, first and foremost try to avoid anything
that would slow you down. A copy of information is slow, so don't do it if
you can avoid it. A reference to a memory location is slightly harder to
work with in programming but a lot faster. A translation from DOM to XPath
would take you time to do by hand, and it costs the computer time as well,
so if it is pure speed you are after, avoid that too.
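
Purely as an illustration of the XPath route, a rough sketch with
XML::LibXML (untested, with element names borrowed from the example earlier
in this thread):

use strict;
use warnings;
use XML::LibXML;

my $doc = XML::LibXML->load_xml( location => 'Lexems.xml' );

# Pull the values out with XPath expressions; no copying of sub-trees.
for my $lexem ( $doc->findnodes('//Lexem') ) {
    my $id        = $lexem->getAttribute('id');
    my $timestamp = $lexem->findvalue('./Timestamp');
    my $form      = $lexem->findvalue('./Form');
    # ... compare or process the values here ...
}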

If you are sure you are as fast as you can be, add a benchmark to the code
and try individual optimisations that might or might not be faster... you
would be surprised how the Perl internals are sometimes a lot faster with
some operations than with others, even though feeling-wise you would not
have expected this to be the case.
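
The core Benchmark module makes that kind of comparison easy; a minimal
sketch (the two subs are placeholders for whatever variants you want to
compare):

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Run each variant for roughly 5 CPU seconds and print a comparison table.
cmpthese( -5, {
    copy_node => sub { },   # fill in: the version that copies the data
    reference => sub { },   # fill in: the version that works through references
} );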

For my case, as it was a once in every 25 years kind of major change, I
didn't do too much benchmarking, as the code would be discarded at the end
of the project (well, stored in a dusty old SVN repository for others to
reuse and realistically never to be looked at again). I got it to go fast
enough for a

Re: Using alarm

2012-10-26 Thread Rob Coops
On Fri, Oct 26, 2012 at 4:46 PM, Unknown User knowsuperunkn...@gmail.comwrote:

 I have code that goes something like this:


 my $start = time();
 my $delay = 60;
 ...

 while (my $line = <$fh>) {
 my $err;
 ...
 ...
 my $timenow = time();
 if ( $timenow - $start >= $delay )  {
 $start = $t;
 dumpstats($err);
 $err = {};
 }
 }
 ...

 I wonder if it would be possible to replace this loop with a handler
 based on alarm.
 If it is possible, which would be more efficient?

 Thanks

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Possible most certainly, desirable maybe...

Here is the main thing: your loop body can run for any number of seconds, 60
at a minimum but maybe 120 or even more, as your code might not get back to
the time check before then. Using alarm, or any type of module based on this
function, will ensure that you perform the same action a lot closer to your
desired 60 second mark. Using Time::HiRes will allow you to specify
intervals of less than a second, which might be interesting in some cases
(you then use the ualarm call instead of alarm).
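
A rough, untested sketch of the alarm-based version, reusing the names from
the code above (the input file and the dumpstats() body are placeholders):

use strict;
use warnings;

sub dumpstats { print scalar( keys %{ $_[0] } ), " entries collected\n" }   # placeholder

my $err   = {};
my $delay = 60;

open my $fh, '<', 'input.log' or die $!;   # placeholder input

# With Perl's deferred ("safe") signals the handler runs between statements,
# so the dump fires close to every $delay seconds instead of only when the
# clock check at the top of the loop happens to come around.
$SIG{ALRM} = sub {
    dumpstats($err);
    $err = {};
    alarm $delay;      # re-arm for the next interval
};
alarm $delay;

while ( my $line = <$fh> ) {
    # ... collect into $err here ...
}
alarm 0;               # cancel any pending alarm once the loop is done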

Hope that helps a little,

Rob


Re: My script is...

2012-09-14 Thread Rob Coops
On Fri, Sep 14, 2012 at 2:52 PM, jmrhide-p...@yahoo.com wrote:

 I appreciate the helpful input.

 It appears from my server stats that the script in question has only run a
 couple of times a day on average, so that's fewer than a thousand
 instances in
 the past year. I ran LOTS of tests of the script, none of which hung or
 produced
 unexpected output.

 If the output was as expected, then the script had to execute correctly to
 the
 end of the output stage at least. But the only thing after that is a brief
 sub
 that advises the user that they must enable cookies in order to use the
 script.
 If that code were executing, then the output I see when I run the script
 would
 not be as expected.

 The While(1) loops all occur PRIOR to the output stage. All they do is
 generate
 a random integer, make sure it's not the same as the last one, and move on.

 So I still don't get it!

  John M Rathbun MD


Hi John,

Basically everyone on this list has said the same: the script is a mess, it
is hard to read, hard to maintain, and should be cleaned up before we can
really comment on it.

Not to be too rude to your script, but it looks like it was run over by a
truck, repeatedly. You might have applied a good dose of duct tape and
staples, but it is simply not really maintainable anymore. It actually is
bad enough for everyone to say please clean it up before asking for help, as
we just can't read it. And these are for a large part Perl professionals
writing code on a daily basis...

On top of that, it could very well be that your script in your development
environment is running on a different version of Apache or Perl; it could
also very well be that the modules installed in the production Apache
(mod_perl maybe?) are the cause of all these troubles.

In any case I would advise the same as everyone else did before: clean up
the script first so it is a lot simpler to read.
After that, figure out what modules your production Apache is running with
and identify their potential impact on your script, by running these
yourself or by reading the documentation of course (though you will always
have to prove what you read in the documentation in the real world, so a lot
of us are at least tempted to skip the boring reading bit). If none of that
helps (I am reasonably sure it will), have a look at the Perl version your
provider is using and make sure that you are using the same version as
well; there might be a difference there.

Regards,

Rob Coops


Re: hash of arrays sorting

2012-08-23 Thread Rob Coops
On Thu, Aug 23, 2012 at 9:10 AM, Uri Guttman u...@stemsystems.com wrote:

 On 08/23/2012 02:54 AM, Salvador Fandino wrote:


 It's a pity Sort::Maker not in Debian


 There is also Sort::Key, available in Debian testing and unstable, and
 which is usually faster than Sort::Maker and also Sort::Key::Radix, even
 faster when sorting by numeric keys but not available in Debian.

use Sort::Key qw(ukeysort);

my @sorted = ukeysort { /^(\d+)-(\d+)/
      or die "bad key $_";
    $1 * 100 + $2 } @data;


 The 'u' prefix in 'ukeysort' specifies that the sorting key is an
 unsigned integer.


 you are comparing apples and oranges. your module provides numerous
 specialized sort subs (like php does) which means the coder has to remember
 or lookup what each one does. i counted at least 32 similarly names subs
 which is very confusing. having a single sub with a flexible api is a much
 better interface.

 your claim for it being faster needs backing with benchmarks with multiple
 keys. single key sorts are fast with just the sort function as is. since
 your multikey sorts must use a map/sort/map method, again showing how they
 are faster then the GRT would be interesting.

 uri




 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



As said before, it all needs benchmarking to see what is really faster and
how much faster exactly (a 1% increase in time but a much easier interface
or far fewer dependencies could be preferable in some cases). If I had to
deal with this I would probably use the map, sort, map solution proposed
before, as that solution is available on all platforms and therefore highly
portable (in the environment I work in there is a range of Perl versions
from 5.005 to the latest release, on a wide range of operating systems).
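
For reference, a plain map/sort/map (Schwartzian transform) version of the
same kind of sort, using only core Perl; @data here is just made-up sample
input in the "number-number" format used earlier in the thread:

use strict;
use warnings;

my @data = ( '10-2', '2-15', '10-1', '3-7' );

my @sorted =
    map  { $_->[0] }                                    # unwrap the original value
    sort { $a->[1] <=> $b->[1] or $a->[2] <=> $b->[2] } # compare both numeric parts
    map  { [ $_, /^(\d+)-(\d+)/ ] }                     # precompute the two keys
    @data;

print "@sorted\n";   # 2-15 3-7 10-1 10-2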

As for faster or slower, have a look at the size of the hash you are
sorting. If the hash only has a few hundred keys the speed difference might
not even be noticeable (a few ms more or less in a long running process is
not going to be noticed). On the other hand, if there are tens of thousands
of keys, the sorting time is reduced an awful lot, and this sort is the main
time consumer within the overall process... it might be worth installing
the module, even if that is a slightly painful thing in some environments.

Regards,

Rob


Re: How to format the data ??

2012-08-22 Thread Rob Coops
On Wed, Aug 22, 2012 at 4:39 PM, jet speed speedj...@googlemail.com wrote:

 Hi All,

 Please advice me on now to capture the data below in the format as below.

 i thought of using hash, but then the problem is each DisplayDevNum has
 multiple WWN.  some has 4 elements, some has 2. Any other method ?
 Apprecaite your comments.


 i want to caputre in the below format.
 DisplayDevNum and its corresponding WWN=10:00:00:00:c9:c9:xx:xx and
 nickname= x
 DisplayDevNum and its corresponding WWN=10:00:00:00:c9:c9:xx:xx and
 nickname= x

 

 DisplayDevNum=1D:D4
 domainID=7
 devNum=8,888
 List of 4 WWN elements
 WWN=10:00:00:00:c9:c9:89:8c
 nickname=res-abc
 WWN=10:00:00:00:c9:c9:89:8b
 nickname=res-abd
 WWN=10:00:00:00:c9:c9:89:8a
 nickname=res-a33
 WWN=10:00:00:00:c9:c9:89:8i
 nickname=res-34
 DisplayDevNum=1D:D9
 domainID=7
 devNum=8,888
 List of 2 WWN elements
 WWN=10:00:00:00:c8:c9:89:8f
 nickname=res-a1
 WWN=10:00:00:00:c9:c9:89:81
 nickname=res-a33

 Sj


Hi Sj,

The hash idea is a good one, but it is going to be a little bit more complex
than a simple hash. You will first need to record the DisplayDevNum (and
remember it), then find all the underlying information and associate it with
that DisplayDevNum.

The following example does just that:

use strict;
use warnings;
use Switch;

use Data::Dumper;

my $DisplayDevNum_tracker;
my $WWN_tracker;

my %complex_hash;

open my $fh, '<', 'text.txt' or die $!;
while ( <$fh> ) {
 chomp;
 my $line = $_;

 my ( $key, $value ) = split/=/,$line;

 switch ( $key ) {
  case 'DisplayDevNum' { $DisplayDevNum_tracker = $value; $complex_hash{
$value } = {}; }
  case 'domainID'  { $complex_hash{ $DisplayDevNum_tracker
}{'domainID'} = $value; }
  case 'devNum'{ $complex_hash{ $DisplayDevNum_tracker }{'devNum'}
= $value; }
  case 'WWN'   { $WWN_tracker = $value; $complex_hash{
$DisplayDevNum_tracker }{ $value } = {}; }
  case 'nickname'  { $complex_hash{ $DisplayDevNum_tracker }{
$WWN_tracker } = $value; }
 }
}

print Dumper %complex_hash;

In short: open the file, loop over it one line at a time and remove the
linefeed. Then split on the = sign, and use a switch statement to figure out
what type of line you are dealing with; lines you do not care about simply
match none of the cases and are dropped.
If it is a DisplayDevNum line, remember the value. If the key is anything
else, use the remembered DisplayDevNum to associate the new data with it. If
a WWN line is found, remember the value, and once you get to the nickname
associate it with the remembered WWN value.

Printing it all out you get a hash that looks like this:
$VAR1 = '1D:D9';
$VAR2 = {
  'devNum' = '8,888',
  'domainID' = '7',
  '10:00:00:00:c9:c9:89:81' = 'res-a33',
  '10:00:00:00:c8:c9:89:8f' = 'res-a1'
};
$VAR3 = '1D:D4';
$VAR4 = {
  '10:00:00:00:c9:c9:89:8a' = 'res-a33',
  'devNum' = '8,888',
  'domainID' = '7',
  '10:00:00:00:c9:c9:89:8b' = 'res-abd',
  '10:00:00:00:c9:c9:89:8c' = 'res-abc',
  '10:00:00:00:c9:c9:89:8i' = 'res-34'
};

Which is I believe what you are looking for right?

Regards,

Rob


Re: search and replace

2012-08-16 Thread Rob Coops
On Thu, Aug 16, 2012 at 10:55 AM, Gergely Buday gbu...@gmail.com wrote:

 Here is the correct version:

 #!/usr/bin/perl

 $csproj_text = "C:\\build.txt";

 $csproj_text =~ s/\\//g;
 print "$csproj_text\n";

 Notice that in order to put a literal backslash into a perl string,
 you should escape it. In your original program, you have put a \b, a
 bell character into the string.

 - Gergely

 On 16 August 2012 10:48, Irfan Sayed irfan_sayed2...@yahoo.com wrote:
  hi,
 
  i have following code to search single \ and replace it with \\
  but it is not doing as expected:
 
  $csproj_text = C:\build.txt;
 
  $csproj_text =~ s/\\//g;
  print $csproj_text\n;
 
  the output is : Cuild.txt
  instead the output should be : C:\\build.txt
  can someone please suggest, what is the wrong ?
 
  regards
  irfan

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



That, or replace the " with ' and all will be fine.

$csproj_text = 'C:\build.txt';

$csproj_text =~ s/\\//g;
print "$csproj_text\n";

Regards,

Rob


Re: case statement in perl

2012-08-01 Thread Rob Coops
On Wed, Aug 1, 2012 at 9:08 AM, Paul.G medur...@yahoo.com.au wrote:

 The example below is just a test, I need to be able to insert multiple
 values into a command, those values can be either 1, 2 or upto 5.

 Below is closer to the working example, but I will read that document and
 to help make a final decision.

 # Check Free PV's
 operation_CHECKFREEPVS();

 $NEWMIRROR = $MIRROR + $numPV;

 if ($numPV ne 0 && $MIRROR le 5) {
# lvextend
 print $numPV $NEWMIRROR $MIRROR\n;
 switch ($numPV) {
 case 1 { run(/usr/sbin/lvextend -m $NEWMIRROR -s $sourcelv
 $FreePV[0]); }
 case 2 { run(/usr/sbin/lvextend -m $NEWMIRROR -s $sourcelv
 $FreePV[0] $FreePV[1]); }
 case 3 { run(/usr/sbin/lvextend -m $NEWMIRROR -s $sourcelv
 $FreePV[0] $FreePV[1] $FreePV[2]); }
 case 4 { run(/usr/sbin/lvextend -m $NEWMIRROR -s $sourcelv
 $FreePV[0] $FreePV[1] $FreePV[2] $FreePV[3]); }
 case 5 { run(/usr/sbin/lvextend -m $NEWMIRROR -s $sourcelv
 $FreePV[0] $FreePV[1] $FreePV[2] $FreePV[3] $FreePV[4]);
  }
}
# lvsync
run(/usr/sbin/lvsync -T $sourcelv);
logprint Successful $NEWMIRROR mirrors \t\t synced;
 }
 else {
cleanexit (10, FAIL \t\t No Free PV's Available);
 }

 return 0;
 }



 
  From: John W. Krahn jwkr...@shaw.ca
 To: Perl Beginners beginners@perl.org
 Sent: Wednesday, 1 August 2012 4:58 PM
 Subject: Re: case statement in perl

 Paul.G wrote:
  Below is an extract from the perl script, the switch/case statement
 seemed like a simple solution.
 
 
 
  # Mail Program #
 
  operation_CHECKFREEPVS();
  print $numPV \n;
  # print $FreePV[1] $FreePV[0] $numPV\n;
  if ($numPV ne 0 ) {
  switch ($numPV) {
 case 1 { print $FreePV[0] \n; }
 case 2 { print $FreePV[0] $FreePV[1] \n; }
 case 3 { print $FreePV[0] $FreePV[1] $FreePV[2] \n; }
  }
}

 Couldn't you just do that like this:

 if ( @FreePV && @FreePV <= 3 ) {
  print join( ' ', @FreePV ), \n;
  }


else {
  print No PV's available \n;
}



 John
 --
 Any intelligent fool can make things bigger and
 more complex... It takes a touch of genius -
 and a lot of courage to move in the opposite
 direction.   -- Albert Einstein

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/


Why make life so hard? As John suggested, you check the number of
arguments, and as long as it is no more than 5 you simply join them with a
space and you are done with a single line...

No matter what you do you will always have to check the variables that you
are going to push out. Right now I could very well feed in a parameter that
ends in a ; and then do something like *rm -rf /*, which could make a
serious mess of your system...
Personally I would suggest checking that the command flags are valid and do
not contain any funny characters before even considering putting this in a
system call or equivalent.
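
A rough sketch of what I mean, with made-up example values and a whitelist
pattern that you would have to adjust to whatever a valid PV name looks like
on your systems:

use strict;
use warnings;

my @FreePV    = ( '/dev/disk/disk1', '/dev/disk/disk2' );   # example values only
my $NEWMIRROR = 2;
my $sourcelv  = '/dev/vg00/lvol1';

die "Too many free PVs\n" if @FreePV > 5;
for my $pv (@FreePV) {
    # refuse anything that does not look like a plain device path
    die "Suspicious PV name: $pv\n" unless $pv =~ m{^/dev/[\w./-]+$};
}

# One join replaces the whole case block; hand $cmd to run() as in the code above.
my $cmd = join ' ', '/usr/sbin/lvextend', '-m', $NEWMIRROR, '-s', $sourcelv, @FreePV;
print "Would run: $cmd\n";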

Regards,

Rob


Re: read file attributes

2012-07-30 Thread Rob Coops
On Mon, Jul 30, 2012 at 1:24 PM, Irfan Sayed irfan_sayed2...@yahoo.comwrote:

 hi,

 i need to access the attributes of file.
 more precisely, i need to check if the file is digitally signed or not


 for example; if i right click on file, then i need to check if the
 digital signature tab is there or not for that specific file and valid
 certificate is there.

 please suggest if any perl module is available to get this info


 regards,
 irfan


Hi Irfan,

It would help if you indicate the platform you are working on :-) Now I
assume you are working on Windows, and in that case I would guess (never
needed this myself so I can't say for sure) that Win32API::File
(http://search.cpan.org/~chorny/Win32API-File-0.1200/File.pm)
should do the trick. After all it offers a direct interface to the
Windows file API and therefore should be able to provide you with what you
are looking for.

Regards,

Rob


Re: Multiprocessing script

2012-07-26 Thread Rob Coops
On Thu, Jul 26, 2012 at 11:01 AM, Shlomi Fish shlo...@shlomifish.orgwrote:

 Hi Punit,

 a few comments on your code.

 On Thu, 26 Jul 2012 10:17:13 +0530
 punit jain contactpunitj...@gmail.com wrote:

  Hi,
 
  Below is my script where alarm is generated after 15 sec and changes the
  global variable in parent from 0 to 1. Is there any mistake in my
  multiprocess approach with time-based stop ?
 
  #!/usr/bin/perl
  use strict;
  use warnings;
  use Parallel::ForkManager;

 It's good that you're using strict and warnings, but you lack more
 empty
 lines between paragraphs in your code.

 
  my $MAX_PROC=2;
  $|++;
  my $stopFlag=0;

 You should use IO::Handle->autoflush(1) instead of $|++ for clarity.

 
  main();
 
  sub main {
 
  preprocess(@ARGV);
  multiproc($MAX_PROC, @ARGV);
 
  }

 Your indentation is bad. Where is preprocess() defined? Moreover, I should
 note
 that passing @ARGV directly to subroutine smells of lack of separation of
 concerns.

  sub multiproc {
 
  my $proc=shift;

 Your indentation is bad and you need spaces surrounding the =.

  my $argu = $ARGV[0];

 Don't use $ARGV[0] directly:

 http://perl-begin.org/tutorials/bad-elements/#subroutine-arguments

 
  open(USRFILE, $argu) or die "cant open $!";

 1. Don't use bareword file handles.

 2. Use three-args opens.

 3. Don't surround strings with quotations.

 See:

 * http://perl-begin.org/tutorials/bad-elements/#vars_in_quotes

 * http://perl-begin.org/tutorials/bad-elements/#open-function-style

 You also should use a more descriptive and coherent name than USRFILE and
 also
 see:

 http://perl-begin.org/tutorials/bad-elements/#calling-variables-file

  my $pmgr  = Parallel::ForkManager->new($proc);
 
  $pmgr->run_on_finish(
 sub { my ($pid, $exit_code, $ident) = @_;

 Don't put the argument unpacking on the same line as the {.

 print "** $ident just got out of the pool ** with PID $pid and
  parent pid as $$ exit code: $exit_code\n";
 }
   );
 
 $pmgr->run_on_start(
 sub { my ($pid,$ident)=@_;

 Again, and you have a space after the ->.

   print "** $ident started, pid: $pid **\n";
 }
   );
 
  $SIG{ALRM} = sub { $stopFlag = 1; };
  alarm(16);
 
  while(my $user = <USRFILE>) {
  chomp($user);
  print "value of Stop Flag in parent $0 is $stopFlag\n";
  if($stopFlag == 1) {
  print "stop flag has been set\n";
  last;
  }

 Just use if ($stopFlag) here.

  my $id = $pmgr->start($user) and next;
  print "value of Stop Flag in child $id is $stopFlag\n";
  sleep(7);

  $pmgr->finish($user);
}
  print "Waiting for all children to exit\n";

 Again - bad indentation.

 Regards,

 Shlomi Fish

  $pmgr->wait_all_children();
  alarm(0);
  print "All children completed\n";
  }
 
 
  This is what I get output : -
 
  perl mprocess.pl /tmp/list1
  value of Stop Flag in parent mprocess.pl is 0
  value of Stop Flag in child 0 is 0
  ** te...@test.com started, pid: 21777 **
  value of Stop Flag in parent mprocess.pl is 0
  value of Stop Flag in child 0 is 0
  ** te...@test.com started, pid: 21778 **
  value of Stop Flag in parent mprocess.pl is 0
  ** te...@test.com just got out of the pool ** with PID 21777 and parent
 pid
  as 21776 exit code: 0
  ** te...@test.com just got out of the pool ** with PID 21778 and parent
 pid
  as 21776 exit code: 0
  value of Stop Flag in child 0 is 0
  ** te...@test.com started, pid: 21811 **
  value of Stop Flag in parent mprocess.pl is 0
  value of Stop Flag in child 0 is 0
  ** te...@test.com started, pid: 21812 **
  value of Stop Flag in parent mprocess.pl is 0
  ** te...@test.com just got out of the pool ** with PID 21811 and parent
 pid
  as 21776 exit code: 0
  ** te...@test.com just got out of the pool ** with PID 21812 and parent
 pid
  as 21776 exit code: 0
  value of Stop Flag in child 0 is 0
  ** te...@test.com started, pid: 21832 **
  value of Stop Flag in parent mprocess.pl is 0
  value of Stop Flag in child 0 is 0
  ** te...@test.com started, pid: 21833 **
  value of Stop Flag in parent mprocess.pl is 0
  ** te...@test.com just got out of the pool ** with PID 21832 and parent
 pid
  as 21776 exit code: 0
  ** te...@test.com just got out of the pool ** with PID 21833 and parent
 pid
  as 21776 exit code: 0
  value of Stop Flag in child 0 is 1
  ** te...@test.com started, pid: 22030 **
  value of Stop Flag in parent mprocess.pl is 1
  stop flag has been set
  Waiting for all children to exit
  ** te...@test.com just got out of the pool ** with PID 22030 and parent
 pid
  as 21776 exit code: 0
  All children completed
 
  The concern here is I see value of stopflag as 1 for child before the
  parent which is a bit weird:-
 
  value of Stop Flag in child 0 is 1
  ** te...@test.com started, pid: 22030 **
  value of Stop Flag in parent mprocess.pl is 1
 
  Thanks and Regards.



 --
 

Re: How to zip all files which are value from awk command ?

2012-07-26 Thread Rob Coops
On Thu, Jul 26, 2012 at 12:05 PM, Jack Vo jacksvo2...@gmail.com wrote:

 Hi all,

 I need to compress many files in a directory on server. I use awk and
 zip command to compress these files.

 By awk command, I filter theses file :

 *# ls -latr | grep iMAP.med0 | awk '{print $9}'*
 iMAP.med0101_agent.trace.**20120726153046.tar.gz
 iMAP.med0101_agent.trace.**20120726152942.tar.gz
 iMAP.med0107_agent.trace.**20120726154526.tar.gz
 iMAP.med0101_agent.trace.**20120726154741.tar.gz
 iMAP.med0101_agent.trace.**20120726154616.tar.gz
 iMAP.med0101_agent.trace.**20120726154436.tar.gz
 iMAP.med0105_agent.trace.**20120726154555.tar.gz
 iMAP.med0101_agent.trace.**20120726154532.tar.gz
 iMAP.med0101_agent.trace.**20120726154700.tar.gz
 iMAP.med0101_agent.trace.**20120726154720.tar.gz

 I want to compress them to trace_file.zip, and I use the command, but can
 not zip these files. Which parameters or syntax did I wrong ?

 *# ls -latr | grep iMAP.med0 | awk '{ system("zip /tmp/trace_file " $9) }'*
 # ls -latr /tmp/ | grep trace_file
 #


 --
 Thank and best regards,
 Jack Vo

 Hi Jack,

Sorry but how is this related to perl? This is a linux/unix question that
most on this list might be able to answer but that does not mean it belongs
on this list.

Rob.


Re: Sluggish code

2012-06-11 Thread Rob Coops
On Mon, Jun 11, 2012 at 4:31 PM, venkates venka...@nt.ntnu.no wrote:

 Hi all,

 I am trying to filter files from a directory (code provided below)  by
 comparing the contents of each file with a hash ref (a parsed id map file
 provided as an argument). The code is working however, is extremely slow.
  The .csv files (81 files) that I am reading are not very large (largest
 file is 183,258 bytes).  I would appreciate if you could suggest
 improvements to the code.

 sub filter {
my ( $pazar_dir_path, $up_map, $output ) = @_;
croak Not enough arguments!  if ( @_  3 );

my $accepted = 0;
my $rejected = 0;

opendir DH, $pazar_dir_path or croak ("Error in opening directory
 '$pazar_dir_path': $!");
open my $OUT, '>', $output or croak ("Cannot open file for writing
 '$output': $!");
while ( my @data_files = grep(/\.csv$/,readdir(DH)) ) {
my @records;
foreach my $file ( @data_files ) {
open my $FH, '<', "$pazar_dir_path/$file" or croak ("Cannot
 open file '$file': $!");
while ( my $data = <$FH> ) {
chomp $data;
my $record_output;
@records = split /\t/, $data;
foreach my $up_acs ( keys %{$up_map} ) {
foreach my $ensemble_id ( @{$up_map->{$up_acs}{'Ensembl_TRS'}} ){
if ( $records[1] eq $ensemble_id ) {
$record_output = join( \t, @records );
print $OUT "$record_output\n";
$accepted++;
}
else {
$rejected++;
next;
}
}
}
}
close $FH;
}
}
close $OUT;
closedir (DH);
print "accepted records: $accepted\n", "rejected records: $rejected\n";
return $output;
 }

 __DATA__

 TF210ENSMUST0001326SP1_MOUSEGS422
  ENSMUSG00379747148974877149005136Mus musculus
  MUC5AC14570593ELECTROPHORETIC MOBILITY SHIFT ASSAY
 (EMSA)::SUPERSHIFT
 TF211ENSMUST0066003SP3_MOUSEGS422
  ENSMUSG00379747148974877149005136Mus musculus
  MUC5AC14570593ELECTROPHORETIC MOBILITY SHIFT ASSAY
 (EMSA)::SUPERSHIFT


 Thanks a lot,

 Aravind




Hi Aravind,

Even though I don't have time to go into the code and really find out why,
I can instantly see one bad thing; in pseudo code it looks like:
 Loop (1) {
  Loop (2) {
   Loop (3) {
   }
  }
 }
To me that looks like: if I have to loop over 81 files like you said, and
each of the underlying loops has 81 iterations as well, then I am doing
whatever loop (3) does 81x81x81 = 531441 times, whatever loop (2) does 6561
times, and whatever loop (1) does 81 times.

To make this a lot faster, try to pull the loops apart. You are looping
over a directory: simply get the list and stop that loop, then open each
file separately and only start processing its contents once you are sure it
is a useful file. Try to limit as much as possible the construct where you
have a loop inside a loop.
You seem to be parsing the files line by line; as you are saying they are
all very small, you might want to pull them in in one go and then treat
them as one long string rather than processing them line by line.

Then there is one more thing I saw and don't quite understand: you take your
hash and compare every key in it with each and every value in your file.
Why? Why not simply take the value from your file and see if it has a
matching key in the hash (a lot faster than your way)? As you are only
comparing the two values you can simply ask the hash if key X exists; if it
does you process the record, if not you reject it instantly. As a hash is
optimized for exactly this kind of lookup you skip the need to loop over the
entire hash every single time.
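
A sketch of what that could look like with your own variable names (this
assumes you build a small lookup hash keyed by Ensembl ID once, before the
file loop, which is not in your current code):

my %is_wanted;
for my $up_acs ( keys %{$up_map} ) {
    $is_wanted{$_} = 1 for @{ $up_map->{$up_acs}{'Ensembl_TRS'} };
}

# ...then inside the per-line loop a single lookup replaces the two inner loops:
if ( exists $is_wanted{ $records[1] } ) {
    print $OUT join( "\t", @records ), "\n";
    $accepted++;
}
else {
    $rejected++;
}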

In short: if you want speed, writing a loop in a loop in a loop in a loop is
never a good idea. If you have to, a loop in a loop is acceptable; more than
that should be avoided as much as possible.

Regards,

Rob Coops


Re: parsing script help please

2012-05-31 Thread Rob Coops
On Thu, May 31, 2012 at 11:37 AM, nathalie n...@sanger.ac.uk wrote:



 Hi
 I have this format of file: (see attached example)
 1   3206102-3207048 3411782-3411981 3660632-3661428
 2   4481796-4482748 4483180-4483486


 and I would like to change it to this
 1   3206102-3207048
 1   3411782-3411981
 1   3660632-3661428
 2   4481796-4482748
 2   4483180-4483486 .


 I have tried with this script to create an array for each line, and to
 print the first element (1 or  2) with the rest of the line but the output
 don't seem to be right, could you please advise?
 #!/software/bin/perl
 use warnings;
 use strict;
 my $file="example.txt";
 my $in;
 open(  $in , '<' , $file ) or die( $! );
 #open(  $out, '>', "txtout");


 while (<$in>){
next if /^#/;
my @lines=split(/\t/);
chomp;
 for (@lines) { print $lines[0],"\t",$_,"\n"; };


 output
 1   1  i don't want this
 1   3206102-3207048
 1   3411782-3411981
 1   3660632-3661428
 1   i don't want this
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1

 1   1
 1   4334680-4340171
 1   4341990-4342161
 1   4342282-4342905
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1

 1   1
 1   4481796-4482748
 1   4483180-4483486
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1
 1

 1   1
 1   4797994-4798062
 1   4798535-4798566
 1   4818664-4818729
 1   4820348-4820395
 1   4822391-4822461
 1   4827081-4827154
 1   4829467-4829568
 1   4831036-4831212
 1   4835043-4835096

 many thanks
 Nathalie






Hi Nathalie,

Instead of using the split function I would personally go for a regular
expression as it allows for a lot more control over what you want to find.
Here is my solution...

#!/usr/local/bin/perl

use strict;
use warnings;

my $fh;

my %results;

open ( $fh, '<', 'temp.txt' ) or die $!;
while ( <$fh> ) {
 chomp;
 my $line = $_;
 my $rownum = substr($line, 0, 1);

 my @othernumbers;
 while ( /(\d{7}-\d{7})/g ) {
  push ( @othernumbers, $1 );
 }

 $results{$rownum} = \@othernumbers;
}
close $fh;

use Data::Dumper;
print Dumper %results;

This should print the results below:

$VAR1 = '1';
$VAR2 = [
  '3206102-3207048',
  '3411782-3411981',
  '3660632-3661428'
];
$VAR3 = '2';
$VAR4 = [
  '4481796-4482748',
  '4483180-4483486'
];

And this is, I believe, where you wanted to go. Of course you could just
print it directly without the need for the temp variables etc., but I assume
that you want to do something more with the found values than just dump
them on your screen.

Regards,

Rob


Re: Perl help

2012-04-05 Thread Rob Coops
On Tue, Apr 3, 2012 at 11:38 AM, Om Prakash oomprak...@gmail.com wrote:

 Hi all,

 I have some data which is like

 A:12
 B:13
 C: 14

 Now the data is line by line and multiple line with A B C is common though
 12 13 14 is changing, i want to take this data column wise, and the the
 value should come under A B C. Any help will be appreciated.

 Regards.../om



So basically you have an input like this:
A:12
B:13
C:14
B:2
C:54
A:1
A:34
etc...

Right?

And what you want is a way to show:
A - 47
B - 15
C - 68

Right?

OK, let's just assume that that is what you are looking for, as I am not
100% sure based on your email.

Anyway, what you are doing in that case is pulling them apart first, so
you should be able to identify the two sides: the letter and the number
behind it.

use strict;
use warnings;

my %hash;

open my $fh, '<', 'input.file' or die $!;
while ( <$fh> ) {
 chomp;
 my ($letter, $number ) = split ( /:/, $_ );
 # So now we have the letter and number split for every line in the file
 # lets now start counting using the earlier declared hash
 # I'm not bothering with any error handling that I'll leave up to you
 $hash{$letter} += $number;
}

# Now once having looped over all lines we can close the file handle and
close ( $fh );

# To make my life easy I will simply spit out the whole hash in one go and
# let you see if that is what you are looking for, as I have no idea if this
# is actually correct
use Data::Dumper;
print Dumper \%hash;

The result should look kind of like this (the order of the keys might vary):
$VAR1 = {
  'A' => 47,
  'C' => 68,
  'B' => 15
};

Regards,

Rob


Re: Learning perl

2012-03-05 Thread Rob Coops
On Mon, Mar 5, 2012 at 4:35 PM, Shawn H Corey shawnhco...@gmail.com wrote:

 On 12-03-05 10:19 AM, lina wrote:

 Is the books wrote before 2006 a bit older, are there much changes in
 the last 10 years for perl?


 All changes to Perl are available via perldoc.

 `perldoc perl` and search for /delta/.

 `perldoc perldelta` gives the latest.


 --
 Just my 0.0002 million dollars worth,
  Shawn

 Programming is as much about organization and communication
 as it is about coding.

 It's Mutual Aid, not fierce competition, that's the dominate
 force of evolution.  Of course, anyone who has worked in
 open source already knows this.





In general it is a good idea to read the information about what changed
between then and now, as there are some changes that will usually not break
older code but will be the difference between a "well written" bit of code
and an "outdated" bit of code. In general, though, most if not all of the
things you find in a good book from 2006 should still work now in 2012.

As for a good book to begin learning with: Learning Perl
(ISBN: 1-4493-0358-7), also known as the llama book, would be a good place
to start. Websites such as perl.com and perl-begin.org are good resources
to have a look at as well.
Besides that, I personally learned most from finding a problem or a
relatively simple task for which I thought a perl solution would be great.
I then spent time and effort trying to figure out how to get this work done
with perl. Of course this assumes that you have some programming background
already, so you know how to tackle the work in whatever other language; all
you then need to do is figure out the way to do it in perl.

The main thing, especially when you are learning any language, is to have
fun with it. Don't take on too much (you are not going to build the next
Google empire in Perl after a week or so of practice) and keep on asking
questions when you run into something that you just can't get your head
around. That does not always mean asking on this mailing list; it could be
as simple as using Google to find a good explanation that helps you
understand the how or the why. Of course the usually friendly people on
this mailing list are often happy to help.
The best way to get help is to show what you have done so far and to explain
as clearly as possible what it is that you want to do or are expecting the
code to do; the easier it is to understand what you are looking for, the
more likely it is that people will be able to help you.

Regards,

Rob


Re: translate a file

2012-02-17 Thread Rob Coops
On Fri, Feb 17, 2012 at 4:32 PM, lina lina.lastn...@gmail.com wrote:

 Hi,

 I have a file,

  cat try.xpm
 a 1
 b 2
 c 3
 d 4

 abbbcdddb


 I wish to use perl to translate the last line into the numerical value.

 #!/usr/bin/perl

 use warnings;
 use strict;

 open FILE, "try.xpm" or die $!;

 my @line = <FILE>;

 while (<FILE>) {
print $_;
 }

 strangely it print me nothing out,

 Thanks for any suggestions,

 Best regards,




Hi, that's not that strange; look at what you are doing...

#!/usr/bin/perl

use warnings;
use strict;
 # So far so good
open FILE, "try.xpm" or die $!; # Open the file, no problem there, though I
would use a variable instead of a handle like that but I'll leave that up
to others to complain about ;-)

my @line = <FILE>; # Read the entire file into @line

while (<FILE>) { # Read the FILE handle till the end of the file is reached
(you already got there in the previous step so this will instantly return)
   print $_; # Yeah that should work providing that you get into the
loop
}

Of course you could put the cursor back at the beginning of the file after
you shove it all into @line, or you could remove that step. Another option
would be to loop over @line rather than over the file. There is one more
thing to mention here: if you, as it is called, slurp a file into memory as
you do with the my @line = <FILE>; statement, you might, depending on the
size of the file, run into problems with the memory available on your
system. Nowadays most computers have GBs of memory, but a lot of operating
systems limit the amount of memory they will normally allocate to a process,
given that other programs also like to use some of that memory. So it is
usually not a bad idea to have a good think about whether you really need to
shove it all into memory, or whether you could live with processing the data
one line at a time, storing only the information you really need rather than
all the information available to you.
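
To make that concrete, here is a minimal working variant of the last option
(slurp once, then loop over the array); the file name is just the one from
your example:

#!/usr/bin/perl

use warnings;
use strict;

open FILE, "try.xpm" or die $!;
my @line = <FILE>;        # the whole file, one element per line
close FILE;

print $_ for @line;       # loop over the array, not the exhausted handle

# (Or keep your while (<FILE>) loop and rewind the handle first: seek FILE, 0, 0;)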

Regards,

Rob


Re: Executing remote command without using any modules

2012-01-18 Thread Rob Coops
On Wed, Jan 18, 2012 at 5:31 AM, charith charith...@gmail.com wrote:

 Hi All,

  I create following bash script to run some block of commands on remote
 server
  and get information but I try to do same using Perl but I couldn't
  make it so any one can suggest way to get done this using perl ? (please
 without using any modules)

  my .sh...


  ssh -T $LOGIN <<EOI
  cd /x02/oracle/downloads
  find ./ -type d | sed -e 's/[^-][^\/]*\//--/g;s/--/ |-/' > DirectoryStructure.txt
  cat DirectoryStructure.txt
  exit
  EOI

 thanks


The simplest way to do this is to use the exec command (or its relatives
system and backticks); it is not the right way by any means but it should
work most of the time. The main problem you will find is that you cannot do
much error handling: you would basically only be able to see whether your
script executed SSH correctly or not, and that is all. As for the output,
you should be able to capture it, but it would be just plain old flat text;
any other work you might want to do with it you would have to do using
regular expressions etc...
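
To give you an idea, here is a rough, module-free sketch using backticks
rather than exec (exec never returns to the calling script, so backticks or
system are what you would actually reach for if you want the output back);
the login is a placeholder and I left your sed formatting out:

#!/usr/bin/perl

use strict;
use warnings;

my $login  = 'oracle@yourserver';                              # placeholder login
my $remote = q{cd /x02/oracle/downloads && find . -type d};    # the remote command
my $output = `ssh -T $login '$remote'`;

die "ssh exited with status " . ( $? >> 8 ) . "\n" if $? != 0; # about all the error handling you get
print $output;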

All in all it is not a nice solution, but if you really cannot use modules
then you are likely stuck with it. I would advise you to have a look at the
Net::OpenSSH module. I know you said no modules but hear me
out... Net::OpenSSH is a pure perl module that doesn't have any mandatory
dependencies (obviously, besides requiring the OpenSSH binaries). This means
that you could include it as part of your program, as a module that does not
need installing. Of course you will have to go into the guts of the module
and find out how it calls the OpenSSH binaries, and make sure that you deal
with the situation where the binaries are in another location than the one
on your machine, and of course the case where they are not installed at all.
It is going to be a nasty thing, and you will have to solve the same problem
over and over again with every new release of the Net::OpenSSH module you
want to use... You will also need to deal with things like outdated versions
of OpenSSH and other such wonderful situations.
Personally I would not go this route, but it really depends on what you need
to do: if all you need is just a directory listing, shelling out might be
enough; if you are looking for more and need to do proper error handling,
then you will most likely need to find a way to deal with a module anyway.

By the way, if you are using SSH to connect to remote machines, why could
you not simply run this from a single machine? That would make your life a
lot simpler, as getting a module installed on a single machine is a lot less
arguing with the administrator of the server park than getting it installed
on a lot of machines. After all, if you are to use perl but have to live
without modules then what is the point... Perl without modules is like
C/C++ without external libraries: it will work, but it is going to be a
massive pain in the rear and likely a very difficult, slow and error prone
job.

Regards,

Rob


Re: extracting email addresses from a file

2011-11-29 Thread Rob Coops
On Tue, Nov 29, 2011 at 4:13 PM, Rajeev Prasad rp.ne...@yahoo.com wrote:

 hello,

 i am trying to extract email address from a afile, but not quite
 succesful. here is what i have:

 the file:
 myterqlqt qntmrq Prqtesm qltul qzeez Smqik qltulqzee...@jmqil.com 976665
 myterqlqt qntmrq Prqtesm teepqk Mittql teep...@jmqil.com 939383
 Onjole qntmrq Prqtesm lmqrqtm Etqrq cont...@lmqrqteeyqm.orj 9889
 Vijqyqwqtq qntmrq Prqtesm Sitmqrtmq si...@msitmu.in 939775777
 Visqkmqpqtnqm qntmrq Prqtesm Smyqmprqsqt Mqntri 
 mumqnrijmts...@yqmoo.co.in9735566
 Wqrqnjql qntmrq Prqtesm Smqsmi qrjulq smqsmi.qrj...@jmqil.com 99799
 juntur qntmrq Prqtesm Rqvitejq Jqllepqlli rqvte...@jmqil.com 983
 jooty qntmrq Prqtesm Sqtti Kumqr  ys...@jmqil.com 986663,
 West jotqvqri (Eluru) qntmrq Prqtesm Rqm Prqsqt rqmprqsqttujji@yqmoo.com96 59
 Mqncmeriql qntmrq Prqtesm Smqntmilql jqmlotm 
 smqntmilql.jqmlotm@live.com933565 898575
 Kmqmmqm qntmrq Prqtesm Lqksmmqn Rqo jqtipqrtmy jqtipqrt...@jmqil.com
 Kurnool (Nemru Nqjqr) qntmrq Prqtesm lqntulmqi Iqlql mussqin
 limuss...@yqmoo.co.in 986, 8958575, 8958575



 my attempt:
 perl -ple 's/^.*\s(\w*@\w*.\w+).*$/$1/'  file

 my result:
 qltulqzee...@jmqil.com
 teep...@jmqil.com
 cont...@lmqrqteeyqm.orj
 si...@msitmu.in
 mumqnrijmts...@yqmoo.co
 Wqrqnjql qntmrq Prqtesm Smqsmi qrjulq smqsmi.qrj...@jmqil.com 99799
 rqvte...@jmqil.com
 ys...@jmqil.com
 rqmprqsqttu...@yqmoo.com
 Mqncmeriql qntmrq Prqtesm Smqntmilql jqmlotm 
 smqntmilql.jqmlotm@live.com933565 898575
 jqtipqrt...@jmqil.com
 limuss...@yqmoo.co


 please advise how should be my regex?

 thx.



Hi Rajeev,

Now, making a regular expression just for your example list, I would say
this will work...

perl -ple 's/^.*\b(\w+@\w*.\w+)\b.*$/$1/' file

But I have done this before and I know that some people are funny and have
an email address that looks like one of the below examples.

This is crap because of the dots dot...@website.at 234234234, 234523423
This is also crap because of the dots
dot.at@subdomain.website.at 234234234, 234523423

You would get this out of your file:

qltulqzee...@jmqil.com
teep...@jmqil.com
cont...@lmqrqteeyqm.orj
si...@msitmu.in
mumqnrijmts...@yqmoo.co
qrj...@jmqil.com
rqvte...@jmqil.com
ys...@jmqil.com
rqmprqsqttu...@yqmoo.com
jqml...@live.com
jqtipqrt...@jmqil.com
limuss...@yqmoo.co
a...@website.at
at@subdomain.website

Not too good now is it... therefore I would suggest the following, pretty
much foolproof, trick

perl -ple 's/^.*\s(.+@.+.\w+)\s.*$/$1/' file

This way you are saying that you want to capture all data preceded by a
space: then some stuff, an @, some more stuff, a dot and some more stuff.
This should capture pretty much all emails in your file without any real
restrictions on the formatting, making it quite easy to capture everything
that looks like an email address. :-)

Regards,

Rob


Re: extracting email addresses from a file

2011-11-29 Thread Rob Coops
On Tue, Nov 29, 2011 at 4:54 PM, Sheppy R bobross...@gmail.com wrote:

 Couldn't you just use the non-whitespace character to capture everything
 before and after the @ symbol?

 s/^.*\s(\S+@\S+)\s.*$/$1/


 Yes, you could of course, but... this is why I was saying nearly no syntax
checking... the minor check to ensure that you have the . in there helps to
weed out funny non-email addresses like mystuff@someplace.

The biggest problem with email addresses is that the rules for how an email
address can be formatted are so relaxed, and thus so complex, that to the
best of my knowledge not a single person has ever managed to create a 100%
correct regular expression that checks whether a string does in fact match
all criteria of a valid email address. One of the problems is that even if
you were to manage to create such a thing, there are a few possibilities in
the specification that are valid but will not likely be accepted by any
email server or mail client.

Take for instance the following email address:
1111@something...@somewhere.info. This is technically a valid email address,
but I can already tell you that your mail client is likely to choke on it
and the mail server at somewhere.info will not like it much either. This is
the problem with email addresses as they are used, as opposed to the
specification and what that allows.

I would personally try to avoid doing any sanity checking on the emails you
filter out, at least on the first pass... After all, the reason you are
filtering email addresses is that in the end it is better to have a few
non-existing email addresses than to miss a few valid ones. The cost of
missing an email address is an order of magnitude bigger than the cost of
sending out an email that bounces because of a non-existing address.
Therefore I suggest you grab as much as you can on the first pass: anything
that smells like an email address. In the next step you can then filter
further, for instance to only those that have a domain name that your DNS
server can look up. The step after that is to try and mail them and filter
out the ones that bounce, so check your email account for any messages
stating that the receiving mail server does not know the account.
In the end you will have a list of valid, existing email addresses that you
can then spam to no end, or whatever else your intention is with them
;-)
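
As an illustration of that second pass, here is a small sketch that keeps
only the addresses whose domain has an MX record; it assumes the Net::DNS
module is available and the candidate list is whatever your first-pass
one-liner produced:

#!/usr/bin/perl

use strict;
use warnings;
use Net::DNS;

my @candidates = ( 'someone@example.com', 'bogus@no-such-domain.invalid' );  # placeholders
my $res = Net::DNS::Resolver->new;

for my $addr (@candidates) {
    my ($domain) = $addr =~ /@(.+)$/ or next;
    my $reply = $res->query( $domain, 'MX' );                # undef if no MX record found
    print $reply ? "keep   $addr\n" : "reject $addr\n";
}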

Regards,

Rob Coops


Re: $host = shift || $hostname; vs. $host = $hostname;

2011-11-24 Thread Rob Coops
On Thu, Nov 24, 2011 at 7:19 PM, JPH jph4dot...@xs4all.nl wrote:

 I found the script below at http://hints.macworld.com/**
 dlfiles/is_tcp_port_listening_**pl.txthttp://hints.macworld.com/dlfiles/is_tcp_port_listening_pl.txt

 I am trying to figure out what's happening at lines 20-23.
 Why is the author using 'shift ||' and not a plain $host = $hostname;

 Anyone to enlighten me?

 Thanks!

 JP

 ---

  1 #!/usr/bin/perl -w
  2 #
  3 # Author: Ralf Schwarz r...@schwarz.ath.cx
  4 # February 20th 2006
  5 #
  6 # returns 0 if host is listening on specified tcp port
  7 #
  8
  9 use strict;
  10 use Socket;
  11
  12 # set time until connection attempt times out
  13 my $timeout = 3;
  14
  15 if ($#ARGV != 1) {
  16   print usage: is_tcp_port_listening hostname portnumber\n;
  17   exit 2;
  18 }
  19
  20 my $hostname = $ARGV[0];
  21 my $portnumber = $ARGV[1];
  22 my $host = shift || $hostname;
  23 my $port = shift || $portnumber;




I'm not sure what the purpose of this is, but what is happening is simple
enough. You have @ARGV, which contains [ 'A host name', 'A port number' ].
On line 20 you set $hostname = $ARGV[0] = 'A host name' and on line 21 you
set $portnumber = $ARGV[1] = 'A port number'. So far so good; then on line
22 you assign $host the result of shift (which, outside a subroutine, works
on @ARGV) or $hostname, and on line 23 you set $port to the next shifted
value or $portnumber.

Now I am not familiar enough with the author's intent to fully understand
the logic behind this construction, but basically in this setup the value
after the || will only be used in case the shift returns a false value,
leaving $host or $port to fall back on $hostname or $portnumber.

Not much clearer I'm sorry but that is as far as I understand it.

Regards,

Rob


Re: Installing CPAN Params::Validate and DateTime on Mac OS X 10.6.8

2011-10-18 Thread Rob Coops
On Tue, Oct 18, 2011 at 4:20 PM, Madrigal, Juan A j.madrig...@miami.eduwrote:

 Hi All!

 I'm having serious problems trying to install DateTime-0.70 along with
 pre-requisites Params-Validate-1.00
 on Mac OS X 10.6.8 via CPAN.

 I'm using the default install of Perl 5.10 (64bit). What jumps out to me
 is this: Error: no compiler detected to compile 'lib/DateTime.c'.
 Aborting

 I have gcc 4.2 installed and I've even reinstalled Xcode 4.0.2 and no
 luck. Here are the other errors:

 Warning: Prerequisite 'Params::Validate = 0.76' for
 'D/DR/DROLSKY/DateTime-0.70.tar.gz' failed when processing
 'D/DR/DROLSKY/Params-Validate-1.00.tar.gz' with 'make = NO'. Continuing,
 but chances to succeed are limited.
 Building DateTime
 Error: no compiler detected to compile 'lib/DateTime.c'.  Aborting
  DROLSKY/DateTime-0.70.tar.gz
  ./Build -- NOT OK
 Running Build test
  Can't test without successful make
 Running Build install
  Make had returned bad status, install seems impossible
 CPAN: Module::Build loaded ok (v0.38)
 Failed during this command:
  DROLSKY/DateTime-Locale-0.45.tar.gz  : make_test FAILED but
 failure ignored because 'force' in effect
  DROLSKY/DateTime-TimeZone-1.40.tar.gz: make_test FAILED but
 failure ignored because 'force' in effect
  DROLSKY/Params-Validate-1.00.tar.gz  : make NO
  DROLSKY/DateTime-0.70.tar.gz : make NO

 Any ideas?


 What flags would I need to build and compile a separate install of perl
 for Mac OS X 10.6.8 (64bit), say under /usr/local/bin/perl along with cpan?

 Thanks,

 Juan





Hi Juan,

You might have GCC installed but do you have it configured?

Error: no compiler detected to compile 'lib/DateTime.c'.  Aborting

Seems quite clear to me: no C compiler was found. You might want to have
a look at your CPAN settings and see if your compiler is set there. Normally,
assuming you had the compiler installed before running the CPAN setup, it
would automatically detect the compiler. By the sound of it you might have
installed the compiler after running the CPAN setup, in which case it will
simply have no compiler listed and will likely throw an error like this.

The other errors seem to stem from this problem, so resolving that should
most likely fix the rest of the errors as well.

Regards,

Rob


Re: Adding \.JPG$/\.jpg/ to a perlregexp

2011-10-17 Thread Rob Coops
On Mon, Oct 17, 2011 at 9:07 PM, Csanyi Pal csanyi...@gmail.com wrote:

 Leo Susanto leosusa...@gmail.com writes:

  On Mon, Oct 17, 2011 at 11:46 AM, Csanyi Pal csanyi...@gmail.com
 wrote:
  my goal is to get in a directory the following:
 
  PIC1.JPG renamed as Gyermekolimpia_Ujvidek_001.jpg
  PIC2.JPG renamed as Gyermekolimpia_Ujvidek_002.jpg
   ...
  PIC00223.JPG renamed as Gyermekolimpia_Ujvidek_223.jpg
 
 
  So far I get this to work:
 
  rename -n 's/\w\w\w\w\w/sprintf(Gyermekolimpia_Ujvidek_)/e' *.JPG
  PIC1.JPG renamed as Gyermekolimpia_Ujvidek_110.JPG
   etc.
  PIC00223.JPG renamed as Gyermekolimpia_Ujvidek_223.JPG
 
  This is good but I want to make the .JPG extensions lowercase.
 
  I know that the following command do this:
  rename -v 's/\.JPG$/\.jpg/' *.JPG
 
  but I want to know how to add
  \.JPG$/\.jpg/
 
  into the following command?
  rename -n 's/\w\w\w\w\w/sprintf(Gyermekolimpia_Ujvidek_)/e' *.JPG

  s/.+(\d{3}).jpg/Gyermekolimpia_Ujvidek_$1.jpg/i

 rename -v 's/.+(\d{3}).jpg/Gyermekolimpia_Ujvidek_$1.jpg/i' *.JPG

 It works perfectly. Can you explain to me how does it works?

 --
 Regards, Pal
 http://cspl.me





I'll try :-)

Let's split it into its various parts first:
.+
(\d{3})
. (this is not completely correct but we will get to that)
jpg

The first part basically tells the regular expression engine to find '.'
(any character) and to find it as often as it can (+).
The second step finds \d (a digit) exactly 3 times {3}, and it is in
brackets, which means these digits are captured for reuse later (we will get
to that as well). Thanks to the fact that this capture group is here, the
above .+ will not simply swallow all characters but will stop so that this
bit can match the 3 digits in the file name.
The third bit, where I said it is wrong, is the . which matches any one
character after the 3 digits; it should probably have read \. which
restricts it to a literal dot.
Then the last bit is no more than the extension, preventing this from
matching some_file123.txt etc...

The next part is where we deal with replacing the previous file name
with the new one.

The first bit is no more than the new file name: Gyermekolimpia_Ujvidek_
Then $1, which stands for the first capture group, or (\d{3}) in our case,
is placed there.
The last bit adds the extension as .jpg

Then the one trick in all of this is the i at the end of the regular
expression, which tells it to work in case-insensitive mode, so the capture
portion of your substitution will find all files where the last 3 characters
are jpg regardless of how these are written. So jpg or JPG or any other
combination like JpG will be found, providing that the regular expression
engine gets this far; it will stop trying to match as soon as any of the
elements above fails.
Regards,

Rob


Re: how to sort two array in perl

2011-10-14 Thread Rob Coops
On Fri, Oct 14, 2011 at 4:32 PM, Shlomi Fish shlo...@shlomifish.org wrote:

 On Thu, 13 Oct 2011 02:39:52 -0700 (PDT)
 Lemon lemon...@gmail.com wrote:

  Dear all,
 
  I want to sort data set like this
 
  (@a,          @b)
  1,2           1,2
  7,89    =>    2,33
  54,78         7,89
  2,33          54,78
 
 
  I know that linux command sort can do these kind of things by sort -
  k1,1n, but  how can I do it by perl? I could not use hash because
  the first row may be repeat.
 

 If the data set is an array of array references you can use the || to
 chain
 comparisons:

 [CODE]
 my @sorted = sort { ($a->[0] <=> $b->[0]) || ($a->[1] <=> $b->[1]) }
 @a_and_b;
 [/CODE]

 If the two data sets are kept as separate arrays you can sort an array of
 indexes:

 [CODE]
 my @sorted_indexes = sort { ($arr1[$a] <=> $arr1[$b]) || ($arr2[$a] <=>
 $arr2[$b]) } (0 .. $#arr1);
 [/CODE]

 For more information see:


 http://perl-begin.org/tutorials/perl-for-newbies/part4/#page--and_or--sort--PAGE

 Regards,

Shlomi Fish

 --
 -
 Shlomi Fish   http://www.shlomifish.org/
 Escape from GNU Autohell -
 http://www.shlomifish.org/open-source/anti/autohell/

 There is no IGLU Cabal! None of them could pass the Turing test. But
 strangely
 enough a computer program they coded, could.

 Please reply to list if it's a mailing list post - http://shlom.in/reply .




Another good source for explanations and a lot of answers to questions you
are sure to have at some point in time:
http://www.perlmonks.org/?node_id=15209

This one deals exactly with your question and suggests the same solution as
Shlomi does but spends a little bit more time on explaining this. (the link
Shlomi posted explains this as well of course ;-)

Regards,

Rob


Re: Regarding help in Threading

2011-10-06 Thread Rob Coops
On Thu, Oct 6, 2011 at 11:30 AM, Vishal Gupta
vishal.knit2...@hotmail.comwrote:






 Hi,

 I have to write a perl program (Parent script) which does the below 4 tasks
 simultaneously:

 1. Executing a perl script in one shell. (Parent script)
 2. Invoking a thread performing some tasks periodically lets say once in 15
 min, and send a message when any task is completed, to parent shell.
 2. Invoking a thread which is continuously checking the memory and cpu
 usage using top command and inform to parent script if it exceeds 50%.
 3. Doing ls -l for a directory (containing 4 files) per minute and check
 if size of any file increases.
 4. Invoking a shell script which has a execute in every 10 min and
 sending the result each time to file.

 The parent script has to execute continuously for 12 hours.

 Could you please tell me, how can i create this script ? Which concepts of
 Perl should I use for the above script? Please share with me the link, where
 can I get the tutorial/sample code of those concepts?

 Appreciate your help.

 Thanks  regards,
 Vishal



Hi Vishal,

I would say have a look at http://perldoc.perl.org/threads.html which is
what will help you do the trick. In very very basic terms you run a main
loop, which calls 4 different subs one for each task assigning each sub its
own thread. Then the main loop happily does what it name says it loops over
and over checking the output of the threads and kicking of new once at the
time interval your specified.

The whole thing would look a bit like this:


Start task 1
Start task 2
Start task 3
Start task 4
Start schedule thread

Main loop
 Check for output from thread 1
  Schedule a new start of thread 1
  Deal with output form thread 1
 Check for output from thread 2
  Schedule a new start of thread 2
  Deal with output form thread 2
 Check for output from thread 3
  Schedule a new start of thread 3
  Deal with output form thread 3
 Check for output from thread 4
  Schedule a new start of thread 4
  Deal with output form thread 4
loop

Your schedule thread is there to make sure that when it is time to start
one of the threads you can fire it up again. Keep the threads in a hash, so
that when you start a new instance of thread 1 you can simply assign it as
$thread{task1} = <reference to your thread>; this way your code for dealing
with the output from a thread does not have to be very complex in terms of
finding the thread you are looking for.
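
A bare-bones sketch of that structure, with one periodic worker kept in a
hash and a queue for its output; the task body, interval and runtime are
placeholders, not your real tasks:

#!/usr/bin/perl

use strict;
use warnings;
use threads;
use Thread::Queue;

my $output = Thread::Queue->new;               # the threads push their results here

sub start_task {
    my ($name) = @_;
    return threads->create( sub {
        sleep 5;                               # stand-in for "do the real work"
        $output->enqueue("$name finished at " . localtime);
    } );
}

my %thread  = ( task1 => start_task('task1') );
my $stop_at = time + 60;                       # one minute instead of 12 hours

while ( time < $stop_at ) {
    while ( defined( my $msg = $output->dequeue_nb ) ) {   # non-blocking check
        print "main loop saw: $msg\n";
    }
    for my $name ( keys %thread ) {
        if ( $thread{$name}->is_joinable ) {
            $thread{$name}->join;
            $thread{$name} = start_task($name);            # schedule the next run
        }
    }
    sleep 1;
}

$_->join for threads->list;                    # wait for whatever is still running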

I would advise making each of the tasks work on its own as a one-off, then
making the main loop and the starting of a thread work for just one thread,
and growing the complexity from there. A first time working with threads can
be a little complex, and starting with too many of them at once might just
cause you more trouble than it's worth.

Regards,

Rob


Re: Sandboxing while I am learning

2011-08-30 Thread Rob Coops
On Tue, Aug 30, 2011 at 4:44 PM, Marc sono...@fannullone.us wrote:

 Sayth,

  So basically If I want to experiment and toy with different cpan apps
  and so forth I can without messing up my perl install.

 All you have to do is install cpanm for each version of Perl and
 then you don't have to worry about them stepping on each other.  With cpan
 configured for each Perl, I have different environments depending on the
 version and it works great.

This post may help:

 http://blog.fox.geek.nz/2010/09/installing-multiple-perls-with.html

 Marc



Personally I am a big fan of virtual machines for this purpose: I can
install many Perl versions, compile in support for different database
versions combined with different apache versions and mods etc. without
having to even think about what this might do to my Perl installation.

I keep my Perl installation clean and I can break whatever else I want by
messing with a virtual machine image of my machine. Once I am sure that what
I just did works as it should on the virtual machine, I can copy the steps
to my real machine and execute them knowing that they will work. Then all
you do is replace your virtual machine image and you can continue to play
without ever endangering your real machine.

The multiple cpan installations approach is not bad, but it is dangerous in
my opinion. I have seen people go white as a sheet of paper once they
realized that they were not on the test but on the production machine and
had just executed an rm -rf on the application server directory...
The risk of such a simple mistake is even larger when all environments are
on a single machine. Therefore I would personally not advise this, but that
might just be paranoid old me ;-)

Regards,

Rob


Re: Variable Assignment

2011-08-16 Thread Rob Coops
On Tue, Aug 16, 2011 at 8:27 PM, Joseph L. Casale jcas...@activenetwerx.com
 wrote:

 What is the correct way to quickly assign the result of a regex against
 a cmdline arg into a new variable:

 my $var = ($ARGV[0] =~ s/(.*)foo/$1/i);

 Obviously that's incorrect but is there a quick way without intermediate
 assignment?

 Thanks!
 jlc




Hi Joseph,

I would suggest naming your selectors like so: $ARGV[0] =~
s/(?'string'.*)foo/$1/i;

Now you can access this capture by simply using $+{string}. That way you can
identify your captures directly and assign them meaningful names. Certainly
in more complex regular expressions with many values being retrieved this
can be very useful. Of course the drawback is that on the next regular
expression this $+{whatever} will get reset and you will lose whatever you
retrieved, similar to $1 etc...
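
A small demonstration of the idea (this uses a plain match rather than a
substitution, the (?<name>...) spelling which is equivalent to (?'name'...)
on perl 5.10+, and a made-up fallback argument):

use strict;
use warnings;

my $arg = $ARGV[0] // 'SomethingFOO';          # fall back to a made-up value for the demo

if ( $arg =~ /(?<string>.*)foo/i ) {
    my $var = $+{string};                      # copy it out before the next regex resets %+
    print "$var\n";                            # prints "Something"
}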

Regards,

Rob


Re: Perl script to retrieve specific lines from file

2011-08-11 Thread Rob Coops
On Thu, Aug 11, 2011 at 1:10 AM, Uri Guttman u...@stemsystems.com wrote:

  KS == Kevin Spencer ke...@kevinspencer.org writes:

  KS On Wed, Aug 10, 2011 at 4:04 AM, Rob Coops rco...@gmail.com wrote:
   #!/usr/bin/perl
  
   use strict;
   use warnings;
   use File::Slurp; # A handy module check it out at:
   http://search.cpan.org/~uri/File-Slurp-.19/lib/File/Slurp.pm

   KS While handy, be aware that you are slurping the entire file into
  KS memory, so just be careful if you're going to be processing huge
  KS files.

 in general i would agree to never slurp in most genetics files which can
 be in the many GB sizes and up. the OP says the file has up to 10M
 letters which is fine to slurp on any modern machine.

 uri

 --
 Uri Guttman  --  uri AT perlhunter DOT com  ---  http://www.perlhunter.com--
   Perl Developer Recruiting and Placement Services
  -
 -  Perl Code Review, Architecture, Development, Training, Support
 ---




Believe it or not, I actually did count the number of zeros there ;-)

I know that bio data tends to be rather large, but looking at the size I
figured it cannot hurt... though indeed, if you are going for something more
substantial, you will want a different method of reading the file, one that
reads it in chunks of 2MB at a time or so. Of course if you are pulling out
only characters X to Y and you are certain that there is nothing but normal
characters in the file, you could simply start reading the file at point X
and continue to Y; there is no need to loop over the whole thing 2MB at a
time. But beware that making such assumptions will always lead to failure at
some point, as there will always be one file that contains something you
didn't expect. Even if that file does not show up in testing, in a few years
and after a few hundred thousand files you will at some point run into one.
(It is the simple principle of increasing your sample size: eventually you
will find an outlier in there.)
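
For completeness, a sketch of that "start at X, read to Y" idea with seek
and read instead of slurping; the file name and offsets are placeholders:

use strict;
use warnings;

my ( $start, $end ) = ( 100, 150 );               # extract characters 100..149
open my $fh, '<', 'dna.txt' or die "open: $!";
seek $fh, $start, 0 or die "seek: $!";            # jump straight to position X
read $fh, my $chunk, $end - $start;               # read Y - X characters from there
close $fh;
print "$chunk\n";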

Regards,

Rob


Re: Perl script to retrieve specific lines from file

2011-08-10 Thread Rob Coops
On Wed, Aug 10, 2011 at 12:43 PM, VinoRex.E vino...@gmail.com wrote:

 Hi every one
 i am a Biologist,i am having a DNA file having 2000 to 1000 letters. eg
 file:(ATGCATGCTAGCTAGTCGATCGCATCGATAGCTAGCTACGCG
 CGTACCGTGCAGAAGAGCAGGACATATATATTACGCGGCGATCGATCGTAGC

 GATCGATCGATCGCTAGCTGACTATGCATGCTAGCTAGTCGATCGCATCGATAGCTAGCTACGCGCGTACCGTGCAGAAGAGCAGGACATATATATTACGCGGCGATCGATCGTAGCGATCGATCGATCGCTAGCTGACTATGCATGCTAGCTAGTCGATCGCATCGATAGCTAGCTACGCGCGTACCGTGCA
 GAAGAGCAGGACATATATATTACGCGGCGATCGATCGTAGCGATCGATCGA
 TCGCTAGCTGACTATGCATGCTAGCTAGTCGATCGCATCGATAGCTAGCTACGCGCGTACCGTGCAGAA

 GAGCAGGACATATATATTACGCGGCGATCGATCGTAGCGATCGATCGATCGCTAGCTGACTATGCATGCTAGCTAGTCGATCGCATCGATAGCTAGCTACGCGCGTACCGTGCAGAAGAGCAGGACATATATATTACGCGGCGATCGATCGTAGCGATCGATCGATCGCTAGCTGACT)
 i calculated the total length of the sequence. what i want to execute is to
 extract and show the output only the specific(ie highlighted ones only).
 thank you



Hi there,

Here is what I suggest you do:

First, always use strict and warnings; second, show us what you have done so
far. That will show people that you are actually trying and not just asking
others to do your homework for you. ;-)

As for solving the problem here is what I would do:

#!/usr/bin/perl

use strict;
use warnings;
use File::Slurp; # A handy module check it out at:
http://search.cpan.org/~uri/File-Slurp-.19/lib/File/Slurp.pm

my $file_contents = read_file( 'your file' ); # Reads the whole file in one
# go; please note that this includes the linefeeds etc... you might or might
# not need to strip these out depending on how clean the files are.

my $total = length( $file_contents );

my $substring = substr $file_contents, 0, 10;

print "Total number of characters: $total\nSub string: $substring\n";


That should do the trick; of course, change the 0, 10 to the starting
position of the substring and the number of chars you would like to have
returned. The next step for you would be to loop over the files, or to allow
command line or interactive feeding of the filename/path and the start
point/length of the substring you want to get out of the file.

Regards,

Rob


Re: AES Encryption

2011-08-04 Thread Rob Coops
On Thu, Aug 4, 2011 at 8:23 AM, SDA sday...@gmail.com wrote:

 I'm interested in using AES encryption in Perl.  I've tried installing
 Crypto from CPAN, but can't get it to work.  Any ideas? Modules...





"Can't get it to work" is a very ambiguous description of a problem with a
module. The Crypto module is pretty much what you will be advised to use if
you are planning on doing cryptography in perl. If you provide the errors
you are getting, the OS and the perl version, then it is quite likely that
someone on the list will provide you with an explanation of what is wrong,
why it is wrong and how it can be corrected...
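
For what it is worth, one commonly suggested combination for AES in Perl is
Crypt::CBC on top of Crypt::Rijndael; that is only an assumption on my part
since you did not say which distribution you tried, but a minimal round trip
looks like this:

#!/usr/bin/perl

use strict;
use warnings;
use Crypt::CBC;                                   # needs Crypt::Rijndael installed as well

my $cipher = Crypt::CBC->new(
    -key    => 'a passphrase, not a real key',    # placeholder secret
    -cipher => 'Rijndael',                        # i.e. AES
);

my $ciphertext = $cipher->encrypt('some secret data');
my $plaintext  = $cipher->decrypt($ciphertext);
print "round trip ok\n" if $plaintext eq 'some secret data';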


Regards,


Rob


Re: substring first 100 words from a string in perl

2011-07-28 Thread Rob Coops
On Thu, Jul 28, 2011 at 3:23 PM, Khabza Mkhize khabza@gmail.com wrote:

 I want to substring words, I might be using wrong terminology. But I tried
 the following example the only problem I have it cut word any where it
 likes. eg  breathtaking on my string is only bre.


  $string = This is an awe-inspiring tour to the towering headland
 known as Cape Point. Magnificent beaches, breathtaking views, historic and
 picturesque coastal ;

 $rtioverview = substr ( $string ,  0 , 100 );

 Reults = This is an awe-inspiring tour to the towering headland known
 as Cape Point. Magnificent beaches, bre;


 any one can help to solve this problem please?

 --

 Developer
 Green IT Web http://www.greenitweb.co.za
 http://www.greenitweb.co.za



Using a regular expression can help here...

#!/usr/local/bin/perl

use strict;
use warnings;

my $string = This is an awe-inspiring tour to the towering headland known
as Cape Point. Magnificent beaches, breathtaking views, historic and
picturesque coastal ;

my $num_words = 5;
if (my ($words) = $string =~ /((?:\w+(?:\W+|$)){$num_words})/) {
 print $words;
}

That should do the trick... you can then very simply pick the number of
words you want to return.
Another option is to split the thing into the various words and stick that
into an array:

my @words = split /\W+/, $string;

It depends on what you want to do with it of course... also consider doing
the following:

if (my ($words) = $string =~ /((?:\w+(?:\W+|$)){1,$num_words})/) {
 print $words;
}

Which will return all words from 1 up to the requested number, so if your
input string does not contain the full 100 or 5 or whatever other number of
words you are looking for, you at least get the words that are found.


Re: Increment UID Field

2011-07-20 Thread Rob Coops
On Wed, Jul 20, 2011 at 7:24 PM, Overkill overk...@sadiqs.net wrote:

 Greetings,

 I'm trying to increment the UID field of the unix password file from an csv
 file.  I've tried to insert C style increment and it keeps bomping out.
  What would be the logic to increment the 5009 to increment by one?  Thanks
 for any help.

 -Overkill

 #!/usr/bin/perl

 #use strict;
 #use warnings;

   while (<DATA>) {
   chomp;
($first, $last, $username) = split(",");
  print "$username:x:5009:4001:$first $last:/home/$username:/bin/false\n";
  }
  exit;

 __DATA__
 Bob,Ahrary,bahrary
 Jill,Anderson,janderson




 #!/usr/bin/perl

use strict;
use warnings;

my $userid = 5009;
  while (<DATA>) {
  chomp;
   my ($first, $last, $username) = split(",");
 print "$username:x:$userid:4001:$first $last:/home/$username:/bin/false\n";
 $userid++;
 }
 exit;

__DATA__
Bob,Ahrary,bahrary
Jill,Anderson,janderson

That should do the trick... you could go for
print "$username:x:" . $userid++ . ":4001:$first $last:/home/$username:/bin/false\n";
as well if you like to keep it shorter, or change $userid++; to
$userid += 1; or even $userid = $userid + 1; depending on your personal
preference. The result is adding 1 to the $userid variable, which is what
you are looking to do :-)

Regards,

Rob


Re: Increment UID Field

2011-07-20 Thread Rob Coops
On Wed, Jul 20, 2011 at 9:53 PM, Shlomi Fish shlo...@shlomifish.org wrote:

 Hi Overkill,

 On Wed, 20 Jul 2011 13:24:06 -0400
 Overkill overk...@sadiqs.net wrote:

  Greetings,
 
  I'm trying to increment the UID field of the unix password file from an
  csv file.  I've tried to insert C style increment and it keeps bomping
  out.  What would be the logic to increment the 5009 to increment by
  one?  Thanks for any help.
 
  -Overkill
 

 don't parse CSV using regular expressions - use Text::CSV :

 http://perl-begin.org/uses/text-parsing/

 Regards,

Shlomi Fish

  #!/usr/bin/perl
 
  #use strict;
  #use warnings;
 
  while (<DATA>) {
  chomp;
   ($first, $last, $username) = split(",");
 print "$username:x:5009:4001:$first
  $last:/home/$username:/bin/false\n";
 }
 exit;
 
  __DATA__
  Bob,Ahrary,bahrary
  Jill,Anderson,janderson
 
 



 --
 -
 Shlomi Fish   http://www.shlomifish.org/
 What Makes Software Apps High Quality -  http://shlom.in/sw-quality

 The prefix “God Said” has the extraordinary logical property of converting
 any
 statement that follows it into a true one.

 Please reply to list if it's a mailing list post - http://shlom.in/reply .




I'm willing to dispute that. :-)

For the simple reason that, as long as you control the input to the CSV file
and can be certain that no UTF-8 characters or any other mess sneak in, a
simple split using the /,/ regex will be faster and much simpler than the
Text::CSV module. In any situation where you do not control the input, or
might not have control over it in the future, you should definitely use the
module though.

Based on the current data set, however, this module is pure overkill...

There is a fine line between a module being useful and being overkill. If
you are creating a script that you might want to use in the future or share
with colleagues etc., then you should definitely use modules, as they
usually cover all kinds of corner cases that you are unlikely to even think
of, let alone cover in your code. But there are situations where that is
simply not needed. A simple example is a script I once wrote for an ISP to
deal with a cable company reorganizing their network; the input, though not
controlled by me, was bound by a lot of rules, and I could without any
worries assume all kinds of things about my input based on the business
rules governing the naming of nodes, bundles and locations in the network.
Of course these situations are not very common, but in case you do encounter
one it can be very handy to ignore the modules and go for a much more
limited but much more strict way of working that throws an error on every
corner case that should not happen based on the business rules. (As long as
you are prepared to get called out of bed at 3 at night when an operator
informs you that, due to a decision somewhere, a little change was made in
the naming conventions and your script is rolling over and playing dead
because it was not informed about this...)

In general I do agree with the usage of modules, since in the end the
modules normally have seen many iterations and usually cover far more
situations than the ones you can think of based on your typical working
situation... which means you can be much more certain your script will
survive even if, in a few years from now, your company moves to China and
wants to use Chinese names rather than Latin names; which would most likely
break your current script, by the way :-)
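
For comparison, this is roughly what the same loop looks like with the
module Shlomi suggests (assuming Text::CSV is installed; the field layout is
the one from the __DATA__ above):

use strict;
use warnings;
use Text::CSV;

my $csv    = Text::CSV->new( { binary => 1 } ) or die Text::CSV->error_diag;
my $userid = 5009;

while ( my $line = <DATA> ) {
    chomp $line;
    $csv->parse($line) or next;                   # skip lines Text::CSV cannot parse
    my ( $first, $last, $username ) = $csv->fields;
    print "$username:x:$userid:4001:$first $last:/home/$username:/bin/false\n";
    $userid++;
}

__DATA__
Bob,Ahrary,bahrary
Jill,Anderson,janderson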

Regards,

Rob


Re: error when installing WWW::Mechanize

2011-07-19 Thread Rob Coops
On Tue, Jul 19, 2011 at 11:34 AM, Agnello George
agnello.dso...@gmail.comwrote:

 
  Suggestions would be to install perlbrew and use it to install a more
 recent
  perl or just plain to force install, since everything else seems to be in
  working order.
 

 i did a force install  and gave me the following output

 Test Summary Report
 ---
 t/local/click_button.t (Wstat: 0 Tests: 19 Failed: 0)
  TODO passed:   15-17, 19
 t/local/nonascii.t (Wstat: 65280 Tests: 4 Failed: 0)
  Non-zero exit status: 255
  Parse errors: Bad plan.  You planned 5 tests but ran 4.
 Files=54, Tests=589, 24 wallclock secs ( 0.23 usr  0.04 sys +  7.04
 cusr  0.82 csys =  8.13 CPU)
 Result: FAIL
 Failed 1/54 test programs. 0/589 subtests failed.
 make: *** [test_dynamic] Error 255
   JESSE/WWW-Mechanize-1.68.tar.gz
   /usr/bin/make test -- NOT OK
 //hint// to see the cpan-testers results for installing this module, try:
  reports JESSE/WWW-Mechanize-1.68.tar.gz
 Running make install
 Installing /usr/lib/perl5/site_perl/5.8.8/WWW/Mechanize.pm
 Installing /usr/lib/perl5/site_perl/5.8.8/WWW/Mechanize/Examples.pod
 Installing /usr/lib/perl5/site_perl/5.8.8/WWW/Mechanize/Image.pm
 Installing /usr/lib/perl5/site_perl/5.8.8/WWW/Mechanize/Cookbook.pod
 Installing /usr/lib/perl5/site_perl/5.8.8/WWW/Mechanize/Link.pm
 Installing /usr/lib/perl5/site_perl/5.8.8/WWW/Mechanize/FAQ.pod
 Installing /usr/share/man/man1/mech-dump.1
 Installing /usr/share/man/man3/WWW::Mechanize::Cookbook.3pm
 Installing /usr/share/man/man3/WWW::Mechanize.3pm
 Installing /usr/share/man/man3/WWW::Mechanize::Link.3pm
 Installing /usr/share/man/man3/WWW::Mechanize::Examples.3pm
 Installing /usr/share/man/man3/WWW::Mechanize::FAQ.3pm
 Installing /usr/share/man/man3/WWW::Mechanize::Image.3pm
 Installing /usr/bin/mech-dump
 Appending installation info to
 /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/perllocal.pod
  JESSE/WWW-Mechanize-1.68.tar.gz
  /usr/bin/make install  -- OK
 Failed during this command:
  JESSE/WWW-Mechanize-1.68.tar.gz  : make_test FAILED but
 failure ignored because 'force' in effect

 ..
 would there be a issue when using the module in the future ..humm

 --
 Regards
 Agnello D'souza

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



There might be, but since the failures are reported in the non-ASCII part of
the tests it is not very likely. I would suggest making a mental note of this
and, should you run into trouble later on, either see about upgrading Perl
(if possible) or work out what the failing tests are testing and why they
fail. You can often avoid problems if you know exactly what causes them.

Personally I usually do not bother upgrading Perl unless there is some
serious error that's blocking me. The main reason is that you will have to
go over all your scripts and code to make sure that all of them still work
as intended. Even though there should be no difference it is a dangerous
assumption that things will always work as advertised. :-)

Regards,

Rob


Re: Using perl to send sms through siemens tc65

2011-07-18 Thread Rob Coops
On Mon, Jul 18, 2011 at 8:38 PM, Shlomi Fish shlo...@shlomifish.org wrote:

 Hi Matevž ,

 On Mon, 18 Jul 2011 15:21:09 +0200
 Matevž Markovič ivwcorporation.mat...@gmail.com wrote:

  Hy!
 
  First, I want to say hy to everyone! I am a fresh perl programmer (I
  learned some things in the past few days), who is happy to get to know a
 new
  language.
  What is my problem? Actually, I did not know where to post this, so I am
  just posting on this group (because I am after all a perl beginner). I
 work
  in a company, where they want me to configure a linux centos server, that
  would send and receive sms messages through the siemens tc65 terminal
  siemens mc65 receiver. My first goal is to get the terminal to send a sms
  message. I connected the tc65 to the computer and used minicom to check,
  whether the tc65 responds to AT commands.
  Well, it responded, but there is also my problem; I do not know, how to
  translate that into a perl application. I thought that a module like
  GSM::SMS would help me, but without any knowledge about such things, I
  cannot do much at this point.
 
  Can you please help me?
 

 Well, if you can do it manually using minicom, you can also probably do it
 in
 Perl using a serial port module:

 http://metacpan.org/release/Device-SerialPort

 I'm not familiar with GSM::SMS and have never sent SMS messages using Perl,
 so
 I cannot help you further, but it may provide a better interface than
 working
 with the serial port directly.

 To properly learn Perl, see http://perl-begin.org/ .

 Regards,

Shlomi Fish

  Matevž Markovič



 --
 -
 Shlomi Fish   http://www.shlomifish.org/
 Freecell Solver - http://fc-solve.berlios.de/

 rjbs sub id { my $self = shift; $json_parser_for{ $self }
-decode($json_for{ $self })-{id} } # Inside‐out JSON‐notated objects

 Please reply to list if it's a mailing list post - http://shlom.in/reply .

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Having had a quick look at the GSM::SMS module, the intro says: *Out of the
box, it comes with a serial transport*, and a bit later: *A good start is to
read the docs for GSM::SMS::NBS*
(http://search.cpan.org/~johanvdb/GSM-SMS-0.162/lib/GSM/SMS/NBS.pm).

To me it looks like this module will do what you want, but to be honest I
have no idea if it will, as I have never used it. I would suggest having a
look at the example page; it looks quite straightforward from the examples I
have seen.

Still not much help, I know, but the good thing is that you have found your
way to CPAN and found a module that seems to do what you want. Now the only
thing left is to give it a try and see what it does. In the worst case you
can always do as Shlomi suggested and have a go at the serial port interface
yourself, but using a purpose-built module will make your life a lot easier
in most cases.
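
For what it is worth, a rough sketch of the serial-port route Shlomi
mentioned (the port name and line settings are assumptions; adjust them for
your TC65 setup):

use strict;
use warnings;
use Device::SerialPort;

# Port name and line settings are assumptions; check your TC65 configuration.
my $port = Device::SerialPort->new('/dev/ttyUSB0')
    or die "Cannot open serial port: $!";
$port->baudrate(115200);
$port->databits(8);
$port->parity('none');
$port->stopbits(1);
$port->write_settings;

$port->write("AT\r");            # the same command you tried in minicom
sleep 1;
my ( $count, $reply ) = $port->read(255);
print "Modem replied: $reply\n" if $count;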

Regards,

Rob Coops


Re: XML suggestions

2011-07-13 Thread Rob Coops
On Wed, Jul 13, 2011 at 3:27 AM, Feng He short...@gmail.com wrote:

 2011/7/13 Doug Bradbury dbradb...@8thlight.com:
  Does anyone have suggestions for an active and easy to use XML building 
 parsing module?
 

 For a simple XML parsing I have used the module XML::Simple which just run
 well.

 Regards.

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Having recently done some work with various XML libraries in Perl, I have to
recommend XML::LibXML. The documentation is, how to put it, not fantastic...
but once you get past that you are looking at probably the fastest and most
powerful library for XML parsing available in Perl.

It really depends on what you are looking to do, though. If you are aiming
for just a simple thing with only a few messages, then don't worry
about XML::LibXML and go for XML::Simple, which is more than enough in most
simple cases ;-)
I found that both XML::Twig and XML::TreeBuilder are great if you are looking
at modifying the XML (adding extra nodes, altering existing nodes), but since
they build a full tree structure for the XML file you are working with,
depending on the size and complexity of the file this can cause serious
memory bloat and even out-of-memory errors.

Personally I think that if you can stomach getting past the somewhat
difficult XML::LibXML documentation, it will give you the best and fastest
way of working with XML files. But I have to be honest: it took me several
years before I found a project where I needed the speed and power of LibXML.
If you have the time, have a go at it; you will enjoy working with it once
you know how.
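
As a taste of XML::LibXML, a minimal sketch (the file name and XPath
expression are made up for illustration):

use strict;
use warnings;
use XML::LibXML;

# File name and XPath expression are only examples.
my $doc = XML::LibXML->load_xml( location => 'books.xml' );
for my $node ( $doc->findnodes('//book/title') ) {
    print $node->textContent, "\n";
}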

Regards,

Rob


Re: Need help for interview

2011-07-08 Thread Rob Coops
On Fri, Jul 8, 2011 at 11:53 AM, pawan kumar pawan.be.ku...@gmail.comwrote:

 Hi,
 I  am BE(CSE) fresher.I got a call for interview and openings is in perl.I
 know the basics  of perl.Please let me know what i have to prepare for
 interview.Any link for frequently asked scripts??
 --
 K.V. PAWAN KUMAR


Good one...

I fear that it totally depends on the company and what they do. You will
certainly get the basic loops and an open-file question, then likely printing
to a file handle and some juggling with arrays and hashes, but other than
that, who knows. I found that the questions often depend on what the team you
are interviewing for works with most. A team that parses a lot of log files
will have more questions about grep and regular expressions (forgot to
mention those: there will, or should, be a question about that), whereas a
team that plays with databases over lunch will usually ask more questions
about SQL, DBI and things like that.
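
To give an idea of the level I mean, a typical basics exercise (entirely made
up) could be: read a file, count how often each word occurs, and write a
summary to another file:

use strict;
use warnings;

# Made-up exercise: count words in a file and write a summary.
open my $in, '<', 'input.log' or die "Cannot open input.log: $!";
my %count;
while ( my $line = <$in> ) {
    chomp $line;
    $count{$_}++ for split /\s+/, $line;
}
close $in;

open my $out, '>', 'summary.txt' or die "Cannot open summary.txt: $!";
print {$out} "$_: $count{$_}\n" for sort keys %count;
close $out;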

In general the best thing to do is make sure you know the basics well enough
to code some simple scripts; unless they are looking for a Perl guru, you are
usually fine with just the basics. Besides, they have your CV, so they know
you are not a Perl expert yet... The most important thing is not to try and
talk your way around it. If you don't know the answer to a question, try to
explain conceptually how to deal with it rather than in code. In the end, the
main thing they will be looking for is your skill as a coder and your
understanding of programming/scripting languages, not your years of
experience with Perl.

I have yet to walk into an interview where I had to write real code (beyond a
one-liner or something similarly short), so personally I would not worry
about that too much. It might be different in other parts of the world, but
in Europe I have not seen a request like that yet.

Good luck with the interview,

Rob


Re: understanding adding numeric accumulator

2011-07-08 Thread Rob Coops
On Sat, Jul 9, 2011 at 1:33 AM, J. S. John phillyj...@gmail.com wrote:

 Hi all,
 I'm teaching myself perl. Right now, I am stuck with this script. I
 don't understand how it works. I know what it does and how to do it by
 hand.

 $n = 1;
 while ($n < 10) {
$sum += $n;
$n += 2;
 }
 print "The sum is $sum.\n";

 I know $sum is initially 0 (undef). I see that $sum becomes 1, then $n
 becomes 3. The loop goes back and then I don't understand. Now $sum is
 1+3?

 Thanks,
 JJ

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



First of all, it is a really bad idea not to use warnings and strict in your
scripts. Just start your script with:

use warnings;
use strict;

and this script will end up screaming that you have to declare your
variables.

So your script would end up looking like this:

use warnings;
use strict;

my $n = 1;
my $sum; # Should actually be my $sum = 0; even if it is only for
readability but we shall let that slide for now ;-)
while ($n < 10) {
   $sum += $n;
   $n += 2;
}
print "The sum is $sum.\n";

So what does the script do?

Well, literally it says: while $n is smaller than 10, execute what is inside
the { } brackets. Inside you say $sum = $sum + $n; and then $n = $n + 2;
On the first pass $sum becomes 0 + 1 = 1 and $n becomes 1 + 2 = 3. On the
second pass we take $sum, which is 1, add $n to it, so 1 + 3 = 4, and we
increase $n by 2, ending up at 5.
The next loop (5 is smaller than 10 after all) $sum becomes 4 + 5 = 9 and $n
becomes 5 + 2 = 7.
The next loop (7 is smaller than 10 after all) $sum becomes 9 + 7 = 16 and
$n becomes 7 + 2 = 9.
The next loop (9 is smaller than 10 after all) $sum becomes 16 + 9 = 25 and
$n becomes 9 + 2 = 11.
Then 11 is not smaller than 10 anymore and the loop ends; the next thing you
do is print the value of $sum, which by now is 25.

To see if I am right why not modify the script a little so it will print the
inner values:

use warnings;
use strict;

my $n = 1;
my $sum = 0;
while ($n < 10) {
   print '$sum = ' . $sum . ' $n = ' . $n . "\n";
   print '$sum += $n' . "\n";
   $sum += $n;
   print '$sum = ' . $sum . "\n";
   $n += 2;
   print '$n + 2 = ' . $n . "\n";
}
print "The sum is $sum.\n";

Now it should show you exactly what it is doing inside the loop :-)

Regards,

Rob


Re: Different behaviour

2011-07-07 Thread Rob Coops
On Thu, Jul 7, 2011 at 11:42 AM, HACKER Nora nora.hac...@stgkk.at wrote:

 Hello,

 I would be really glad if someone could give me a hint to my following
 problem: I have a script that runs smoothly on AIX5.3, installed Perl is
 5.8.8. Now I need it to run on AIX6.1, Perl is 5.8.8 as well, but I
 experience a strange, differing behaviour. I have a sub-function that greps
 for a certain process in the ps list. In the course of this script this
 function is called twice. On AIX5.3, I get the same result (process running
 or not) both times, on AIX6.1 I get process running at the first call of
 the sub, but error -1 at the second call. Could this be an OS behaviour
 problem?

 Here are the code snippets:

 #
 # This is the main function of my script (doing DB backups/restore)
 sub backup {
#[... some other subs ...]
appl( 'start', 'D', checkStatus('D') ); # starts the Oracle DB if
 checkStatus returns that DB is not running; gives correct result
#[... some other subs ...]
my $dbvers = getDBVersion(); # gets the application version from the DB;
 on AIX6.1 this produces os level error -1
#[... some other subs ...]
 }

 #
 # This is the sub-function for getting the application version from the DB
 sub getDBVersion {
my $fnc = ( caller 0 )[$CALLER_ID];
DEBUG($fnc - DBType: $dbtype\n);
my $vdb = $empty;
my $adb = $empty;
if ( $dbtype eq 'V' ) { # in my case FALSE, ignore
# viaMG-DB temporär umspeichern, um Version aus ApplDB ermitteln zu
 können
$vdb = $db;
DEBUG($fnc - viaMG-DB: $vdb\n);
print 'Bitte zugehörige Applikations-DB angeben: ';
INFO($fnc - Bitte zugehörige Applikations-DB angeben: );
$adb = STDIN;
chomp $adb;
$db = $adb;
INFO($fnc - DB: $db\n);

# SID setzen
$ENV{'ORACLE_SID'} = $db;
INFO($fnc - ORACLE_SID: $ENV{'ORACLE_SID'}\n);
}

my $status = checkStatus('D'); # -- this FAILS at the second
 execution :-(
DEBUG($fnc - DBStatus aus Unterfunktion checkStatus: $status\n);

my $dbvers;
if ( $status == 0 ) {
my $dbverslst = $tmps/dbversion_$db.lst;
DEBUG($fnc - Tempfile mit DB-Version: $dbverslst\n);

system $sqlplus
  . ' /as sysdba '
  . \@$sqls/get_db_version.sql $db  /dev/null 21;
DEBUG(
$fnc - $sqlplus \/as sysdba\ \@$sqls/get_db_version.sql
 $db\n);

if ( fgrep { /ERROR/ } $dbverslst ) {
$dbvers = giveDBVersion();
} else {
DEBUG($fnc - Kein Error aus get_db_version.sql.\n);
}
my @dbversgrep;
my $fh_dbverslst = IO::File-new( $dbverslst);
if ( ! $fh_dbverslst ) {
LOGDIE($fnc - $dbverslst konnte nicht gelesen werden!\n);
}
while ($fh_dbverslst) {
chomp;
$_ =~ s/ //g;
if (/[\d.]/s) { push @dbversgrep, $_ }
}
undef $fh_dbverslst;

$dbvers = $dbversgrep[0];
INFO($fnc - Datenbank-Version (aus DB): $dbvers\n);
if ( -e $dbverslst ) {
unlink $dbverslst
  or INFO(
  $fnc - Datei $dbverslst konnte nicht gelöscht werden:
 $!\n);
}
} else {
INFO( $fnc - Datenbank $db ist down.\n );
print Datenbank $db ist down.\n;
$dbvers = giveDBVersion();
}

if ( $dbtype eq 'V' ) {
$db = $vdb; # viaMG-DB wieder als Haupt-DB einsetzen

# SID setzen
$ENV{'ORACLE_SID'} = $db;
INFO($fnc - ORACLE_SID: $ENV{'ORACLE_SID'}\n);

}

return $dbvers;

 }

 #
 # This is the sub-function for checking whether the Oracle DB (and/or
 Tuxedo) is running
 sub checkStatus {
my ( $art ) = @_;
my $fnc = ( caller 0 )[$CALLER_ID];
DEBUG($fnc - Übergabeparameter: @_\n);

my $status;
if ( $art eq 'D' ) {
system "ps -ef |grep -v grep |grep dbw |grep -q $db";
my $test = `ps -ef |grep -v grep |grep dbw |grep $db`;   # print grepped ps list for debugging
print "Test: $test\n";
$status = $? >> $SHIFT_ERROR;
if ( $status == 0 ) {
INFO($fnc - DB-Status: $status = DB $db is running\n);
} else {
INFO($fnc - DB-Status: $status = DB $db is not running\n);
}
} elsif ( $art eq 'T' ) {
system "ps -ef |grep -v grep |grep -w BBL |grep -wq $tux";
$status = $? >> $SHIFT_ERROR;
if ( $status == 0 ) {
INFO($fnc - Tuxedo-Status: $status = Tuxedo $tux is
 running\n);
} else {
INFO($fnc - Tuxedo-Status: $status = Tuxedo $tux is not
 running\n);
}
}

return $status;
 }

 #
 # This is the script output with the grepped ps list output for debugging
 purposes:

 oracle:/opt/magna/wartung dbmove.pl -m s -d MVRSTDB
 Default-Tuxedo: mvrst
 Return code: 0
 Test:   oracle  299118   1   0 08:57:47  -  0:00 ora_dbw1_MVRSTDB
  oracle  

Re: Different behaviour

2011-07-07 Thread Rob Coops
On Thu, Jul 7, 2011 at 12:50 PM, HACKER Nora nora.hac...@stgkk.at wrote:

 **

 Hi Rob,

 Thanks for your reply and your suggestions.

So to make things really simple: you basically make the call
system "ps -ef |grep -v grep |grep dbw |grep -q $db"; twice. The
first time you get the list as expected, the second time you get a -1 from
the OS. Would it be possible to capture more than just the -1? There
should (at least usually) also be an error message in human-readable
format... that might give you a bit more insight into why the OS doesn't
like you asking twice.


 Normally, the OS errors in text format are also printed on the command
 line, so I guess there is nothing more to display …!?


Personally, just to test, I would repeat the call 10 times or so, so
that in both cases it is executed 10 times. If the first 10 all work
without an issue and the second round throws errors, then it certainly is not
an OS issue; something else (on the OS level?) must have
changed in the meantime. ps is a very basic command and I cannot imagine
what would cause it to throw an error; the grep command
might, but only if that $db variable is very strange. You might want to
print it before and after you make the actual system call just to
make sure that it does not magically change, which, looking at your code, it
should not, but then again the system call should not fail either...

  Tried it, and the result is as follows – 10 times ok for the first round,
 10 errors for the second round. I also checked the $db variable, following
 your suggestion, and it is set identically both times:

 Default-Tuxedo: mvrst

 DB: MVRSTDB

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Return code: 0

 Status: 0

 DB MVRSTDB is running

 DB: MVRSTDB

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Return code: -1

 Status: 16777215

 DB MVRSTDB is not running

 But as you can see above I have no idea why this script might be
behaving as weird as it does. I do suspect that something OS related is
the cause of the weirdness but having never even logged in to an AIX 
 machine
I can only guess...

  Nevertheless thanks for your effort J

 Cheers,

 Nora

I guess that is clear then: there is a definite change between the first and
the second attempt at this system call. The first time, no matter how often
you call it, you get the same result over and over again. The second time you
always get a return code of -1. So it is the OS that, for whatever reason,
sees the second request as something quite different from the first...
I assume you have compared both outputs from: my $test = `ps -ef |grep -v
grep |grep dbw |grep $db`; and found no difference there? And you can in both
cases execute this on the command line without issues...

Try making a different system call, like 'echo Hello world' or something
equally simple, just to see if this also fails on the second try. If it does,
I would have to say that your OS does not like your program running the same
system call more than once (for whatever reason).
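
A quick way to see both the raw status and any OS error message in one go
(a rough sketch; the $db value is taken from your output above):

use strict;
use warnings;

# Rough test loop: run the same check repeatedly and show the raw exit
# status plus the OS error message when system() itself fails.
my $db = 'MVRSTDB';    # value taken from the output above
for my $try ( 1 .. 10 ) {
    system "ps -ef | grep -v grep | grep dbw | grep -q $db";
    printf "try %2d: raw \$? = %d, exit code = %d, error = %s\n",
        $try, $?, $? >> 8, ( $? == -1 ? "$!" : 'none' );
}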

Now if this was Solaris I would suggest a simple process trace to see
exactly what the OS was seeing in terms of requests and what it was doing
that makes it choke on the second one. I don't know if this is possible on
AIX; if it is, it would be the best option to figure this out. Lacking a tool
like that, I fear I am at a loss as to how to even try and figure this one
out.

The Solaris process trace tool (truss) lets you see every call made at the OS
level (open file, read memory, access port, etc.). Provided you pipe the
output through more or push it to a file, it lets you see exactly what a
process is doing and why it is waiting for, or failing to execute, a certain
call. It has helped me several times to find things like locks on files
(system files too) caused by other processes taking exclusive locks
(seemingly random error explained), or issues with libraries so far down the
dependency chain that nobody even knew they were ever being called.

Regards,

Rob


Re: rmdir

2011-06-27 Thread Rob Coops
On Mon, Jun 27, 2011 at 3:31 PM, pa...@laposte.net wrote:

 Hi Fish,

 In my example (there is no metachar), the two forms of system call are the
 same.
 from perldoc -f system:

 If there is only one scalar
 argument, the argument is checked for shell metacharacters, and if there
 are any, the entire argument is passed
 to the system's command shell for parsing (this is /bin/sh -c on Unix
 platforms, but varies on other
 platforms). If there are no shell metacharacters in the argument, it is
 split into words and passed directly to
 execvp, which is more efficient.




  Message du 27/06/11 14:23
  De : Shlomi Fish
  A : pa...@laposte.net
  Copie à : Irfan Sayed , Perl Beginners
  Objet : Re: rmdir
 
  Hi Pangj,
 
  On Mon, 27 Jun 2011 14:07:51 +0100
  pa...@laposte.net wrote:
 
  
   Won't system rm -rf /path/to/dir just work for you?
  
 
  1. This is on Windows, so rm may not be available. perldoc -f system in
  general is not portable.
 
  2. Better use the list form of system:
 
  system(rm, -fr, $path);
 
  With the string version, you're risking code injection:
 
  http://shlomif-tech.livejournal.com/35301.html
 
  Regards,
 
  Shlomi Fish
 
  
  
Message du 27/06/11 13:59
De : Irfan Sayed
A : Shlomi Fish
Copie à : Perl Beginners
Objet : Re: rmdir
   
i did that but no luck
is there any another way ?
   
regards
irfan
   
   
   

From: Shlomi Fish
To: Irfan Sayed
Cc: Perl Beginners
Sent: Monday, June 27, 2011 5:05 PM
Subject: Re: rmdir
   
   
   
   
Shlomi Fish (shlo...@shlomifish.org) added themselves to your Guest
 List |
Remove them | Block them
  
   Une messagerie gratuite, garantie à vie et des services en plus, ça
 vous
   tente ? Je crée ma boîte mail www.laposte.net
  
 
 
 
  --
  -
  Shlomi Fish http://www.shlomifish.org/
  Why I Love Perl - http://shlom.in/joy-of-perl
 
  Chuck Norris writes understandable Perl code.
 
  Please reply to list if it's a mailing list post - http://shlom.in/reply.
 

 Une messagerie gratuite, garantie à vie et des services en plus, ça vous
 tente ?
 Je crée ma boîte mail www.laposte.net

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Irfan,

Could you please post some code so people can actually see what you have
tried or are trying? I know it seems simple and easy, but clearly it is not
as simple as it seems, since you can't get it working; so please do not
assume that others can just suggest a fix without seeing it.

Oh, and I suggest you read the rmdir documentation a little... as it holds
some clues to what you will want to be doing. Link to the documentation:
http://perldoc.perl.org/functions/rmdir.html; it mentions File::Path and
rmtree as a solution to your problem.
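
For example, a minimal sketch of the File::Path route (the directory path is
made up):

use strict;
use warnings;
use File::Path qw(rmtree);

# Directory path is made up; rmtree removes it and everything below it.
my $dir = 'C:/temp/build_output';
my $removed = rmtree($dir);
print "Removed $removed files/directories under $dir\n";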

Regards,

Rob


Re: rmdir

2011-06-27 Thread Rob Coops
On Mon, Jun 27, 2011 at 3:49 PM, Irfan Sayed irfan_sayed2...@yahoo.comwrote:

 exactly. some of the files have set the attribute as read only

 and due to that module was unable to delete those

 is there any module which will check if the file has read only attribute
 and if yes remove that attribute??

 plz suggest

 --irfan



 
 From: Shawn H Corey shawnhco...@gmail.com
 To: beginners@perl.org
 Sent: Monday, June 27, 2011 6:09 PM
 Subject: Re: rmdir

 On 11-06-27 08:29 AM, Irfan Sayed wrote:
  even i tried windows del command
  but it prompts for confirmation to delete the files
  do u know how to suppress these prompts ? i did not find any switch which
 allows you to suppress
 

 It sounds like some of the files or directories have their read-only bit
 set.

 What is the output of this:

 use File::Path;
 my $errors = [];
 rmtree( $dir, { error => \$errors });
 print "$_\n" for @$errors;

 BTW, I do believe the correct Windows command is:  del /s


 -- Just my 0.0002 million dollars worth,
   Shawn

 Confusion is the first step of understanding.

 Programming is as much about organization and communication
 as it is about coding.

 The secret to great software:  Fail early  often.

 Eliminate software piracy:  use only FLOSS.

 -- To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/


Have a look at the following: Win32::File (to get and set file attributes on
Windows)
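
Roughly along these lines (a sketch only; the file name is made up and error
handling is minimal):

use strict;
use warnings;
use Win32::File;

# File name is made up; clear the read-only bit, then delete the file.
my $file = 'C:/temp/locked_file.txt';
my $attr;
GetAttributes( $file, $attr ) or die "GetAttributes failed: $^E";
if ( $attr & READONLY() ) {
    SetAttributes( $file, $attr & ~READONLY() )
        or die "SetAttributes failed: $^E";
}
unlink $file or warn "Could not delete $file: $!";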


Re: @{$var1{$var2}}

2011-06-22 Thread Rob Coops
On Wed, Jun 22, 2011 at 6:44 PM, josanabr john.sanab...@gmail.com wrote:

 Hi,

 I'm reading a program written in perl and I read this statement


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Without any more information I would have to say: good, and what do you want
from the list? (It really does help if you formulate a question and ask what
it is that you want to know ;-)

I suspect that you are wondering what this means, right?

Let's dissect this a little:

Let's take the innermost thing, $var2: this is obviously a scalar (or a
reference to another variable; I'll explain why I am betting it is a scalar
in a bit).
Then we see the following: $var1{...}. This is the way one accesses a value
in a hash based on its key (the thing that goes between those brackets).
Usually the keys used in a hash are scalars; of course there is nothing
stopping anyone from using complex data structures as a key, but
performance-wise it is not the smartest thing to do.
The last bit, @{...}, basically says: treat what is inside the brackets as an
array (which is what one would do when expecting an array reference to be
returned as $var1's value associated with key $var2).

So what would the data structure look like?
{
  'Hash key 1' => [
    'Array value 1',
    'Array value 2',
    ...
  ],
  'Hash key 2' => [
    'Array value 1',
    'Array value 2',
    ...
  ],
  ...
}

Or in text form: %var1 is a hash whose keys are associated with values that
are references to arrays.
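
A tiny self-contained illustration (the data is made up):

use strict;
use warnings;

# Made-up data: %var1 maps keys to array references.
my %var1 = (
    fruit  => [ 'apple', 'pear' ],
    colour => [ 'red', 'green' ],
);
my $var2 = 'fruit';

# @{ $var1{$var2} } dereferences the array ref stored under that key.
for my $item ( @{ $var1{$var2} } ) {
    print "$item\n";
}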

I hope that explains things a little bit. :-)

Regards,

Rob


Re: malformed header from script. Bad header

2011-06-20 Thread Rob Coops
On Mon, Jun 20, 2011 at 10:32 AM, Khabza Mkhize khabza@gmail.comwrote:

 I am not sure if this problem is cause by print Content-type: text/html,
 \n\n;

 The main problem I have commented print Content-type: text/html, \n\n;
 since is printing line from cookies i Used

 On Mon, Jun 20, 2011 at 9:57 AM, Khabza Mkhize khabza@gmail.com
 wrote:

  I am receiving the following error on my error log
  I thought it was caused by Meta Tag content.
  The program was working fine for a long time its just few day I got this
 error and now it seem is consistent.
 
  [Sun Jun 19 22:42:12 2011] [error] [client 98.126.251.234] malformed
 header from script. Bad header=p: tourview1.pl,
 
  this is the program that generate this error
 http://www.followme2africa.com/cgi-bin/followme/scripts/public/common/tourview1.pl?Aqui01-DSAFCPT2
 
  Thank you
 
  Khabza
  Developer
  http://www.greenitweb.co.za
 
 


Hi Khabza,

I fear that following your link just provides the webpage (at least in my
browser it does). I have no idea what your code is doing or if there is an
error being thrown as I cannot see anything besides a normal looking
website.

Would it be possible for you to provide the list with the code that is
throwing the error and how the code needs to be called to generate the error
(does it always throw this error regardless of the variables it is provided
with?)

Regards,

Rob


Re: Convert spreadsheet to jpeg

2011-06-20 Thread Rob Coops
On Mon, Jun 20, 2011 at 9:57 PM, andrewmchor...@cox.net wrote:

 Hello

 I am not an expert in perl and so before I propose that a particular script
 be written I would likek to get an idea as to how hard the task would be.

 We have a large number (in my opinion) of excel spreadsheets that need to
 be concerted to jpeg format. The jpeg files are used when a document is
 created and the jpeg files are inserted. The spreadsheets are under
 configuration control and the current policy is that when a spreadsheet is
 updated the jpeg file is created (via a macro) and it too goes under CM
 control. I am considering proposing that a perl script could go through the
 directory structure and save each spreadsheet to a jpeg file and it would be
 more efficient and cost effective.

 Question:

 1. How hard is it to write a perl script that will open a spreadsheet and
 save it to a jpeg file?
 2,. Does this capability exist now?

 Thanks,
 Andrew

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Andrew,

Opening a spreadsheet with Perl is certainly simple enough to do; converting
it to a JPEG is a different story, as far as I know, as this would require
your Perl code to be able to draw the Excel document's content. As far as I
am aware this is not possible, but I might be wrong. Now you might be tempted
to think that you could just execute the macro, but that again would require
Perl to fully understand the way MS Visual Basic works and to have access to
all the functions and methods that Visual Basic does.

To be honest, I think that in this case it is probably easier to create a
simple macro that opens a specific directory (or a directory chosen through a
popup box), then opens each Excel file, runs a fixed macro in there, saves
the file and closes Excel; rinse, repeat, and you have the same thing your
Perl script is supposed to do. By the sound of it you are creating another
MS-based document, or maybe a PDF file, which can be produced from an
MS-based document that is then printed to PDF... again, this can relatively
easily be done with a VBA macro.

I know, not very Perly of me, but I am one who believes in using a spade to
dig a hole and a drill to drill one ;-)
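
That said, if you do want to keep it in Perl, Win32::OLE can at least drive
Excel and run the existing export macro for each workbook. A rough, untested
sketch (assumes Excel is installed; the workbook and macro names are made
up):

use strict;
use warnings;
use Win32::OLE;

# Assumes Excel is installed; workbook and macro names are made up.
my $excel = Win32::OLE->new('Excel.Application')
    or die "Cannot start Excel: " . Win32::OLE->LastError;
$excel->{DisplayAlerts} = 0;

my $wb = $excel->Workbooks->Open('C:/sheets/report.xls')
    or die "Cannot open workbook: " . Win32::OLE->LastError;
$excel->Run('ExportToJpeg');    # the existing export macro
$wb->Close(0);
$excel->Quit;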

Regards,

Rob


Re: virtual drive

2011-06-15 Thread Rob Coops
On Wed, Jun 15, 2011 at 8:49 AM, Irfan Sayed irfan_sayed2...@yahoo.comwrote:

 Hi,

 is there any way or module in perl to create the virtual drive in windows
 lets say , i have to create M: mapped to c:\test

 so i can use command subst M: c:\test


 but still , is there any better way

 regards
 irfan


Hi Irfan,

There should be a better way; I am certain there is an API in Windows that
allows you to do this programmatically. What you would then need to do is
call the API directly and avoid the command-line mess. I am not sure if this
API is documented (I believe so, as I vaguely remember reading about it
several years ago). If it is, you will find that it usually comes down to
making a simple call to the API, though this will be via an external library,
so it will require some work, and of course it will not be very portable or,
for that matter, simple to write. So please do look on CPAN to see if there
is perhaps already a module for this (I could not find one after a quick
look).

If there is no module for this yet please do share your implementation of
this API (if you decide to go the proper way) with the rest of us as it
would be very interesting to see how this is done.
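
For reference, the Windows call behind subst is DefineDosDevice; a rough,
untested sketch of calling it through Win32::API (the drive letter and target
path are made up):

use strict;
use warnings;
use Win32::API;

# Untested sketch: DefineDosDevice is the call behind "subst".
# Drive letter and target path are made up.
my $define = Win32::API->new( 'kernel32', 'DefineDosDeviceA', 'NPP', 'N' )
    or die "Could not import DefineDosDeviceA: $^E";

my $ok = $define->Call( 0, 'M:', 'C:\\test' );
print $ok ? "Mapped M: to C:\\test\n" : "DefineDosDevice failed: $^E\n";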

Regards,

Rob


Re: how to use regexp to match symbols

2011-06-13 Thread Rob Coops
On Mon, Jun 13, 2011 at 2:05 PM, eventual eventualde...@yahoo.com wrote:

 Hi,
 I have a list of mp3 files in my computer and some of the file names
 consists of  a bracket like this darling I love [you.mp3
 I wish to check them for duplicates using the script below, but theres
 error msg like this Unmatched [ in regex; marked by -- HERE in m/only one
 brace here[ -- HERE anything.mp3/ at Untitled1 line 13.

 So how do I rewrite the regexp.
 Thanks.

 ## script ###
 #!/usr/bin/perl
 use strict;
 use warnings;
 use File::Find;

 my @datas = ("test.mp3" , "only one brace here[anything.mp3" ,
 "whatever.mp3");

 while (@datas){
   my $ref = splice @datas,0,1;
   foreach (@datas){
   if ($ref =~/$_/){
  print "$ref is a duplicate\n";
   }else{
  print "$ref is not a duplicate\n";
   }
   }
 }


Escape the special character with a \, so in your case the pattern would be:
only one brace here\[anything.mp3. The regular expression engine will then
treat it as the literal string only one brace here[anything.mp3, instead of
reading the [ as the start of a character class, which would mean you never
close the class and thus the regular expression is invalid and throws an
error.
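
Since the file names come from a list rather than being typed by hand, the
easiest way is to let Perl do the escaping with \Q...\E (quotemeta); a small
sketch:

use strict;
use warnings;

my $ref  = 'only one brace here[anything.mp3';
my $name = 'only one brace here[anything.mp3';

# \Q ... \E escapes every regex metacharacter in $name, so the bracket
# is matched literally instead of opening a character class.
if ( $ref =~ /\Q$name\E/ ) {
    print "$ref is a duplicate\n";
}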

Regards,

Rob


Re: Check if words are in uppercase?

2011-06-10 Thread Rob Coops
On Thu, Jun 9, 2011 at 9:59 AM, Beware mathieu.hed...@gmail.com wrote:

 Hi all,

 i've a question on my perl script.

 In my script i read a file line per line, and check if keywords are in
 uppercase. To do that, i've an array filled with all used keywords.

 On each line, i check all keywords with a foreach loop.

 Well, this is my code :


 my $lines = 0;

 while ( <SOURCE> )
 {
   # cut '\n'
   chomp($_);

   #List of keywords
   my @keywords = (all, wait, for);

   #Check all keyword
   foreach $item (@keywords)
   {
  # keywords detected
  if ( /$item\b/i and !/\s*--/)
  {
 # remove keywords already in uppercase
 my $temp = $_;
 my $item_maj = uc($item);
 $temp =~ s/$item_maj//g;

 # check if  any keywords
 if ( $temp =~ /$item\b/i )
 {
print keywords is lowercase line : .$lines.\n;
last;
 }
  }
   }
   $lines++;
 }
 close ( SOURCE );

 Well, thanks for your answer


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Why make your life so hard?

Let's assume you will want some debug logging to know what, where and how
often you replaced things...

$line =~ s{\b(?'keyword'keyword1|keyword2|keyword3|etc...)\b}{
    print "debug: matched keyword '$+{keyword}'\n";  # debug information
    uc $+{keyword};                                  # replace it with its uppercase form
}gie;

This does the same job as your foreach loop, but it uses a single regular
expression that runs over the line only once, handling every keyword
occurrence in that one pass.

If you want to use an external array of keywords, simply build the pattern
from it:

my $keywords = join '|', @keywords;
$line =~ s/\b(?'keyword'$keywords)\b/\U$+{keyword}\E/gi;

Regards,

Rob


Re: how do i push a hashref into another hashref

2011-06-07 Thread Rob Coops
On Tue, Jun 7, 2011 at 8:47 AM, Agnello George agnello.dso...@gmail.comwrote:

 HI

 I got the following hashref

  my $selet_domU_data = $DBH->selectall_hashref("select
 ram,ip,application,hosting,assigned_to,rdom0id from domU_info where
 server_name='$domU_server'"  ,'server_name' );

  my $select_all_website = $DBH->selectall_hashref("select
 website_id,website_name from websites_name" ,'website_id');

 now i need to push $select_all_website into $selet_domU_data

 my %hash1 = %$select_all_website;

 foreach (keys %$selet_domU_data) {
 push (@{$selet_domU_data->{$_}}, { rets => %hash1 } );
 }


 print Dumper ([$selet_domU_data]);

 i also tried a combination of many other things but does not seem to work

 thanks in advanced


 --
 Regards
 Agnello D'souza



Hi Agnello,

Could you maybe draw what you want the result to look like?

You are saying you have a hashref called $selet_domU_data and one
called $select_all_website, right? Both of them seem to be the result of a
database handle executing the query shown behind them. So each of them will
contain a hashref keyed on server_name and website_id respectively, with all
the values fetched for those keys.

So each of them will look like this:
{
 HASH1_Key1 => { Column1 => '...', Column2 => '...', }
 HASH1_Key2 => { Column1 => '...', Column2 => '...', }
 ...
}
and
{
 HASH2_Key1 => { Column1 => '...', Column2 => '...', }
 HASH2_Key2 => { Column1 => '...', Column2 => '...', }
 ...
}

Combining them will end up with:
{
 HASH1_Key1 => { Column1 => '...', Column2 => '...', }
 HASH1_Key2 => { Column1 => '...', Column2 => '...', }
 ...
 HASH2_Key1 => { Column1 => '...', Column2 => '...', }
 HASH2_Key2 => { Column1 => '...', Column2 => '...', }
 ...
}

right?

Well then it should be simple enough:
foreach my $key ( keys %{ $hashref_a } ) {
 ${ $hashref_b }{ $key } = ${ $hashref_a }{ $key };
}

Basically, loop over one of the two hashes and shove all its key/value pairs
into the other, and you are done. If you don't need the initial hashref
anymore, don't forget to clear it so Perl will not keep the hash in memory
any longer than you absolutely need to (after all, you never know how big
that database might be).

Regards,

Rob


Re: fastq file modification help

2011-06-06 Thread Rob Coops
On Mon, Jun 6, 2011 at 12:29 PM, Nathalie Conte n...@sanger.ac.uk wrote:

 Hi,

 I need to remove the first 52 bp sequences reads in a fastq file,sequence
 is on line 2.
 fastq file from wikipedia:A FASTQ file normally uses four lines per
 sequence. Line 1 begins with a '@' character and is followed by a sequence
 identifier and an /optional/ description. Line 2 is the raw sequence
 letters. Line 3 begins with a '+' character and is /optionally/ followed by
 the same sequence identifier (and any description) again. Line 4 encodes the
 quality values for the sequence in Line 2, and must contain the same number
 of symbols as letters in the sequence.

 A minimal FASTQ file might look like this:

 @SEQ_ID
 GATTTTTCAAAGCAGTATCGATCAAATAGTAAATCCATTTGTTCAACTCACAGTTT
 +
 !''****+))%%%++)().1***-+*''))**55CCFCCC65


 I have written this script to remove the first 52 bp on each sequence and
 write this new line on newfile.txt document. It seems to do the job , but
 what I need is to change my original bed file with the trimmed seuqence
 lines and keep the other lines the same. I am not sure where to start to
 modify the original fatsq.
 this is my script to trim my sequence :

 #!/software/bin/perl
 use warnings;
 use strict;


 open (IN, "/file.fastq") or die "can't open in: $!";
 open (OUT, ">newfile.txt") or die "can't open out: $!";

   while (<IN>) {
 next unless (/^[A-Z]/);
   my $new_line=substr($_,52);
   print OUT $new_line;

 }


 thanks for any suggestions
 Nat


 --
 The Wellcome Trust Sanger Institute is operated by Genome Research Limited,
 a charity registered in England with number 1021457 and a company registered
 in England with number 2742969, whose registered office is 215 Euston Road,
 London, NW1 2BE.
  --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Nathalie,

I am not 100% sure on this, as I suspect that when modifying the original
file you also want to deal with that 4th line, in which case I have no idea
how to help, as I do not understand what its purpose is.

Anyway, assuming you are dealing with the 2nd line only, life is a whole lot
simpler. You know the number of characters you are removing is always 52, so
we don't have to work that out. Now we can take various routes: we could take
all characters from position 52 to the end of the string (substr puts the
first character at position 0, not position 1 ;-), or we could simply cut out
all characters before that point. We could do the cutting with a regular
expression, or we could use substr for the purpose (I have no idea which one
is faster; please benchmark it if you are looking at a large number of such
operations, it could save you a lot of time ;-)

Using 4-argument substr to do the work in place: substr $_, 0, 52, '';   # $_ itself now holds the trimmed line
Using a regular expression to do the work: my $new_line3 = $_; $new_line3 =~
s/^[A-Z]{52}//;
Doing the counting thing...: my $new_line4 = substr $_, 52;

All 3 will give you the result you are looking for. I suspect the first one
will be the fastest option, based on what little experience I have with these
types of operations, but please do prove this before you start working on
thousands of files...
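
If the goal is to rewrite the whole file while only trimming the sequence
lines, here is a minimal sketch (it assumes every record is exactly four
lines with the sequence on line 2; the file names are examples). Note that
the Wikipedia text you quoted says the quality line must have the same length
as the sequence, so it probably needs the same trim:

use strict;
use warnings;

# Copy a FASTQ file, trimming the first 52 characters of each sequence
# line (line 2 of every 4-line record). File names are examples.
open my $in,  '<', 'file.fastq'    or die "can't open input: $!";
open my $out, '>', 'trimmed.fastq' or die "can't open output: $!";

while ( my $header = <$in> ) {
    my $seq     = <$in>;
    my $plus    = <$in>;
    my $quality = <$in>;
    last unless defined $quality;    # guard against a truncated file

    substr $seq, 0, 52, '';
    substr $quality, 0, 52, '';      # keep quality length in step with the sequence

    print {$out} $header, $seq, $plus, $quality;
}
close $in;
close $out;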

Regards,

Rob


Re: timestamp with milisecond

2011-06-03 Thread Rob Coops
On Fri, Jun 3, 2011 at 9:38 AM, Anirban Adhikary anirban.adhik...@gmail.com
 wrote:

 Hi List
 Is it possible to get the current time_stamp with milisecond format

 like HH:MI:SS:NNN  DD:MM:(NNN is the milisecond)

 If yes then how can I achieve the same..

 Thanks  Regards in advance
 Anirban Adhikary.

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Anirban,

Sure you can. Have a look at the following CPAN module:
http://search.cpan.org/dist/Time-HiRes/HiRes.pm

It will do exactly what you are looking for.
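
Roughly like this (a sketch; the exact format string is an approximation of
what you asked for):

use strict;
use warnings;
use Time::HiRes qw(gettimeofday);
use POSIX qw(strftime);

# Current time with milliseconds, approximating HH:MI:SS:NNN DD:MM.
my ( $sec, $usec ) = gettimeofday();
my $stamp = strftime( '%H:%M:%S', localtime $sec )
          . sprintf( ':%03d ', $usec / 1000 )
          . strftime( '%d:%m', localtime $sec );
print "$stamp\n";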

Regards,

Rob


Re: Parsing file

2011-06-02 Thread Rob Coops
On Thu, Jun 2, 2011 at 1:28 PM, venkates venka...@nt.ntnu.no wrote:

 On 6/2/2011 12:46 PM, John SJ Anderson wrote:

 On Thu, Jun 2, 2011 at 06:41, venkatesvenka...@nt.ntnu.no  wrote:

 Hi,

 I want to parse a file with contents that looks as follows:

 [ snip ]

 Have you considered using this module? -
 http://search.cpan.org/dist/BioPerl/Bio/SeqIO/kegg.pm

 Alternatively, I think somebody on the BioPerl mailing list was
 working on another KEGG parser...

 chrs,
 j.

  I am doing this as an exercise  to learn parsing techniques so guidance
 help needed.

 Aravind



 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



This is a simple and ugly way of parsing your file:

use strict;
use warnings;
use Carp;
use Data::Dumper;

my $set = parse("ko");

sub parse {
 my $keggFile = shift;
 my $keggHash;

 my $counter = 1;

 open my $fh, '<', $keggFile or croak ("Cannot open file '$keggFile': $!");
 while ( <$fh> ) {
  chomp;
  if ( $_ =~ m!///! ) {
   $counter++;
   next;
  }

  if ( $_ =~ /^ENTRY\s+(.+?)\s/sm ) { ${$keggHash}{$counter} = { 'ENTRY' =>
$1 }; }
  if ( $_ =~ /^NAME\s+(.*)$/sm ) {
   my $temp = $1;
   $temp =~ s/,\s/,/g;
   my @names = split /,/, $temp;
   push @{${$keggHash}{$counter}{'NAME'}}, @names;
  }
 }
 close $fh;
 print Dumper $keggHash;
}

The output being:

$VAR1 = {
  '1' => {
    'NAME' => [
      'E1.1.1.1',
      'adh'
    ],
    'ENTRY' => 'K1'
  },
  '3' => {
    'NAME' => [
      'U18snoRNA',
      'snR18'
    ],
    'ENTRY' => 'K14866'
  },
  '2' => {
    'NAME' => [
      'U14snoRNA',
      'snR128'
    ],
    'ENTRY' => 'K14865'
  }
};

Which to me looks sort of like what you are looking for.
The main thing I did was read the file one line at a time, to prevent an
unexpectedly large file from causing memory issues on your machine (in the
end the structure you are building will cause enough issues when handling a
large file).

You already dealt with the ENTRY bit, so I'll leave that alone; I slightly
changed the regex, but nothing spectacular there.
The NAME bit is simple: I pull out all the names, remove the spaces, split
them into an array, feed the array to the hash and hop on to the next step,
which is up to you ;-)

I hope it helps you a bit, regards,

Rob


Re: regexp validation (arbitrary code execution) (regexp injection)

2011-06-02 Thread Rob Coops
2011/6/1 Stanisław Findeisen s...@eisenbits.com

 Suppose you have a collection of books, and want to provide your users
 with the ability to search the book title, author or content using
 regular expressions.

 But you don't want to let them execute any code.

 How would you validate/compile/evaluate the user provided regex so as to
 provide maximum flexibility and prevent code execution?

 --
 Eisenbits - proven software solutions: http://www.eisenbits.com/
 OpenPGP: E3D9 C030 88F5 D254 434C  6683 17DD 22A0 8A3B 5CC0

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/


 Hi Stanisław,

From what you are saying, I think you are looking for a way to take a string
and check it for any potentially bad characters that would cause the system
to execute unwanted code.

So a bit like this: "In.*?forest$" is a safe string to feed into your
regular expression, but ".*/; open my $fh, '<', $0; close $fh; $_ =~ /" is
an evil string causing you a lot of grief. At least that is how I understand
your question...

To be honest I am not sure this is an issue, as I suspect that the
following construction:
if ( $title =~ m/$userinput/ ) { do stuff... }
will not give you any issues. As far as I can remember, the variable you are
feeding in here is not treated as code by the interpreter but simply as
matching instructions, which means that whatever your user throws at it,
Perl will in the worst case return a failure to match.

But please don't take my word for it: try it in a very simple test and see
what happens.

If you do have to ensure that a user cannot execute any code, you could
simply prevent the user from entering the ; character, or, smarter yet,
filter it out of the user input, to prevent a clever user from feeding it to
your code via a method other than the front-end you provided. Without a means
to close the previous regular expression, the user cannot really insert
executable code into it. At least that is what I would try, but I am by no
means an expert in this area, and I suspect there might be some people
reading this and wondering why I didn't think of A, B or C; if so, please do
speak up ;-)
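
One extra safety net worth mentioning: an interpolated pattern cannot run
embedded (?{ ... }) code unless you explicitly enable use re 'eval', and you
can reject invalid patterns up front by compiling them inside an eval. A
small sketch (the sample pattern and title are made up):

use strict;
use warnings;

# Sample user pattern and title are made up.
my $userinput = 'In.*?forest$';
my $title     = 'Somewhere in a deep dark forest';

# Compile the user's pattern inside an eval so an invalid (or hostile)
# pattern is rejected instead of blowing up the search.
my $re = eval { qr/$userinput/ };
die "Invalid pattern: $@" unless defined $re;

print "match\n" if $title =~ $re;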

Regards,

Rob


Re: Parsing file

2011-06-02 Thread Rob Coops
On Thu, Jun 2, 2011 at 4:41 PM, venkates venka...@nt.ntnu.no wrote:

 On 6/2/2011 2:44 PM, Rob Coops wrote:

 On Thu, Jun 2, 2011 at 1:28 PM, venkatesvenka...@nt.ntnu.no  wrote:

  On 6/2/2011 12:46 PM, John SJ Anderson wrote:

  On Thu, Jun 2, 2011 at 06:41, venkatesvenka...@nt.ntnu.no   wrote:

  Hi,

 I want to parse a file with contents that looks as follows:

  [ snip ]

 Have you considered using this module? -
 http://search.cpan.org/dist/BioPerl/Bio/SeqIO/kegg.pm

 Alternatively, I think somebody on the BioPerl mailing list was
 working on another KEGG parser...

 chrs,
 j.

  I am doing this as an exercise  to learn parsing techniques so guidance

 help needed.

 Aravind



 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



  This is a simple and ugly way of parsing your file:

 use strict;
 use warnings;
 use Carp;
 use Data::Dumper;

 my $set = parse(ko);

 sub parse {
  my $keggFile = shift;
  my $keggHash;

  my $counter = 1;

  open my $fh, '', $keggFile || croak (Cannot open file '$keggFile':
 $!);
  while ($fh  ) {
   chomp;
   if ( $_ =~ m!///! ) {
$counter++;
next;
   }

   if ( $_ =~ /^ENTRY\s+(.+?)\s/sm ) { ${$keggHash}{$counter} = { 'ENTRY'
 =
 $1 }; }

 While trying a similar thing for DEFINITION record, instead of appending
 current hash with ENTRY and NAME, the DEFINITION record replaces the
 contents in the hash?

 $VAR1 = {
  '4' = {
   'DEFINITION' = 'U18 small nucleolar RNA'
 },
  '1' = {
   'DEFINITION' = 'alcohol dehydrogenase [EC:1.1.1.1]'
 },
  '3' = {
   'DEFINITION' = 'U14 small nucleolar RNA'
 },
  '2' = {
   'DEFINITION' = 'alcohol dehydrogenase (NADP+)
 [EC:1.1.1.2]'
 },
  '5' = {
   'DEFINITION' = 'U24 small nucleolar RNA'
 }
};

 code: in addition to what you had suggested -
 if($_ =~ /^DEFINITION\s{2}(.+)?/){
   ${$keggHash}{$counter} = {'DEFINITION' = $1};

   }

   if ( $_ =~ /^NAME\s+(.*)$/sm ) {
my $temp = $1;
$temp =~ s/,\s/,/g;
my @names = split /,/, $temp;
push @{${$keggHash}{$counter}{'NAME'}}, @names;
   }
  }
  close $fh;
  print Dumper $keggHash;
 }

 The output being:

 $VAR1 = {
   '1' =  {
'NAME' =  [
'E1.1.1.1',
'adh'
  ],
'ENTRY' =  'K1'
  },
   '3' =  {
'NAME' =  [
'U18snoRNA',
'snR18'
  ],
'ENTRY' =  'K14866'
  },
   '2' =  {
'NAME' =  [
'U14snoRNA',
'snR128'
  ],
'ENTRY' =  'K14865'
  }
 };

 Which to me looks sort of like what you are looking for.
 The main thing I did was read the file one line at a time to prevent a
 unexpectedly large file from causing memory issues on your machine (in the
 end the structure that you are building will cause enough issues
 when handling a large file.

 You already dealt with the Entry bit so I'll leave that open though I
 slightly changed the regex but nothing spectacular there.
 The Name bit is simple as I just pull out all of them then then remove all
 spaces and split them into an array, feed the array to the hash and hop
 time
 for the next step which is up to you ;-)

 I hope it helps you a bit, regards,

 Rob



What you do: ${$keggHash}{$counter} = {'DEFINITION' => $1};
Try the following:   ${$keggHash}{$counter}{'DEFINITION'} = $1;

To make things a little clearer look at the following example.

my %hash;
$hash{'Key 1'} = { 'Nested Key 1' => 'Value 1' };

What you do is say: $hash{'Key 1'} = { 'Nested Key 2' => 'Value 2' };
What I do is: $hash{'Key 1'}{'Nested Key 2'} = 'Value 2';

In your script you will end up with the following:
$VAR1 = {
 'Key 1' => {
  'Nested Key 2' => 'Value 2',
},
};

Where mine will result in:
$VAR1 = {
 'Key 1' => {
  'Nested Key 1' => 'Value 1',
  'Nested Key 2' => 'Value 2',
},
};

Not that much different, but you are basically overwriting the value
( { NAME => [], ENTRY => '' } ) associated with your key ($counter) with
{ 'DEFINITION' => '' }. If you instead add a new key to the hash that is
associated with your main key ($counter), then you will get the result you
are looking for.

Regards,

Rob


Re: Parsing file

2011-06-02 Thread Rob Coops
On Thu, Jun 2, 2011 at 8:32 PM, venkates venka...@nt.ntnu.no wrote:

 Hi,

 Thanks a lot for the help, i had one more question. How can add diff values
 from multiple lines to the same hash ref? for example in the snippet data


 PATHWAY ko00010  Glycolysis / Gluconeogenesis
ko00071  Fatty acid metabolism
ko00350  Tyrosine metabolism
ko00625  Chloroalkane and chloroalkene degradation
ko00626  Naphthalene degradation

 I want it to stored in the following manner:

 2' = {
'PATHWAY' = {
  'ko00010' = 'Glycolysis /
 Gluconeogenesis'
  'ko00071' = ' Fatty acid
 metabolism'

},
 };

 Thanks,

 Aravind


 On 6/2/2011 5:06 PM, Rob Coops wrote:

 On Thu, Jun 2, 2011 at 4:41 PM, venkatesvenka...@nt.ntnu.no  wrote:

  On 6/2/2011 2:44 PM, Rob Coops wrote:

  On Thu, Jun 2, 2011 at 1:28 PM, venkatesvenka...@nt.ntnu.no   wrote:

  On 6/2/2011 12:46 PM, John SJ Anderson wrote:

  On Thu, Jun 2, 2011 at 06:41, venkatesvenka...@nt.ntnu.nowrote:

  Hi,

 I want to parse a file with contents that looks as follows:

  [ snip ]

 Have you considered using this module? -
 http://search.cpan.org/dist/BioPerl/Bio/SeqIO/kegg.pm

 Alternatively, I think somebody on the BioPerl mailing list was
 working on another KEGG parser...

 chrs,
 j.

  I am doing this as an exercise  to learn parsing techniques so
 guidance

  help needed.

 Aravind



 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



  This is a simple and ugly way of parsing your file:

 use strict;
 use warnings;
 use Carp;
 use Data::Dumper;

 my $set = parse(ko);

 sub parse {
  my $keggFile = shift;
  my $keggHash;

  my $counter = 1;

  open my $fh, '', $keggFile || croak (Cannot open file '$keggFile':
 $!);
  while ($fh   ) {
   chomp;
   if ( $_ =~ m!///! ) {
$counter++;
next;
   }

   if ( $_ =~ /^ENTRY\s+(.+?)\s/sm ) { ${$keggHash}{$counter} = { 'ENTRY'
 =
 $1 }; }

  While trying a similar thing for DEFINITION record, instead of
 appending
 current hash with ENTRY and NAME, the DEFINITION record replaces the
 contents in the hash?

 $VAR1 = {
  '4' =  {
   'DEFINITION' =  'U18 small nucleolar RNA'
 },
  '1' =  {
   'DEFINITION' =  'alcohol dehydrogenase [EC:1.1.1.1]'
 },
  '3' =  {
   'DEFINITION' =  'U14 small nucleolar RNA'
 },
  '2' =  {
   'DEFINITION' =  'alcohol dehydrogenase (NADP+)
 [EC:1.1.1.2]'
 },
  '5' =  {
   'DEFINITION' =  'U24 small nucleolar RNA'
 }
};

 code: in addition to what you had suggested -
 if($_ =~ /^DEFINITION\s{2}(.+)?/){
   ${$keggHash}{$counter} = {'DEFINITION' =  $1};

   }

if ( $_ =~ /^NAME\s+(.*)$/sm ) {
my $temp = $1;
$temp =~ s/,\s/,/g;
my @names = split /,/, $temp;
push @{${$keggHash}{$counter}{'NAME'}}, @names;
   }
  }
  close $fh;
  print Dumper $keggHash;
 }

 The output being:

 $VAR1 = {
   '1' =   {
'NAME' =   [
'E1.1.1.1',
'adh'
  ],
'ENTRY' =   'K1'
  },
   '3' =   {
'NAME' =   [
'U18snoRNA',
'snR18'
  ],
'ENTRY' =   'K14866'
  },
   '2' =   {
'NAME' =   [
'U14snoRNA',
'snR128'
  ],
'ENTRY' =   'K14865'
  }
 };

 Which to me looks sort of like what you are looking for.
 The main thing I did was read the file one line at a time to prevent a
 unexpectedly large file from causing memory issues on your machine (in
 the
 end the structure that you are building will cause enough issues
 when handling a large file.

 You already dealt with the Entry bit so I'll leave that open though I
 slightly changed the regex but nothing spectacular there.
 The Name bit is simple as I just pull out all of them then then remove
 all
 spaces and split them into an array, feed the array to the hash and hop
 time
 for the next step which is up to you ;-)

 I hope it helps you a bit, regards,

 Rob


  What you do: ${$keggHash}{$counter} = {'DEFINITION' =  $1};
 Try the following:   $keggHash}{$counter}{'DEFINITION'} = $1;

 To make things a little clearer look at the following example.

 my %hash;
 $hash{'Key 1'} = { 'Nested Key 1' =  'Value 1' };

 What you do is say: $hash{'Key 1'} = { 'Nested Key 2' =  'Value 2' }
 What I do is: $hash

Re: whats the purpose of use Data::Dumper;

2011-06-01 Thread Rob Coops
On Wed, Jun 1, 2011 at 6:07 PM, eventual eventualde...@yahoo.com wrote:

 use Data::Dumper;

 Hi,
 Can someone give me a few examples on the purpose of use Data::Dumper;
 I tried reading but I could not understand.
 Thanks


use Data::Dumper;

simply tells Perl to load the Data::Dumper module. You can then use the
module to, well, dump data. :-)

Try the following: create a complex data structure, something like this:

use Data::Dumper;
my $complex = [ { 'value1' => 'key1', 'value2' => [ 1, 2, 3, 4, 5 ] },
'Array value 2', '03', [ 'String 1', 'String 2', 'String 3' ], ];
print Dumper $complex;

and you will see something like this:
$VAR1 = [
  {
'value1' => 'key1',
'value2' => [
  1,
  2,
  3,
  4,
  5
]
  },
  'Array value 2',
  '03',
  [
'String 1',
'String 2',
'String 3'
  ]
];

Now it might seem silly at first, but if you are dealing with large and
complex data structures that mix arrays, hashes, strings, references to other
data structures etc., then you will soon come to love Data::Dumper for its
very helpful output.

If you are not sure about loading external modules in perl please have a
look at: http://perl-begin.org/tutorials/perl-for-newbies/part3/ which will
give you a good beginner guide to what modules are and what they are used
for etc.

Regards,

Rob


Re: Can't call method findnodes on unblessed reference; not using OO Perl

2011-05-18 Thread Rob Coops
On Wed, May 18, 2011 at 10:37 PM, Kenneth Wolcott
kennethwolc...@gmail.comwrote:

 Hi;

  A colleague claims that he made no changes, code worked yesterday and
 doesn't today.

  He is not using OO Perl.

  I have asked him for a small code snippet that reproduces the error (I'm
 sure he is unwilling to show the entire code!).

  We have rules requiring the standard use of use strict and use
 warnings in all our Perl scripts.

  We use Perl on Windows, Linux and Solaris (all his scripts are supposed to
 run without modification on Linux and Windows).

  He claims this: use strict; use warnings; use XML::XPath;

 Trying to get value for:
 $ha = $xPath->findnodes('//job');

 Error:
 Can't call method findnodes on unblessed reference at file_name line
 line_number.

 Output from Data::Dumper follows:

 $VAR1 = {
  'object' = [
  {
'objectId' = 'job-21461',
'job' = {
 'priority' = 'normal',
 'status' = 'completed',
 'outcome' = 'success',
 'jobName' = 'DeleteBuilds-20091106',
 'jobId' =
 '21461',
 'lastModifiedBy' = 'admin',
 }
  },

  {

'objectId' = 'job-21473',
'job' = {
 'priority' = 'normal',
 'status' = 'completed',
 'outcome' = 'success',
 'jobName' = 'DeleteBuilds-20091107',
 'jobId' = '21473',
 'lastModifiedBy' = 'admin',
   }
  },
  ]
 }


Hi Kenneth,

I think the error is clear about what is going wrong: Can't call method
"findnodes" on unblessed reference at file_name line line_number.

When your colleague calls: $xPath->findnodes('//job');

Perl tells him that $xPath is not a blessed reference. In human speak it
means that at some point he said $xPath = XML::XPath->new(filename =>
$filename); and this call failed... or he has assigned some other value to
$xPath, destroying the XML::XPath object it was holding before.

Now lets assume your colleague isn't that silly that he would reassign
$xPath my guess is that the file cannot be read, does not exist or maybe is
locked. I suspect that your colleague is not wrong the code didn't change
but the environment did and now the Perl script does not have rights to read
the file.

Hope that helps,

Rob

PS: besides the rules to always use strict and warnings, you might want to
implement a rule that before opening a file one always checks that it exists
and can be opened for the purpose you need it for (read-only will not do if
you want to write to it), and also enforce a simple check to see whether the
Module::XYZ->new call actually did what was expected of it.
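As a rough illustration of those checks (a minimal sketch; the file name is
made up, in the real script it would come from wherever your colleague gets
his input):

use strict;
use warnings;
use XML::XPath;

my $file = 'jobs.xml';
die "File '$file' does not exist\n"  unless -e $file;
die "File '$file' is not readable\n" unless -r $file;

my $xPath = XML::XPath->new( filename => $file )
    or die "Could not create XML::XPath object for '$file'\n";

my @nodes = $xPath->findnodes('//job');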


Re: Can't call method findnodes on unblessed reference; not using OO Perl

2011-05-18 Thread Rob Coops
On Wed, May 18, 2011 at 11:25 PM, Rob Dixon rob.di...@gmx.com wrote:

 On 18/05/2011 21:37, Kenneth Wolcott wrote:


 A colleague claims that he made no changes, code worked yesterday
 and doesn't today.

 He is not using OO Perl.


 You say later that he uses XML::XPath. That is an object-oriented module.


  I have asked him for a small code snippet that reproduces the error
 (I'm sure he is unwilling to show the entire code!).

 We have rules requiring the standard use of use strict and use
 warnings in all our Perl scripts.

 We use Perl on Windows, Linux and Solaris (all his scripts are
 supposed to run without modification on Linux and Windows).


 Hi Kenneth.

 All of the above may or may not be a proper assessment of your
 situation, but it has nothing to do with Perl. My assessment would be
 that your manager isn't doing his job, but also that you are bringing a
 personal conflict to a Perl list instead of to your manager. He is there
 to resolve things like this, and you haven't told him the relevant facts.


He claims this: use strict; use warnings; use XML::XPath;


 Until you have evidence otherwise, your colleague is telling the truth.


  Trying to get value for:
 $ha = $xPath-findnodes('//job');


 You have shown no code for the derivation of $xPath, or the declaration of
 $ha.


  Error:
 Can't call method findnodes on unblessed reference at file_name line
 line_number.


 Why are you hiding line_number from us when we have no code? That
 also makes sense with the rest of your mail, which shows that whatever
 you have dumped is unblessed.


  Output from Data::Dumper follows:

 $VAR1 = {
   'object' => [
       {
         'objectId' => 'job-21461',
         'job' => {
             'priority' => 'normal',
             'status' => 'completed',
             'outcome' => 'success',
             'jobName' => 'DeleteBuilds-20091106',
             'jobId' => '21461',
             'lastModifiedBy' => 'admin',
         }
       },

       {
         'objectId' => 'job-21473',
         'job' => {
             'priority' => 'normal',
             'status' => 'completed',
             'outcome' => 'success',
             'jobName' => 'DeleteBuilds-20091107',
             'jobId' => '21473',
             'lastModifiedBy' => 'admin',
         }
       },
   ]
 }


 That looks fine, except that all you have printed is a hash of data. It
 isn't blessed and so it isn't an object.

 Please grow up and ask Perl questions. It looks to me as if you are as
 silly as each other. I certainly wouldn't employ either of you.

 I also wonder if you 'Kenneth Wolcott' and your friend are the same
 person. Since your name sounds English you embarrass me: the vast
 majority of Englishmen are far more professional than yourself.

 Rob


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



:-) Guess it is getting late in England as well then :-)

Anyway...

use strict;
use warnings;
use Ec;
use XML::XPath;
use Data::Dumper;

my $ec = Ec->new or die "Can not create Ec object $!\n";
my $xPath;
$xPath = $ec->findObjects('job');
print Dumper($xPath);
#my $ha = $xPath->findnodes('//job');
#print Dumper($ha);

This makes no sense at all:

use strict; # Good, no complaints
use warnings; # Good, no complaints
use Ec; # What is this? A home-brew module? How is it relevant to the rest
of the code?
use XML::XPath; # XPath module
use Data::Dumper; # My all time favorite module

my $ec = Ec->new or die "Can not create Ec object $!\n"; # Creating a new Ec
object (whatever it might be) and storing this in $ec
my $xPath; # Declaring a variable called $xPath
$xPath = $ec->findObjects('job'); # Using the $ec variable to do something
and storing the returned value in the $xPath variable
print Dumper($xPath); # Dumping the $xPath variable; looking at the above it
is a neat looking nested data structure
#my $ha = $xPath->findnodes('//job'); # Hang on a minute, now the $xPath
variable containing that data structure is used as an object and Perl goes
Bleh!!! Not all that strange, is it?
#print Dumper($ha);


Re: problem in creating a complex hash

2011-05-13 Thread Rob Coops
It might not look nice but I would do the following:

#!/usr/local/bin/perl

use strict;
use warnings;

my $arrayref = [ [ [ 'user1', 'c'], [ 'user2', 'a'], [ 'user2', 'b' ],[
'user2', 'd' ],[ 'user3', 'a' ],[ 'user2', 'f' ] ] ];

my %hash;
foreach my $arrayreference ( @{${$arrayref}[0]} ) {
 if ( ! defined $hash{${$arrayreference}[0]} ) {
  $hash{${$arrayreference}[0]} = { group => [ ${$arrayreference}[1] ] };
 } else {
  push @{${$hash{${$arrayreference}[0]}}{group}}, ${$arrayreference}[1];
 }
}

use Data::Dumper;
print Dumper %hash;

It prints:
$ perl test.pl
$VAR1 = 'user1';
$VAR2 = {
  'group' => [
   'c'
 ]
};
$VAR3 = 'user3';
$VAR4 = {
  'group' => [
   'a'
 ]
};
$VAR5 = 'user2';
$VAR6 = {
  'group' => [
   'a',
   'b',
   'd',
   'f'
 ]
};

Which is I believe what you are after right?

Regards,

Rob

On Fri, May 13, 2011 at 12:11 PM, Agnello George
agnello.dso...@gmail.comwrote:

 Hi All

 I have a small issue in arranging data with a array ref .

 $arrayref = [ [ [ 'user1', 'c'], [ 'user2', 'a'], [ 'user2', 'b' ],[
 'user2', 'd' ],[ 'user3', 'a' ],[ 'user2', 'f' ] ] ];


 i tried the following

 my %sh ;

 foreach my $i ( @$arrayref) {
 push (@{$sh{$i->[0]}}, { group => [ $i->[1] ] } );
   }


 required hash

  %sh = ( user1 => { group => [ c ] },

user2 => { group => [ a, b, d, f ] },

  user3 => { group => [ a ] }
  )



 but i am not able to get it in this format .

 Can some one please help me out

 Thanks a lot


 --
 Regards
 Agnello D'souza

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/





Re: help me with a parsing script please

2011-05-12 Thread Rob Coops
You are almost there :-)

my ($helper1, $helper2);
my $counter = 1;
foreach my $line (@list){
  chomp $line;
  my @coordinates = split /\s+/, $line; # split on whitespace (the input looks tab separated)
  my $chromosome = $coordinates[0];
  my $start = $coordinates[1];
  my $end = $coordinates[2];
  my $strand = $coordinates[3];
  # Using a simple modulo operation (it returns 1 if the counter is an uneven
  # number and 0 otherwise) you can easily tell the even and uneven lines
  # apart: on the uneven line you capture the Chromosome and Start, and on
  # the even line you capture the End and the Strand, as well as printing out
  # the beginning of the previous line and the end of the current line.
  #
  # Using a for loop instead of a foreach loop will result in a nicer looking
  # loop and it might (never actually tested this) be a little bit faster as
  # well (benchmark that to be sure), which on the large amounts of data you
  # are likely to be processing might save you a decent bit of time.
  if ( $counter % 2 ) { $helper1 = "$chromosome $start"; }
  else                { $helper2 = "$end $strand"; print "$helper1 $helper2\n"; }
  $counter++;
}

Hope that helps,

Rob

On Thu, May 12, 2011 at 11:23 AM, Nathalie Conte n...@sanger.ac.uk wrote:


 HI,

 I have this file format
 chr  start  end  strand
 x    12     24   1
 x    24     48   1
 1    100    124  -1
 1    124    148  -1

 Basically I would like to create a new file by grouping the start of the
 first line (12) with the end of the second line (48) and so on
 the output should look like this:
 x    12     48   1
 1    100    148  -1

 I have this script to split and iterate over each line, but I don't know
 how to group 2 lines together, and take the start of the firt line and the
 end on the second line? could you please advise? thanks

 unless (open(FH, $file)){
   print "Cannot open file \"$file\"\n\n";
 }

 my @list = <FH>;
 close FH;

 open(OUTFILE, ">grouped.txt");


 foreach my $line(@list){
  chomp $line;
   my @coordinates = split(/' '/, $region);
   my $chromosome = $coordinates[0];
   my $start = $coordinates[1];
   my $end = $coordinates[2];
   my $strand = $coordinates[3];
 ...???



 --
 The Wellcome Trust Sanger Institute is operated by Genome Research Limited,
 a charity registered in England with number 1021457 and a company registered
 in England with number 2742969, whose registered office is 215 Euston Road,
 London, NW1 2BE.
  --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/





Re: special character regex match

2011-05-05 Thread Rob Coops
Ok let me try to understand the question.

You have a form (on an HTML page or inside an application, command line or
graphics?), this form contains a textarea that should not allow users to
freely enter text; the input should be scanned for bad characters, and if
those are found the user should receive a warning.

Regardless of the format the backend handling will be the same because you
have to assume a user will find a way to work around any frontend
limitations you impose and send the bad characters directly to your
backend application.

So let's see what we can do with these characters:
/
%
$
###
space

use strict;
use warnings;

use Data::Dumper;
print Dumper check_input( '/' );
print Dumper check_input( '%' );
print Dumper check_input( '$' );
print Dumper check_input( '##' );
print Dumper check_input( ' ' );
print Dumper check_input( 'abcdefghijklmnopqrstuvwxyz' );
print Dumper check_input( 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' );
print Dumper check_input( '1234567890' );
print Dumper check_input( '!@^*()_+-=' );
print Dumper check_input( 'a A' );

sub check_input {
 my $input = shift; # Take the input from @_
 return 1 unless ( $input =~ /\/|%|\$|\#\#|\s/g ); # Return 1 if the input
is free of the bad characters
 return 0; # Return 0 the input contained a Bad character
}

I think that should do the trick... but as Rob D. said your question is not
very clear so it might not be what you are looking for at all.

Regards,

Rob


On Thu, May 5, 2011 at 12:28 PM, Rob Dixon rob.di...@gmx.com wrote:

 On 05/05/2011 10:56, Agnello George wrote:

 Hi

 I got a form with and users and insert in textarea   words like :

 /word_word2/wordword_word.txt
 /word1_word/wordword_word2.txt
 /word_word/word2word_word.txt

 but they should not be able to type the following  in the   text area

 /
 %
 $
 ###
 space


  unless ( $files =~ /^\s+$/ || /[\_\/\@\$\#]/) {

 print "yes" ;

 }


 I'm sorry, but I don't understand what your question is?

 Rob


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/





Re: inteligent DBI

2011-04-07 Thread Rob Coops
Even if it is not possible at the moment you could always override this
function and make it work this way... I suspect it might be quite a bit of
work, but certainly not impossible. After all, you are programming: anything
you can think of can be done, the only question is whether it is worth the
effort to do this.

Personally I would simply create a function for this:
sub insert {
  my ( $query, $bind_params ) = @_;
  my $sth = $dbh->prepare( $query );
  $sth->bind_param( @{$bind_params} );
  $sth->execute;
  # ...
  # Return the results to the caller
}

Something like that should do the trick (the code above will most likely not
work as-is, but I assume you will understand where I am going with this...).
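For what it is worth, a slightly more complete sketch of such a helper could
look like the following (the sub name and the error handling are my own
assumptions; $binds is expected to be an array ref of [ value, \%attr ]
pairs, one per placeholder):

sub insert_with_attrs {
    my ( $dbh, $query, $binds ) = @_;

    my $sth = $dbh->prepare($query) or die $dbh->errstr;

    # Bind each value with its own attribute hash (e.g. { ora_type => SQLT_BIN }).
    my $position = 1;
    for my $bind ( @{$binds} ) {
        my ( $value, $attr ) = @{$bind};
        $sth->bind_param( $position++, $value, $attr );
    }

    $sth->execute or die $sth->errstr;
    return $sth->rows;    # return something useful to the caller
}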

Working with Perl in companies you will often see wrappers for DBI that do
things like what you are looking for, or that provide other methods of
returning result sets (XML or JSON rather than Perl data structures), stuff
like that. I myself like to use a wrapper that simplifies the DBI interface
a bit, so I basically create a DB object that offers me things like insert,
delete, update, execute, select, etc. This wrapper offers Log4perl logging
capability and allows me to debug specific statements, which means I can, by
simply switching on tracing, see exactly what query is being sent to the
database. That helps a lot, certainly if you are dealing with MySQL and its
rather useless way of reporting errors in a query. It also is a great way of
preventing SQL injection issues, as you only need to deal with those kinds of
troubles once, in the wrapper.

Personally I can say that a good wrapper around a relatively complex module
such as DBI can help a lot in saving time when you end up working with it
over and over again in different projects. The only thing is that if you are
going to be working in a larger team make sure that all team members buy
into the same wrapper and you do not end up with 12 implementations of a DBI
wrapper that all do slightly different things as you all optimized a
slightly different bit of your daily work. ;-)

Regards,

Rob

On Thu, Apr 7, 2011 at 11:01 AM, marcos rebelo ole...@gmail.com wrote:

 Probably it is impossible, but it would be really usefull.

 I'm doing something like:

 my $sth = $dbh-prepare('INSERT INTO my_table(field_1) VALUES (?)');
 $sth-bind_param( 1, 'pippo', { 'ora_type' = SQLT_BIN } );
 $sth-execute;

 since I need to pass a parameter in the bind_param

 I would really like to do something like:

 $dbh-do( 'INSERT INTO my_table(field_1) VALUES (?)', [ 'pippo', {
 'ora_type' = SQLT_BIN } ] )

 is there a way to do this?

 maybe with some DBIx

 Best Regards
 Marcos Rebelo

 --
 Marcos Rebelo
 http://www.oleber.com/
 Milan Perl Mongers leader https://sites.google.com/site/milanperlmongers/
 Webmaster of http://perl5notebook.oleber.com

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/





Re: Is CDB_File good for CDB in readonly

2011-04-04 Thread Rob Coops
Hi Ram,

CDB, or Berkeley DB, is optimized for reading; it is meant to deal with large
volumes of simple, non-relational basic data, so no images, blobs, globs or
any of that stuff... besides that it will be fast, very very fast.

To give you an example: in the past I used to work on a Berkeley DB driven
DHCP server for an ISP. The system held some 1.2 million leases for customer
systems and their modems. The system had response times in the
sub-millisecond range for every query; of course this was while being written
to by the backend systems provisioning new customers and updating others with
new leases or new subscription types.

With 10M records you might find the response time a little slower (not
sure, never tried that), but with today's systems and the amount of memory
available I have a feeling you will not really notice the response times
drop too much (if at all).

Anyway, another option might be to do all of this in memory: rather than
spend your time messing about with disks or even SSDs you could simply read
the CDB once into memory and then work from there. I would not start off with
that, only bother with something like that if you get in trouble with
response times, but it will certainly help an awful lot.

As a last point, have a look at the internals of the CDB module you are
using; if it is pure Perl you will most likely want to redo that in a
compiled language like C/C++, just to get that bit of extra speed out of it.
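Since the question is specifically about CDB_File and read-only lookups, a
minimal lookup sketch could look like this (the file name and key are made
up):

use strict;
use warnings;
use CDB_File;

# Tie a read-only view onto an existing .cdb file.
tie my %cdb, 'CDB_File', 'leases.cdb'
    or die "Could not tie leases.cdb: $!";

# exists() only has to find the key, it never needs the value.
print "found\n" if exists $cdb{'some_key'};

untie %cdb;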

I hope that helps; even though there is no real Perl in my mail I think
it can still be useful.

Regards,

Rob

On Mon, Apr 4, 2011 at 8:47 AM, Ramprasad Prasad ramprasad...@gmail.comwrote:

 I have a CDB database with 1 Million to 10 Million records .. This is just
 a
 key where the app needs to lookup if an entry exists or not.  I dont need
 the value.  My app will be looking up the key_exists() almost 100 times a
 second

 What is the fastest way of implementin this

 I have a few questions

 1) Is CDB the best way of storing such data
 2) Is CDB_File optimized for readonly lookups
 3) Is it recommended to have an in memory hash .. If memory is not an issue
 ?


 --
 Thanks
 Ram
  http://www.netcore.co.in/




 n http://pragatee.com



Re: perl dd

2011-03-28 Thread Rob Coops
On Mon, Mar 28, 2011 at 10:18 AM, a b testa...@gmail.com wrote:

 Hi,

 I want to know if any module for dd in perl is available.

 Any pointers will a great help

 Thanks,
 a b


Have you tried having a look at search.cpan.org?
Oh, by the way what does dd stand for? Acronyms are great and all but unless
you are asking the guy next to you working with the same acronyms on a daily
basis it is unlikely that others will know what it is that you are talking
about.

Regards,

Rob


Re: perl dd

2011-03-28 Thread Rob Coops
I had a quick look but I can only see people making system calls using
system() or exec() etc... which sort of makes sense as there is no such
command on pretty much any system except for *nix systems. I think you might
simply have to write your own implementation or simply rely on system calls
to get this done. IT will mean though that you code will not be very
portable as it will only work on *nix systems and not on pretty much
anything else.

Regards,

Rob

On Mon, Mar 28, 2011 at 11:05 AM, a b testa...@gmail.com wrote:

 Hey Rob,


 Yes, you are right
 Thanks to mention this out.

 I was referring to unix dd command. was wondering if we have any module
 already available.

 i didn't found on cpan

 Thx
 a b

 On Mon, Mar 28, 2011 at 2:05 PM, Rob Coops rco...@gmail.com wrote:



 On Mon, Mar 28, 2011 at 10:18 AM, a b testa...@gmail.com wrote:

 Hi,

 I want to know if any module for dd in perl is available.

 Any pointers will a great help

 Thanks,
 a b


 Have you tried having a look at search.cpan.org?
 Oh, by the way what does dd stand for? Acronyms are great and all but
 unless you are asking the guy next to you working with the same acronyms on
 a daily basis it is unlikely that others will know what it is that you are
 talking about.

 Regards,

 Rob





Re: Using regex special characters from a string

2011-03-08 Thread Rob Coops
Hi Ben,

Not sure I get your point... but this is what it sounds like to me.

I have a script, and I want to feed it a special thing to let it know that
any character (A-Z or a-z, does upper/lower case matter?) is valid, but I
also want to use other characters at the same time. So ./script.pl -s ABC is
valid but also ./script.pl -s AB<any character>DEF is valid.

In most operating systems this is done with the * or the ? character: *
representing any number of characters and ? exactly one character.

So to keep with the general style of things and keep the learning curve as
low as possible you could just say that * you do not use, because this would
allow an enormous number of possible characters, combinations and
permutations, which is kind of pointless unless you want the script never to
return any real results. So only ? then, so something like ./script.pl -s l??p
would result in the words leap, loop etc...

So all you need to do is simply say: in case I see the character ? treat it
as [a-z] or [a-zA-Z] in your regex and you should be fine. Something like
if ( $character eq '?' ) { $character = '[a-zA-Z]'; } should do the trick.
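To make that a little more concrete, here is a minimal sketch of the idea
(the pattern, the word list and the anchoring are assumptions, just to show
the substitution):

use strict;
use warnings;

# Hypothetical example: turn a pattern like 'l??p' into a regex.
my $pattern = shift || 'l??p';
( my $regex = quotemeta $pattern ) =~ s/\\\?/[a-zA-Z]/g;

my @words = qw( leap loop lamp pool );
print "$_\n" for grep { /^$regex$/ } @words;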

I guess I am thinking too simply here or simply misunderstanding the problem,
as it seems simple enough. It would help if you could for instance post a
bit of example code so the list can see what you are up to; sometimes a few
lines of code can say more than a thousand words ;-)

Regards,

Rob




On Tue, Mar 8, 2011 at 9:42 PM, Ben Lavery ben.lav...@gmail.com wrote:

 Hi all,

 I have a script which takes a string of alphabetic characters as an
 argument, generates all combinations of the characters and all permutations
 of the combinations, then looks up each result in a list of valid words, if
 the result is a valid word it gets stored in an array.
 I would like to be able to specify any alphabetic character from the
 command line.  Is there a clean way of doing this?  I thought that I could
 search the string for such characters and cycle through all legal
 combinations, but this does seem particularly clean...

 I've had a look about and found lots of things about using wildcards from
 the command line that the shell deals with, but nothing about using a
 wildcard or otherwise inside a script which was declared on the command
 line...

 Many thanks for your time,

 Ben
 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/





Re: multidimensional array check

2011-03-04 Thread Rob Coops
Hi Vito,

Ok so let's give this a try...

First let's deal with comparing the arrays, then we can get to the size issues
and memory usage etc...

You have the data in an array, which is good but not all that nice: after
all it will make for a lot of work if you want to compare two arrays (you
end up having to compare all elements in array 1 with all elements in array
2, which with 100 elements in each array already means 100x100 = 10,000
comparisons), and if your data sets grow this will only get worse.

So let's use a hash instead, as this allows for key lookups and, used in an
intelligent way, will save you a lot of comparing. Here is what I would do:

use strict;
use warnings;

my @G1 = ( [ 'alfa', 10 ], [ 'beta',  11 ] );
my @L1 = ( [ 'alfa', 10 ], [ 'gamma', 12 ] );
my @unique;
my @overlap;

my %hash;
foreach my $result ( @G1 ) {
 my $key = join '*|*', @{$result}; # Makes [alfa, 10] into alfa*|*10
  # @{$result} tells perl that $result holds an array reference,
  # otherwise this whole trick would not work ;)
 $hash{ $key }++;
}

# We now see in %hash the following key value pairs: alfa*|*10 => 1 and
# beta*|*11 => 1.
# All that is left is adding the other array to it and we will find the
# overlap automatically.

# Since we will never use @G1 again we can drop this
undef @G1; # Basically sets @G1 to undef and lets perl know we will never
           # use this data again, so it is free to clean up the memory


foreach my $result ( @L1 ) {
 my $key = join '*|*', @{$result}; # Makes [alfa, 10] into alfa*|*10
  # @{$result} tells perl that $result holds an array reference,
  # otherwise this whole trick would not work ;)
 $hash{ $key }++;
}
# %hash will now contain alfa*|*10 => 2 and beta*|*11 => 1.
# Finding out which ones are unique is easy: we simply remove all that have a
# value greater than 1.

# Since we will never use @L1 again we can drop this
undef @L1; # Basically sets @L1 to undef and lets perl know we will never
           # use this data again, so it is free to clean up the memory

foreach my $key ( keys %hash ) {
 if ( $hash{ $key } == 1 ) { # This key only appeared once
  my @array = split ( /\*\|\*/, $key ); # Pull the key apart into the two
  # values (or 200k values if you have that many columns of course)
  # Now we add this to the @unique array
  push @unique, [@array];
 } else { # This is where we deal with the over lapping results
  my @array = split /\*\|\*/, $key; # Pull the key apart into the two values
  # (or 200k values if you have that many columns of course)
  # Now we add this to the @overlap array
  push @overlap, [@array];
 }
}

use Data::Dumper;
print "Overlap\n";
print Dumper @overlap;
print "Unique\n";
print Dumper @unique;

That's easy enough, right? Of course the Perl purists will say use map, etc.,
but that would hide a bit too much of the logic of what is going on from a
beginner. Of course you should know that this can all be done a lot faster
and more efficiently etc., but for a beginner it is more important that you
understand the logic of it all than that you know the fancy tricks without
understanding how they work.

Anyway, I think this is what you are looking for... A DB link is always a
hard thing to get approved, I know that one; besides, depending on the
location of the databases you might not gain all that much by having the
database deal with this for you, due to network latency and such things. If
you did want to do this on the DB level, look at the Oracle MINUS command for
the easiest way to implement it, though there are others of course ;-)

I hope this helps you a bit, regards,

Rob

On Fri, Mar 4, 2011 at 12:30 PM, mammoccio vito.pasc...@gmail.com wrote:

 First of all, thank for u support! Really !

  The most efficient way would be to arrange to have the Oracle database
  engine do most of the comparisons. I am not enough of a database expert
 to
  recommend ways to do this.
 I agree with u , but it's something that is not possible without a
 dblink, that's why I'm here to write some code :)
  The fastest way to do this in Perl would be to save the results of one
 query
  in memory in a data structure of some type, either an array or a hash.
 Then,
  as the results from the second query are fetched, compare against the
 copy
  in memory and save what differs (you have not explained how to decide
 when
  the two results differ).
 That's exactly what I was trying to do,and u are right probably I was
 not so clear about how to decide  when the two results differ.
 Let's try to put some light on:

 The results of the first query will be an array something like this:

 @G1 = ([alfa ,  10] ,
[beta ,  11]);

 Similar the second query will give this kind of array:

 @L1 = ([alfa ,   10],
[gamma ,  12]);

 And finally the third query:

 @G2 = (gamma);


 G1 and G2 are two query on the same db and same schema too.
 L1 is on another db server and of course on a different schema.

 So what I need is to discharge the results that I found in all the three
 queries and take only 

Re: multidimensional array check

2011-03-04 Thread Rob Coops
You are now talking about the finer points of the DBI module... :-)

The simplest way of doing what you want is this:

my $tbl_ary_ref = $query_handle->fetchall_arrayref;

Here though you will need a bit of background as this does exactly what you
are looking for but with a twist. I'll try to explain...

In Perl every value you store, no matter if it is a scalar ($x = 1), an array
(@x = (0)) or a hash (%x = ( 'key' => 1 )), is stored in your
computer's memory. And when you say my $y = $x; what Perl will do is make a
copy of $x and call that $y. Now you are taking twice as much memory, which
is not a problem if $x has a value of '1', but if $x contains the collected
works of 19th century poets then this can be quite expensive in terms of
memory usage.
So there is another way to achieve the same: instead of saying my $y = $x;
you can say my $y = \$x; what Perl does in this case is fill $y with a
reference to $x. This reference tells Perl where in memory $x is stored.
This way you save a lot of memory and still have two variables with
identical values in them. Now I suspect that you are already thinking: wait a
moment, if $y only points to $x and I change something in $x, will this not
influence $y? And you are right. For what we are going to be doing this is
not that important, but it is good to know for troubleshooting strange
behavior in your future scripts. ;-)
Of course what you can do with scalars you can do with arrays and hashes as
well, just as easily.

So $tbl_ary_ref will contain the results as a reference to an array which
contains references to arrays, one for each row, so basically
[ [ 'column 1', 'column 2' ], [ 'column 1', 'column 2' ] ]. The only thing is
you will need to access this data...
Remember the comment I made about @{$result} in my previous email? That is
how it is done: Perl sees only the reference, so what we did there is tell
Perl that it should treat $result as an array.

So in your case to get to the actual value of column 1 on row 5 of the
result set you would write something like this:

my $row_ref = ${ $tbl_ary_ref }[4]; # Get to the right row (number 4 as perl
arrays start counting at 0 not at 1)
my $column_value = ${ $row_ref }[0]; # Using ${}[0] notation as @{}[0] just
like @array[0] will result in a warning though it will still work for
compatibility reasons

or of course if you want to be fancy:

my $value = ${ ${ $tbl_ary_ref }[4] }[0]; # This will provide the same result
as above but is far less readable and not all that desirable because of it

A last point that you will want to know when you start working with
references is that you can always dereference something (basically make a
copy of whatever the reference is pointing to). So you can for instance do
this:
my @array = ( 'an array' );
my $array_ref = \@array;

my @copy_array = @{$array_ref};

Doing this means that you can safely make changes to the @copy_array without
having to worry about making a mess of the array that $array_ref is pointing
to. It will how ever cause you to have that array in memory twice, so there
is a price to pay for that.

So you should now be able to retrieve the whole result set as a reference to
an array containing references to arrays. And I hope that
my ramblings explained a bit how you can use that reference to get to the
underlying values. If you combine all of that I think you'll be able to work
out the way to get your program to work the way you want it to.
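To tie this together with your earlier while-loop attempt, the whole fetch
could look roughly like this (a minimal sketch; I assume the query returns
the same two columns as in your example):

my $tbl_ary_ref = $query_handle->fetchall_arrayref;

foreach my $row_ref ( @{ $tbl_ary_ref } ) {
    my ( $name, $number ) = @{ $row_ref };   # dereference one row at a time
    print "$name => $number\n";
}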

By the way, if I were you I would pick a slightly less complex task to learn
a programming language with next time; this one is, though interesting,
certainly not an easy one to use as a way to learn the language. :-)

Regards,

Rob

On Fri, Mar 4, 2011 at 3:25 PM, mammoccio vito.pasc...@gmail.com wrote:

 Il 04/03/2011 13:26, Rob Coops ha scritto:

 First of all, really tnx Rob! I really appreciate u way to teach (even
 to folks with no programming background like me) the logic behind!


 I know that maybe this is a silly question but how can I transform the

 my @G1 = ([alfa ,  10], [beta ,  11]);

 In something that take data from the oracle select statement  ?

 I mean I tried something like:

 my @G1 = $query_handle-fetchrow_array

 But it simply don't works

 I was at this point before and I got no solution because I always see
 example that using something like

 while (my @G1 = $query_handle-fetchrow_array)
 { do something
 };

 and I didn't found a simply solution to put stuff in a array...

 Sorry to bother with so silly question, but really I want to learn how
 to do it!

 --
 Vito Pascali
 ICT Security Manager
 IT Senior System Administrator
 vito.pasc...@gmail.com




Re: multidimensional array check

2011-03-04 Thread Rob Coops
You are very right; a really nice explanation of this can be found in the
Advanced Perl Programming book from O'Reilly, see the link below:

http://oreilly.com/catalog/advperl/excerpt/ch01.html

If you already went through the other two then this is definitely a good one
to read, as it goes that extra step and provides you with a good insight into
the underlying logic.

On Fri, Mar 4, 2011 at 7:30 PM, Brian F. Yulga byu...@langly.dyndns.orgwrote:


 Rob Coops wrote:


  So you should now be able to retrieve the whole result set as a
  reference to an array containing references to arrays. And I hope
  that my ramblings explained a bit how you can use that reference to
  get to the underlying values. If you combine all of that I think
  you'll be able to work out the way to get your program to work the
  way you want it to.


 FWIW, I found your ramblings quite helpful, since I am working to get more
 comfortable with variables, arrays, and references, on my way to learning
 objects.

 Coming from a basic understanding of C, the reference variable concept
 sounds a lot like pointers in C.  But, my Learning Perl and Intermediate
 Perl books do not use that terminology.  Am I on a correct path to
 understanding by making this analogy?

 Thanks,
 Brian




Re: perl's threads

2011-02-11 Thread Rob Coops
I have to agree I missed things a bit there :-(

Forks are simply a better way of dealing with threads than the default Perl
ithreads; the benefits forks offer go far beyond the minimal drawbacks. By
now the notes about modules that still insist on using threads rather than
forks will for most users just be informative, as the better solution has
been available for long enough for most module maintainers to switch.
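As a rough illustration of how drop-in the forks module is meant to be (a
minimal sketch, not a benchmark; the doubling sub is just a placeholder):

use strict;
use warnings;
use forks;   # same API as threads, implemented with fork()

my @workers = map {
    threads->create( sub { return $_[0] * 2 }, $_ );
} 1 .. 4;

print $_->join(), "\n" for @workers;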



On Thu, Feb 10, 2011 at 11:05 AM, Dr.Ruud rvtol+use...@isolution.nl wrote:

 On 2011-02-09 14:03, Rob Coops wrote:

  But any developer that tells you that threads are not worth the
  trouble they cause or anything along those lines is clearly an
  experienced programmer who unfortunately has not realized yet
  that the computer world has changed incredibly over the past
  few years.

 Rob, from your reply I understand that you didn't pick up that it is about
 the forking/threading difference. (So it is not about doing things in
 parallel or not, as you seem to think.)

 This discussion is about why threads are (mostly) wrong, and forks are
 (basically) right, when it comes to parallelization.

 --
 Ruud


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/





Re: perl's threads

2011-02-09 Thread Rob Coops
Just adding a little to this discussion then. :-)

Threads regardless of the language are meant to do multiple things in
parallel. For instance I'm at the moment working on a tool that extracts
vast amounts of data from a web service as the calls to that service are
pretty much independent and the data on my side is first stored in a
database before further processing (using separate tables for each of the
calls), there is very little point in doing these actions sequentially.
After all both the database and the webservice are fully capable of handling
hundreds or even thousands of requests at a time without fail so why not run
things in parallel?
I found that even with a minimal amount of data I can speed up the process a
lot by simply creating an object that deals with the database and using this
as a base class to build the various API calling objects on. Each object
then is fully self-contained: it deals with the retrieval of its own data,
massages that into a format that fits its own tables and uses its parent's
code to deal with the database insertion.

It depends on your intended usage of threads. If you are thinking along the
lines of a multi-threaded process that operates on shared data, where the
threads are constantly sharing outcomes and using the results other threads
have produced to continue their processing so they can supply yet other
threads with data... think again. Though of course possible in Perl, it is
also a very, very complex thing to do and the performance will simply not be
there, at least not when compared to other solutions.
But as said, for simple parallel and independent actions using threads in
Perl is not such a bad idea.

The idea that threads are always bad regardless of programming language, or
are bad in many languages, is simply not true. Threads certainly have a place
in programming and in many if not all programming languages. With the
many-core processors of today and tomorrow, threads will become a much more
important and useful tool in the programmer's arsenal.
The main thing is that for a very long time programmers have had very little
real use for threads as most computers had a single core with 1 or in later
years 2 threads per core. That simple fact has convinced many programmers
that threads are not all that they are hyped up to be. After all you are
still waiting on the hardware to become available, and if you run more than 2
threads at a time the time the processor needs for switching between threads
removes many of the performance benefits threads could offer.
Now though that one sees processors that can handle up to 32 threads and
soon even 128 threads at a time, multiple threads will prove to be a lot more
useful than most programmers have given them credit for. As the processor
does not need to do all the switching between threads and can simply
continue working on the thread without interruption the additional speed
that literally processing in parallel can bring will be very noticeable. The
main problem that developers will find in terms of raw performance is the
I/O of these systems which will not be able to provide data fast enough to
keep the threads from having to wait for data to become available.
Of course the old monster of how do you deal with multiple threads that do
need to exchange data will be back on the table and certainly if you are
doing funky stuff such as AI where many different inputs are processed at
the same time and all can and should to an extent influence each other, then
you will spend many nights waking up screaming in the middle of the night as
you get tangled in the multiple-thread nightmare again.

As for Perl, the threads implementation it offers is not too bad and quite
convenient for the simpler parallel-run scenario. For serious performance in
the multi-threaded arena Perl is simply not the tool for the job, it is as
simple as that.
But any developer that tells you that threads are not worth the trouble they
cause, or anything along those lines, is clearly an experienced programmer
who unfortunately has not realized yet that the computer world has
changed incredibly over the past few years.



2011/2/9 Shlomi Fish shlo...@iglu.org.il

 On Tuesday 08 Feb 2011 10:05:47 Dr.Ruud wrote:
  On 2011-02-07 11:30, Shlomi Fish wrote:
   Threads work pretty well in C, though they are extremely tricky to get
   right for non-trivial programs
 
  That they work pretty well is probably about user experience of some
  heavily interactive system. But only if the user accepts crashes and
  deadlocks now and then.
 

 What I meant by work pretty well, is that they are lightweight, cheap,
 perform
 well, and can often increase performance.

  The tricky to get right misses the point that systems based on threads
  have no proper math to back them up.
 

 [citation needed]. I believe I've seen some mathematical models of threads,
 and you can prove the lack of deadlocks/livelocks/etc. in a model of the
 system mathematically. But like you said, if one 

Re: Need Help with Perl Mobile Environment

2011-01-25 Thread Rob Coops
On Tue, Jan 25, 2011 at 8:46 AM, Khabza Mkhize khabza@gmail.com wrote:

 Can you please help with where to find information about Perl mobile
 version
 I need to develop  Accommodation Booking Website that will run on Smart
 phone, Iphone And Blackberry.

 Please send me a links with relevant information

 regards
 Khabazela
 Green IT Web
 http://www.greenitweb.co.za

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Khabazela,

As you will not be running Perl on the mobile but on the server side you
should be fine.
The way it is normally done is that you use CGI for all your server-side jobs
such as keeping track of sessions, entering data in databases or other
content stores, sending out mails, etc. The end user gets to see a nice
page with HTML, CSS and JavaScript that does all the funky AJAX stuff (which
calls your server-side script written in Perl) to spice things up a bit.

It is no different than creating something like this with PHP, VB, J2EE or
any other server-side language. Even the techniques used to communicate
between the JavaScript on the customer side and the code on the server side
will be the same. So much so that you can use things like jQuery or GWT on
the client side and implement the server side in Perl without the user
noticing the difference.
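As a very rough sketch of such a server-side endpoint (the script, its URL
and the JSON payload are all made up; the page's AJAX call would simply
request this script):

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $q = CGI->new;

# Send an HTTP header and a tiny JSON body back to the AJAX caller.
print $q->header('application/json');
print '{"status":"ok"}';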

There are lots of examples, but looking at what you are building I would
advise having a look at a lot of the security-related examples as well:
things that explain how your site can be hacked, sessions taken over and
information from users copied, as most examples you will find online seem to
completely ignore the security aspect of building a website, which in your
case could cause serious problems.

Regards,

Rob


Re: Capture nth element of a file

2011-01-21 Thread Rob Coops
On Fri, Jan 21, 2011 at 5:43 PM, jet speed speedj...@googlemail.com wrote:

 Hi All,
 I need some help with the below.

 I have a an input file with the below data

 1835
 1836
 1837
 1838
 1839
 183A
 183B
 183C
 183D


 #!/usr/bin/perl
 use strict;
 use warnings;

 my $filename;
 $filename = "input.txt";
 open (FILE, "<", $filename ) or die "Could not open $filename: $!";
 while (<FILE>) {
  chomp;
  print "$_ \n";
 }

 I can successfully read the file with this; what I want to achieve is to
 capture every 4th element of the input file into a variable, e.g. $A,
 that would be
 1838
 183C

 Any help would be much appreciated.

 Regds
 Sj


Using a simple counter would do the trick...

#!/usr/bin/perl
use strict;
use warnings;

my $filename;
my $counter = 0;
my $A;
$filename = "input.txt";
open (FILE, "<", $filename ) or die "Could not open $filename: $!";
while (<FILE>) {
  chomp;
  $counter++;
  next unless ($counter == 4);
  push @{$A}, $_ . "\n";
}

use Data::Dumper;
print Dumper $A;


Re: Capture nth element of a file

2011-01-21 Thread Rob Coops
On Fri, Jan 21, 2011 at 5:57 PM, Rob Coops rco...@gmail.com wrote:



 On Fri, Jan 21, 2011 at 5:43 PM, jet speed speedj...@googlemail.comwrote:

 Hi All,
 I need some help with the below.

 I have a an input file with the below data

 1835
 1836
 1837
 1838
 1839
 183A
 183B
 183C
 183D


 #!/usr/bin/perl
 use strict;
 use warnings;

  my $filename;
  $filename = "input.txt";
  open (FILE, "<", $filename ) or die "Could not open $filename: $!";
  while (<FILE>) {
   chomp;
   print "$_ \n";
  }

 I can successfully read the file with this, what i want to achive is to
 capture every 4 th element of the input file into a variable ex: $A.
 that would be
 1838
 183C

 Any help would be much appreciated.

 Regds
 Sj


 Using a simple counter would do the trick...

 #!/usr/bin/perl
 use strict;
 use warnings;

 my $filename;
 my $counter = 0;
 my $A;

 $filename = "input.txt";
 open (FILE, "<", $filename ) or die "Could not open $filename: $!";
 while (<FILE>) {
  chomp;
  $counter++;
  next unless ($counter == 4);

$counter=0; # Of course you should not forget to reset your counter ;-)

  push @{$A}, $_ . "\n";
 }
 }

 use Data::Dumper;
 print Dumper $A;



Re: Strategy for diagnosing context-sensitive bug

2010-12-07 Thread Rob Coops
On Tue, Dec 7, 2010 at 2:21 PM, Rob Dixon rob.di...@gmx.com wrote:

 On 07/12/2010 09:24, Jonathan Pool wrote:


  Are you familiar with the perl debugger?


 Thanks much for your reply. I haven't used the debugger, partly
 because its documentation describes it as an interactive tool and
 it's not clear to me how that works in my context. The script is
 executed by httpd in response to a browser form submission, which
 includes a file upload.

  Alternatively you could take a look at the Tie::Watch module to do
 asimilar thing without using the debugger.


 I could look into using this. On the other hand, to get access to
 the  variable's value one can use warn to write to the error log, and
 yet
 that blocks the problem from occurring, so I can imagine that any other
 method that reads the value and writes it somewhere will do the same.

  I would like to see the Perl source to be able to be more helpful
 to you.


 The current script where the error occurs is at


 http://panlex.svn.sourceforge.net/viewvc/panlex/perl/plxu.cgi?revision=27view=markup

 The error occurs at line 1297.


 So the line in question is

  @res = (split /\n\n/, ($in{res} = (&NmlML ($in{res}))), -1);

 and, although I can see no proper reason why it should make any difference
 in this case, I recommend removing the ampersand from the function call: it
 is bad practice in anything but very old Perl. I would also prefer to lose a
 few parentheses, purely for the sake of readability. So please try this:

  @res = split(/\n\n/, $in{res} = NmlML($in{res}), -1);

 Cheers,

 Rob


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/


 Hi Jonathan,

I have had a situation with a very old Perl (5.4 or something like that)
where on a certain line I would get an error thrown every single time, but if
I added a few lines to the script, even empty lines, the error would not occur
at all no matter what data I put in. This was in combination with Apache
httpd; since the whole thing was hosted with the cheapest web host available
on the web at the time, I dismissed this as an error in their setup and
didn't bother figuring it out. I mean, if a few empty lines can resolve the
problem, why bother spending a lot of time on identifying a seemingly random
error?

At the time I was fairly new to Perl and was quite sure that my lack of
knowledge was the thing that caused the problem in the first place, but now
that I see your problem description I am not so sure anymore... who knows,
there might be certain cases where Perl + Apache + certain commands in
certain combinations cause these inexplicable problems to occur. Could you
just for fun add a few empty lines (say 50 or so) just before the line 1297
and see if you then get the same error on line 1347? If not then your
problem is 100% identical to the problem I was facing several years ago and
maybe just maybe it was not my lack of perl knowledge or the bad hosting
provider that was at fault.

The biggest problem I have with CGI scripts is that they are so hard to
debug; it really comes down to pushing data to the error log or some other
file you are using to keep track of what is going on in the application, but
simply stepping through the script and seeing where the problem comes from is
not an option, which I feel is a really big gap in a developer's options to
properly resolve a reported bug in an application.

Regards,

Rob


Re: Read logfile to variable hash

2010-12-01 Thread Rob Coops
On Wed, Dec 1, 2010 at 10:24 AM, Dennis Jakobi roec...@googlemail.comwrote:

 Hi there,

 I have the following problem:

 I want to read a logfile in which every line follows this rule:
 value1:value2:value3...

 But the number of values differs. Sometimes a line has 2 values (the
 minimum) and sometimes 3 or more values. Now I want to push these
 values into a hash that follows that form:
 $hash-{value1}-{value2}...

 This is what I have so far:

 sub readLog {
my $self = shift;
my $logfile = getFilename();

open LOG, $logfile;

        foreach my $line (<LOG>) {
my @values = split /\:/, $line;
... # at this point I don't know what to do :(
}
 }

 Greetings
 Dennis

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Dennis

I know, a weird question that you likely already thought about, but still:
why are you not using a combination of array and hash?

$hash->{value1} = [ value2, value3, value4, ... ]

It would save you a lot of headaches in looping over all values that are on
that one line, because how are you going to easily handle a situation where
you have a hash of hashes with an undefined depth, without having to jump
through a lot of difficult hoops to make it work properly?
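As a quick sketch of that idea inside your readLog loop (a minimal sketch;
I split on ':' as in your example and treat the first field as the key):

my %hash;
foreach my $line (<LOG>) {
    chomp $line;
    my ( $first, @rest ) = split /:/, $line;
    push @{ $hash{$first} }, @rest;   # everything after value1 ends up on its array
}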

Regards,

Rob


Re: finding duplicates in txt file

2010-11-10 Thread Rob Coops
On Wed, Nov 10, 2010 at 1:58 PM, Natalie Conte n...@sanger.ac.uk wrote:

 HI,

  I have to find triplicates ProbeName $arg[8]  in a txt file and
 calculate a mean of their associated valueProbe $args[13]

 At the end my txt file has to contain unique probeName and valueProbe .

 This si my beginning of a script to pull out the probename and
 valueProbe information , I must say after that I don't know where to
 start to get duplicate and mean.

 use strict;

 use warnings;

 my @files = glob('/file/*sed91.txt');



 foreach my $file (@files) {

open (INFILE, $file) or die line15 $!;

open (OUTPUT, $file._output.txt) or die line16 $!;

while (my $line= INFILE){

my @args =split (/\t/, $line);

print OUTPUT $args[8] . \t . $args[13] .\n;

}

close OUTPUT;

close INFILE;

 }



 Thanks

 Nat




 --
  The Wellcome Trust Sanger Institute is operated by Genome Research
  Limited, a charity registered in England with number 1021457 and a
  company registered in England with number 2742969, whose registered
  office is 215 Euston Road, London, NW1 2BE.


Hi Natalie,

You managed to get the list of name and number, so far so good now the next
step.

First of all, write on a piece of paper how you would do this same task when
presented with the list you are printing to OUTPUT.
I think you would do something like this:
Write down all the unique names
For each unique name write down all the numbers associated with that name
Take all the numbers and calculate the mean value

So why not do exactly that:
Make a hash with all the names.
Now let's go over the list: for each key, you find any numbers associated
with this key (the key being your unique name) and add them to an array
which we save as the value of this key.
Now you should have a hash that has as KEY - VALUE pairs: Unique name,
Array of values.
Calculating the mean value given a list of numbers should be easy enough.
Of course you can compress this into a few less steps when programming, but
the general idea is the same.

So your program would look like this:

my %final_hash;
while (my $line = <INFILE>) {
  my @args = split (/\t/, $line);
  print OUTPUT $args[8] . "\t" . $args[13] . "\n";
  if ( ! defined $final_hash{ $args[8] } ) {
    # If the name has not been seen yet we assign it an array with one value.
    $final_hash{ $args[8] } = [ $args[13] ];
  } else {
    # If the name already exists then we simply push onto the array we
    # already have.
    push @{ $final_hash{ $args[8] } }, $args[13];
  }
}

# Now you calculate the mean value
foreach my $name ( keys %final_hash ) {
  my @list_of_values = @{ $final_hash{ $name } };
  my $mean;
  # ...
  # Up to you ;-)
  # ...
  print OUTPUT $name . "\t" . $mean . "\n";
}

That should pretty much do the trick I would think...
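And if you want a nudge for the mean itself, one minimal way to fill in the
"up to you" part could be (just a sketch, using the core List::Util module):

use List::Util qw(sum);

my $mean = sum(@list_of_values) / scalar @list_of_values;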

Regards,

Rob Coops


Re: using Net::SMTP unable to send out a mail

2010-11-04 Thread Rob Coops
On Thu, Nov 4, 2010 at 9:50 AM, Agnello George agnello.dso...@gmail.comwrote:

 HI

  i wrote a small simple script using Net::smtp however my local MTA is not
 accepting the mail here is the script :


 =

 #!/usr/bin/perl

 use strict;
 use warnings;
 use Net::SMTP;
 use Getopt::Long;

 my $from = '' || 'sys...@server1.com' ;
 my $sub = ''  || 'this is a testmail';
 my $content = '' || 'this is a data';
 my $to;
 my $relayhost = '' || 'localhost';


 GetOptions(
  'from|f=s' => \$from,
  'to|t=s' => \$to,
  'sub|s=s' => \$sub,
  'content|c=s' => \$content,
  'relayhost|h=s' => \$relayhost );

 die 'usage: sendmemail.pl--to n...@email.com'  unless( $to );


 my $smtp = Net::SMTP->new($relayhost,
  Debug => 1,
 );
 $smtp->mail($from);
 $smtp->to($to);
 $smtp->datasend("Subject: $sub");
 $smtp->datasend("\n");
 $smtp->datasend("$content\n");
 $smtp->dataend();
 $smtp->quit();


 

 i am getting the following error while executing the script

 [r...@localhost scripts]# perl  sendmemail.pl -t agnello.dso...@gmail.com
 Net::SMTP Net::SMTP(2.31)
 Net::SMTP   Net::Cmd(2.29)
 Net::SMTP Exporter(5.62)
 Net::SMTP   IO::Socket::INET(1.31)
 Net::SMTP IO::Socket(1.30_01)
 Net::SMTP   IO::Handle(1.27)
 Net::SMTP=GLOB(0x8fc86ac) 220 localhost.localdomain ESMTP Postfix
 Net::SMTP=GLOB(0x8fc86ac) EHLO localhost.localdomain
 Net::SMTP=GLOB(0x8fc86ac) 250-localhost.localdomain
 Net::SMTP=GLOB(0x8fc86ac) 250-PIPELINING
 Net::SMTP=GLOB(0x8fc86ac) 250-SIZE 1024
 Net::SMTP=GLOB(0x8fc86ac) 250-VRFY
 Net::SMTP=GLOB(0x8fc86ac) 250-ETRN
 Net::SMTP=GLOB(0x8fc86ac) 250-ENHANCEDSTATUSCODES
 Net::SMTP=GLOB(0x8fc86ac) 250-8BITMIME
 Net::SMTP=GLOB(0x8fc86ac) 250 DSN
 Net::SMTP=GLOB(0x8fc86ac) MAIL FROM:sys...@server1.com
 Net::SMTP=GLOB(0x8fc86ac) 250 2.1.0 Ok
 Net::SMTP=GLOB(0x8fc86ac) RCPT TO:agnello.dso...@gmail.com
 Net::SMTP=GLOB(0x8fc86ac) 250 2.1.5 Ok
 Net::SMTP=GLOB(0x8fc86ac) Subject: this is a testmail
 Net::SMTP=GLOB(0x8fc86ac) this is a data
 Net::SMTP=GLOB(0x8fc86ac) .
 Net::SMTP=GLOB(0x8fc86ac) 221 2.7.0 Error: I can break rules, too.
 Goodbye.
 Net::SMTP=GLOB(0x8fc86ac) QUIT
 Net::SMTP: Unexpected EOF on command channel at sendmemail.pl line 34



 your help will be of much value .

 Thanks

 --
 Regards
 Agnello D'souza


Hi Agnello,

Maybe have a short look at the following document:
http://www.answersthatwork.com/Download_Area/ATW_Library/Networking/Network__3-SMTP_Server_Status_Codes_and_SMTP_Error_Codes.pdf

And then in particular the following section:

SMTP Error 221 : The server is ending the mail session – it is closing the
conversation with the ISP as it has no more mail to send in this sending
session.

SMTP Status 221 is often misconstrued as an error condition, when it is in
fact nothing of the sort. The mail server is simply telling you that it has
processed everything it was given in this particular session, and it is now
going back into waiting mode.

Because SMTP status 221 is often misinterpreted, with some mail servers the
Network Administrators have changed the default text of SMTP Reply 221 to
something more meaningful and less alarming. For example, a typical SMTP
reply 221 might say “221 Goodbye” or “221 Closing connection”, or the most
irritating one we’ve seen “221 Bye”, Arrrgghh – can you blame anyone for
thinking there might be a problem ? Of course not ! So some Network
Administrators are these days being quite imaginative by changing the
default text of SMTP reply 221 to more user friendly messages like :
“221 Thank you for your business” (I love that one!), or “221 All messages
processed successfully in this session, SMTP connection is closing”.

It looks to me like all is working fine as far as the sending of mail
goes... the only reason you might not receive the email that I can think of
is that your SMTP server is not able to deliver the mail to the mailbox of
the receiving party, because it does not have access to the mailbox or
because it cannot reach another mail relay that will be able to get the
message delivered. In short, your SMTP server configuration might need a
tweak, but as far as I can tell you are doing everything right on the Perl
end.

Regards,

Rob


Re: Extract Javascript

2010-10-16 Thread Rob Coops
On Thu, Oct 14, 2010 at 7:48 PM, bones288 bkneab...@gmail.com wrote:

 what should I look into for extracting javascript from a webpage:

 http://www.costofwar.com/

 I want to periodically check the counter but I'll I can see is:

 script type=text/javascript src=/js/costofwar/costofwar-main.js/
 script

 So I don't 'see' the changing counter number.

 Any ideas?


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



No need for Perl here, it is just a very simple JavaScript file that is
included by using the relative path to the file. In short, the full script
can be found at: http://www.costofwar.com/js/costofwar/costofwar-main.js Have
a look at a basic explanation of JavaScript and you will see how this works,
or just click on the link in this email ;-)

Regards,

Rob


Re: split log file

2010-10-16 Thread Rob Coops
On Thu, Oct 14, 2010 at 4:54 PM, yo RO lyn...@gmail.com wrote:

 Hello I need to split a log file per days
 I have a file in txt format and I want to create a file with all data
 from one day in one file
 I will give example

 I have this imput
 3_21_2010;11:12\\trafic info
 3_21_2010;11:34\\trafic info
 3_21_2010;13:21\\trafic info
 3_22_2010;11:12\\trafic info
 3_22_2010;11:34\\trafic info
 3_22_2010;13:21\\trafic info
 3_22_2010;11:12\\trafic info
 3_23_2010;11:34\\trafic info
 3_23_2010;13:21\\trafic info
 3_23_2010;13:21\\trafic info
 3_24_2010;11:12\\trafic info
 3_24_2010;11:34\\trafic info
 3_24_2010;13:21\\trafic info

 I want to have in output 3 file
 file 1: name  3_21_2010.log
 contain
 3_21_2010;11:12\\trafic info
 3_21_2010;11:34\\trafic info
 3_21_2010;13:21\\trafic info

 file 2 : 3_22_2010.log
 contain

 3_22_2010;11:12\\trafic info
 3_22_2010;11:34\\trafic info
 3_22_2010;13:21\\trafic info
 3_22_2010;11:12\\trafic info

 and like this
 please help help me with this


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



This is the trick I would use.

   1. Read the directory containing the files and make an array of all files
   matching your file naming convention (use a regex for the check of the
   filename convention)
   2. Now loop over the array in step one and make a hash using the
   month_day notation as the key and an array of filenames matching this
   month_day notation as the value
   3. For each key in the hash create a new file and open all files in the
   value array of that key and write their contents to the new file just
   created. (Of course you will have to sort your array first to make sure that
   the logs are opened in the right order.)
   4. Optionally remove the old logs by simply unlinking them

You could of course combine steps 1 and 2 into a single pass, and the same
goes for steps 3 and 4, but it is usually easier to break things down into
small steps first and only work out how to do it more efficiently once you
have your head around it.

This sounds too much like a homework assignment to write it all out for you.
If it is, I am sure you have already dealt with most of this in class and it
is just the combination of all the different elements that is the challenge.
If it is not, then I am sure that by looking up the basic steps one at a time
(read directory, regular expression, create hash, create/read file and
unlink are the terms you want to search for) you will be able to figure it
out.
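
If it does turn out not to be homework, here is a minimal hint for the
regex-and-hash part only (a sketch based on the sample lines you posted;
creating and writing the per-day files is deliberately left for you):

 use strict;
 use warnings;

 my %by_day;
 while ( my $line = <> ) {
     # "3_21_2010;11:12\\trafic info" -> key "3_21_2010"
     my ($day) = $line =~ /^([0-9_]+);/;
     next unless defined $day;
     push @{ $by_day{$day} }, $line;    # group the lines per day
 }
 # each key of %by_day is now the name of an output file, minus the .log suffix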

Regards,

Rob


Re: perl real life exercise

2010-10-15 Thread Rob Coops
On Fri, Oct 15, 2010 at 9:02 AM, Agnello George agnello.dso...@gmail.comwrote:

 HI

 Is there any site or any book other than learning perl where i would get
 real life exercises on perl scripting.

 Thanks

 --
 Regards
 Agnello D'souza


Hi Angello,

What is your real life? I mean, there are great books about graphics with
perl and a few about algorithms; I have seen a couple about bioinformatics,
web development and AJAX, and I am sure there is a lot more out there for
specific fields of real life application.
First of all you will need to figure out what it is you want to do with perl
in real life; then I am sure you will find there are a few books on that
particular subject.

Personally I found the best way to get real life experience is simply to
find a challenge in real life and try to use perl to overcome it. Once you
have sorted that out, find a new one and solve it; as you go along you will
find much better ways to solve the challenges you struggled with originally.
Of course, finding modules on CPAN as suggested before and looking at how
they deal with a real life problem can be very interesting, but it can also
be very confusing for a beginner. These modules have often been written by
people with many years of experience, and especially the better known and
more widely used ones have been refined and improved over a period of
several years, so don't be discouraged if you end up scratching your head
for a while trying to make sense of these things.

Online tutorials usually range from bad to really bad and often do not
bother with fundamental things like strict and warnings, let alone, for
webpage examples, using taint mode or attempting even a very basic
protection against SQL injection and other simplistic attacks. So if you use
them at all, only look at the initial challenge and the end result, but
please do not copy the code as it is usually just horrible.
There are a few good sites though: http://perl.com is an obvious location
for very good articles about perl and its uses in real life. Another fun
location is http://perlmonks.com where you can find people showing their
solutions to real life problems or asking the community for help overcoming
them; the site is frequented by some of the smartest people in the perl
world, so it certainly is a good place to ask questions and to try and
answer those of others.

I hope that helps a bit, just keep in mind coding perl is above all about
having fun.

Regards,

Rob


Re: Out of Memory!

2010-10-13 Thread Rob Coops
On Wed, Oct 13, 2010 at 8:42 AM, Panda-X exilepa...@gmail.com wrote:

 Hi List,

 My script is running on WinXP, Dos shell, which is a server program.

 It works fine, unless it will pops-up a Out of Memory! and stop running
 per each few days!

 Could someone tell me this error message is coming from Perl ? or from the
 Dos shell ?

 Thanks!
 ex


Good question. Most likely a program on your system is eating memory and not
letting go; over time you run out of memory and you get this message. The
problem might be your script or, well, anything else on your system that is
running at the same time.
Even though I do not doubt your programming skills I would suggest a simple
test: don't run your script for a few days and see if you get the same
error. If you do, you know it is not caused by your script; if you don't get
the error, then your script is the problem and you have some fun debugging
to do.

Normally perl is quite good at cleaning up unused resources, memory or
otherwise, but as a developer you can make it very hard for perl to keep the
memory clean. You might be building an array, a hash or some other data
structure that never stops growing, or copying data from A to B to C without
ever really releasing any of it, etc. Especially scripts that are supposed
to run constantly need you as a developer to pay special attention to
keeping the memory footprint within reason...
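
As a toy illustration of the pattern (the loop body is made up, but the
shape is what matters): data keeps being pushed into a hash, and without the
delete at the end the structure grows for as long as the script runs:

 use strict;
 use warnings;

 my %queue;
 for my $round ( 1 .. 1_000 ) {
     my $user = "usr" . ( $round % 10 );
     push @{ $queue{$user} }, "message $round";   # data keeps being added...

     # ... hand the messages over to whoever needs them here ...

     delete $queue{$user};   # ...so release it again once it has been handled
 }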

It might be a good idea to post the script to this list and ask people to
help you out; often a mistake like this is not hard to spot and it usually
takes only a few minor changes to help perl keep the memory footprint nice
and small.

I'm sure there are debug options in perl that can help you with this as
well. I have never ended up using them, as the few scripts I did write that
had to live forever didn't cause me any memory problems (a few other,
non-memory related ones though), so I have no idea how to debug stuff like
that with perl.

Regards,

Rob


Re: Out of Memory!

2010-10-13 Thread Rob Coops
On Wed, Oct 13, 2010 at 10:41 AM, Panda-X exilepa...@gmail.com wrote:



 2010/10/13 Rob Coops rco...@gmail.com



 On Wed, Oct 13, 2010 at 8:42 AM, Panda-X exilepa...@gmail.com wrote:

 Hi List,

 My script is running on WinXP, Dos shell, which is a server program.

 It works fine, unless it will pops-up a Out of Memory! and stop running
 per each few days!

 Could someone tell me this error message is coming from Perl ? or from
 the
 Dos shell ?

 Thanks!
 ex


 Good question, most likely a program on your system is eating memory and
 not letting go. Over time you will run out of memory and you will get this
 message. The problem might be your script or well, anything else on your
 system that is running at the same time.
 Even though I do not doubt your programming skills I would suggest a
 simple test. Don't run your script for a few days and see if you get the
 same error. If so you are sure that it is not caused by your script, if you
 don't get the error then your script is the problem and you have some fun
 debugging to do.

 Normally perl is quite good in the cleaning up of unused resources memory
 or otherwise, but as a developer you can make it very hard for perl to keep
 the memory clean. You might be building an array, a hash or some other data
 element that never stops growing, or that you copy data from A to B to C and
 never really release any of them etc. Specially scripts that are supposed to
 run constantly will need you as a developer to pay special attention to
 keeping the memory foot print within reason...

 It might be a good idea to post the script to this list and ask people to
 help you out, often a mistake like this is not hard to spot and usually
 takes a few minor changes to help perl keep the memory footprint nice and
 small.

 I'm sure that there are debug options in perl that can help you with this
 as well I have never ended up using them as the few scripts that I did write
 that had to live forever didn't cause me any memory problems (a few other
 non-memory related once though) so I have no idea how to debug stuff like
 that with perl.

 Regards,

 Rob



 Thank you very much!

 The server script I am using is as a chat server, It carries message from
 one user to another.
 When the target user got their messages, the [array] will be deleted from
 the HoH. ( see below )

 I've turned on strict and warnings, as well $| = 1; everything besides
 the struct below, and the daemon object, everything is my $localvars.

 Overall to say, I have a heavy struct like this :

 $MESG = {
  usr1 => {
    [$From, $to, $Time, $Context],
    [$From, $to, $Time, $Context],
    ...
  },
  usr2 => {
    [],
    [],
    ...
  }
 }

 In the script, I am using modules :
 IO::Socket::INET::Daemon;
 Devel::Size qw(total_size);

 Then, I have this sub to check the struct ( an relative size )

 sub DumpSize {
     my $total_sz = 0 ;
     foreach my $usr ( sort keys %$MESG ) {
         my $size = 0 ;
         print "$usr: MESG=";
         print scalar ( @{$MESG->{$usr}} ) ;
         print " , LEN=";
         foreach ( @{$MESG->{$usr}} ) {
             $size += length $_ ;
             $total_sz += length $_ ;
         }
         print "$size , TTZ=$total_sz , MEM=";
         print total_size $MESG;
         print " TRF=";
         print $COUNT;
         print $/;
     }

     print "$/ END $_[0] ";
     print TimeStamp ( time ) ;
     print " ==$/$/";
 }

 This sub will turns out something like this :

  END GET 2010/10/13 16:23:09 ==

 usr1: MESG=2 , LEN=535 , TTZ=535 , MEM=1853 TRF=7223
 usr2: MESG=1 , LEN=175 , TTZ=710 , MEM=1853 TRF=7223
 usr3: MESG=2 , LEN=339 , TTZ=1049 , MEM=1853 TRF=7223

  END GET 2010/10/13 16:23:27 ==

 usr2: MESG=2 , LEN=535 , TTZ=535 , MEM=1275 TRF=7224
 usr3: MESG=1 , LEN=175 , TTZ=710 , MEM=1275 TRF=7224

  END GET 2010/10/13 16:24:43 ==

 usr2: MESG=1 , LEN=171 , TTZ=171 , MEM=2200 TRF=7227
 usr3: MESG=2 , LEN=535 , TTZ=706 , MEM=2200 TRF=7227
 usr5: MESG=2 , LEN=346 , TTZ=1052 , MEM=2200 TRF=7227
 usr6: MESG=1 , LEN=171 , TTZ=1223 , MEM=2200 TRF=7227

 This sub will be called every time a user Send or Get a mesg.
 MESG is how much messages in the inbox of the user
 LEN is the Totel text length for the user,
 TTZ is the total text length in server,
 MEM is the $MESG struct size
 TRF is how many rounds of Get / Send happened.

 From my observation, the Out of Memory error could happen anytime, where
 the MEM size was never over 10K.

 Any clues ?


So let me get this right: you have one structure, $MESG, that stores all
messages for all users for as long as they have not fetched them... So if a
user sends one message your structure will contain one entry with usr1 =>
{ [ ] }, and if this same user sends another message $MESG will contain
usr1 => { [ ], [ ] }, and so on.
I assume that once a message is retrieved it is removed from the $MESG
structure; if not, that would be a very likely source of the memory growth
you are seeing.

Re: Problem with foreach loop

2010-09-27 Thread Rob Coops
On Mon, Sep 27, 2010 at 9:19 AM, HACKER Nora nora.hac...@stgkk.at wrote:

 Hello list,

 Could someone please explain why this test script:

 my @arr1 = qw(one two three);
 my @arr2 = qw(1 2 3);

 foreach my $arr1 ( @arr1 ) {
     print "Arr1: $arr1\n";
     foreach my $arr2 ( @arr2 ) {
         print "Arr2: $arr2\n";
         if ( $arr2 eq '2' ) {
             shift @arr1;
         }
     }
 }

 produces that result:

 oracle:/opt/data/magna/wartung/work/nora ./test.pl
 Arr1: one
 Arr2: 1
 Arr2: 2
 Arr2: 3
 Arr1: three
 Arr2: 1
 Arr2: 2
 Arr2: 3

 whereas I had expected the output to be like this:

 oracle:/opt/data/magna/wartung/work/nora ./test.pl
 Arr1: one
 Arr2: 1
 Arr2: 2
 Arr2: 3
 Arr1: two   # why not?
 Arr2: 1 # why not?
 Arr2: 2 # why not?
 Arr1: three
 Arr2: 1
 Arr2: 2
 Arr2: 3

 Thanks in advance!

 Regards,
 Nora





Ok, for starters: the output you posted is exactly what this script
produces, so nothing is broken. The surprise comes from the shift @arr1
inside the loops.

The outer foreach walks over @arr1 by position. In the first pass it prints
Arr1: one and the inner loop prints Arr2: 1, 2 and 3; when the inner loop
reaches '2' the shift removes 'one' from @arr1, so every remaining element
moves down one position. When the outer loop then advances to position 1 it
finds 'three' sitting there, which is why 'two' is never printed. The inner
loop again prints 1, 2 and 3 (shifting @arr1 one more time), the outer loop
runs out of elements, and the code ends.

The lesson is that you should not add or remove elements of an array while a
foreach loop is iterating over that same array: foreach does not work on a
copy, so any change to the array also changes what the iterator will see
next.

There is nothing weird about what is happening here, even though it
certainly looks confusing at first. I found the best way to get your head
around the more complex loops is to write or draw them out and get an actual
picture of what is happening. If you look at most modelling techniques that
is exactly what they do; it is the best way for a human brain to process
what is going on. ;-)

Anyway, to get close to the result you expected, the main thing is to stop
modifying @arr1 while you loop over it. If you also do not want the '3'
printed, simply skip it in the inner loop instead of touching the array:

foreach my $arr1 ( @arr1 ) {
    print "Arr1: $arr1\n";
    foreach my $arr2 ( @arr2 ) {
        next if $arr2 eq '3';
        print "Arr2: $arr2\n";
    }
}

That will result in:

Arr1: one
Arr2: 1
Arr2: 2
Arr1: two
Arr2: 1
Arr2: 2
Arr1: three
Arr2: 1
Arr2: 2

Regards,

Rob


Re: Can't call method get_value....

2010-08-05 Thread Rob Coops
On Thu, Aug 5, 2010 at 9:52 AM, sync jian...@gmail.com wrote:

 Hi, guys:

 i have a perl script that supposed to add users to  ldap . when i run the
 script it get:

 Can't call method get_value on an undefined value at ./add_user.pl



 Any help will be appreciated?


 The  following message is my  perl script message   and my perl version is
 5.8.8 on CentOS 5.4 x86_64.

 t...@xxx ~: cat add_user.pl

 --
 #!/usr/bin/perl

 use strict;

 use Net::LDAP;


 die "Usage is adduser.pl [username] [realname]\n" if length(@ARGV) != 1;

 my $username = $ARGV[0];
 my $realname = $ARGV[1];

 my $ldap = Net::LDAP->new('localhost');
 my $mesg = $ldap->bind;
 my $mesg = $ldap->search(
  base => "ou=People,dc=example,dc=com",
  filter => "(uid=$username)",
  );
 $mesg->code && die $mesg->error;


 my $searchResults = $mesg->count;
 die "Error! Username already exists!" unless $searchResults == 0;

 #print $searchResults;

 $mesg = $ldap->search(
  base => "ou=People,dc=example,dc=com",
  attrs => ['uidNumber'],
  );

 my @entries = $mesg->sorted('uidNumber');
 my $entry = pop @entries;

 my $newuid = $entry->get_value( 'uidNumber');
 $newuid++;

 my $result = $ldap->add ("uid=$username,ou=People,dc=example,dc=com",
  attr => [ 'cn'  => $realname,
  'uid' => $username,
  'uidNumber'   => $newuid,
  'mail'=> '$usern...@example.com',
  'homeDirectory'   => '/home/$username',
  'objectclass' => ['person',
 'inetOrgPerson', 'posixAccount']
  ]

  );


 $mesg = $ldap->unbind;



Ok, so the line causing this error is: my $newuid =
$entry->get_value('uidNumber');
which is the only get_value call you make. This must mean that $entry is
simply undefined, or at least does not contain what you expected it to
contain.

What you are doing is first a search for anything with a uidNumber under
the ou=People,dc=example,dc=com base. Rather than checking for errors or
even verifying that anything was actually returned, you instantly call:
my @entries = $mesg->sorted('uidNumber');

For all we know @entries might be completely empty.

I would add some checking here and there: first check that there were no
errors after your search, then make sure that at least some results were
returned. If that is the case, sort your search results, take one entry and
dump it to STDOUT or a log file (for debugging such a little bit of code
STDOUT should be just fine). If that looks like the entry you expected to
see, then try to call get_value and I can promise you it will work just
fine.
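
Something along these lines, reusing the names from your script (a sketch
only; the explicit filter is a guess at what you intended, and the die
messages are just examples):

 use strict;
 use warnings;
 use Net::LDAP;

 my $ldap = Net::LDAP->new('localhost') or die "Cannot connect: $@";
 my $mesg = $ldap->bind;
 die "Bind failed: " . $mesg->error if $mesg->code;

 $mesg = $ldap->search(
     base   => "ou=People,dc=example,dc=com",
     filter => "(uidNumber=*)",      # your second search had no filter at all
     attrs  => ['uidNumber'],
 );
 die "Search failed: " . $mesg->error      if $mesg->code;
 die "No entries with a uidNumber found\n" if $mesg->count == 0;

 my @entries = $mesg->sorted('uidNumber');
 my $entry   = pop @entries;
 $entry->dump;                       # show the whole entry on STDOUT first

 my $newuid = $entry->get_value('uidNumber');
 print "Highest existing uidNumber: $newuid\n";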

Regards,

Rob


Re: add newline

2010-08-03 Thread Rob Coops
On Tue, Aug 3, 2010 at 12:26 PM, Sharan Basappa sharan.basa...@gmail.comwrote:

 In my program, I am building a text file f that also contains newlines
 irst into an array.
 I push every line to the array, but how do I add new lines to this text?

 Regards,
 Sharan

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Sharan,

There are several options (it is perl after all :-)

First, you could add the newline character at the moment you push a value
onto the array:
 push (@myarray, $value . "\n");

Second, you could of course add the line feeds as separate array values:
 push (@myarray, $value);
 push (@myarray, "\n");

Third, you could add the linefeeds when you print the values from the array:
 print join("\n", @myarray);
or
 foreach my $value ( @myarray ) {
     print $value . "\n";
     # or:
     # print $value;
     # print "\n";
 }

There are a few more options, but these are the most common ones you will
see around the perl world. It really depends on what you want to do with
your array. If you only use the array to print the output I would advise the
join option, as that is the most memory efficient and, at least by feel, the
fastest way of working (though I have not verified this); the second option
is just silly and likely quite slow and memory inefficient.

If you are using the array as a form of log file you might want to have a
look at Log4perl
(http://search.cpan.org/~mschilli/Log-Log4perl-1.29/lib/Log/Log4perl.pm) on
CPAN, which is a much more industrial strength solution than reinventing
the wheel is likely to be.
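
If that route appeals, getting started with it is tiny (a minimal sketch;
the log file name is a placeholder):

 use strict;
 use warnings;
 use Log::Log4perl qw(:easy);

 # Everything from DEBUG level up goes to a file instead of an ever growing array.
 Log::Log4perl->easy_init( { level => $DEBUG, file => '>>mylog.txt' } );

 INFO('starting the run');
 DEBUG('detail that is only interesting when things go wrong');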

Regards,

Rob


Re: Extract data from BEncoded .torrent files

2010-07-26 Thread Rob Coops
On Sun, Jul 25, 2010 at 9:24 PM, NisargaYoga steven.harn...@gmail.comwrote:

 Hi - Total newbie and my first post so please tell me if I'm doing
 anything wrong.

 My first real Perl project - looking for some conceptual guidance.

 The project involves processing of BEncoded .torrent files downloaded
 from an rtorrent seedbox.

 Bottom line is I want to cycle through about 150 files in a directory
 and extract two data pieces from each file: the name of the torrent
 file and another statistic. The data could be saved in .csv files.

 Specifically the object is to extract on a weekly basis the number of
 bytes that the seedbox has uploaded for each torrent (about 150 active
 torrents), in order to detect torrents that have minimal activity, so
 they can be deleted later (manually).

 All work will be done on either my PC or my Linux box (not on the
 hosted seedbox server).

 In rtorrent, active torrents are stored at /var/www2/
 rtorrent2/.rtsession/

 I downloaded all active .torrent files to my desktop PC via FTP.

 I can see the contents of each torrent using BEncode Editor. Besides
 the normal .torrent information, a torrent taken from the
 .rtsession directory has an additional node for data specific to its
 status in the seedbox.

 I don't see a way to attaching a sample file, but I saved one in this
 directory:

 http://www.nisargayoga.org/hidden/ the file name is my-
 downloaded-active.torrent

 When opened with BEncode Editor (before the nodes are expanded) the
 bottom node is labeled

 rtorrent (d)[23] has it [23] elements, I don't know what the
 (d) means

 When this node is expanded, the second node from the bottom is

 total_uploaded (i) =  212151318

  this is the statistic I need, plus the name, which is in an earlier
 node.

 The node with the name is info (d)[6] -- name (b)[77]

 So to recap, the project is to cycle through all BEncoded .torrent
 files in a directory, extract 2 pieces of data from each and write the
 output to a .csv file, one row per torrent, 2 data items per row.

 There must be a half-dozen ways to do this but you could help me by
 pointing me in the right direction. All suggestions welcome.

 Thanks very much.


 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi,

I have never attempted to work with BEncoded files, but a simple search on
CPAN shows that others have (quite a few of them actually). I would advise
you to have a look at their work and, if at all possible, reuse the modules
they have made available for this type of work. After all, it makes very
little sense to reinvent the wheel, doesn't it?

So go to http://search.cpan.org and look for BEncode.
The first module I found looks very promising:
http://search.cpan.org/~aristotle/Bencode-1.4/lib/Bencode.pm - it encodes
and decodes to and from Perl data structures by the look of it (I just
looked at the first example). I think that might just be what you are
looking for. ;-)
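
To give you an idea of where that would take you, a minimal sketch
(assuming the Bencode module linked above; the two hash keys are taken from
the nodes you described in BEncode Editor, so double check them against a
real file):

 use strict;
 use warnings;
 use Bencode qw( bdecode );

 my $file = shift @ARGV or die "usage: $0 file.torrent\n";

 # Read the .torrent file in binary mode and decode it into a perl structure.
 open my $fh, '<:raw', $file or die "Cannot open $file: $!";
 my $raw = do { local $/; <$fh> };
 close $fh;

 my $torrent  = bdecode($raw);
 my $name     = $torrent->{info}{name};
 my $uploaded = $torrent->{rtorrent}{total_uploaded};

 print "$name,$uploaded\n";    # one CSV row per torrent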

Rob


Re: hash arrays

2010-07-26 Thread Rob Coops
On Mon, Jul 26, 2010 at 2:09 PM, Sharan Basappa sharan.basa...@gmail.comwrote:

 Folks,

 I am reusing a code for some enhancements. According to my
 understanding, it is a record with
 some unique string as key and then hash array as the value.

 I iterate through the array and print as follows:

 foreach my $key (keys %{$abc})
 {
  print "$key ${$abc}{$key} \n";
 }

 I get values such like:
 val_0001 HASH(0x186c0060)
 val_0002 HASH(0x187ea490)
 val_0003 HASH(0x18655bc0)
 val_0004 HASH(0x1880fc60)

 Can someone tell me how to get the actual value instead of HASH* as above?

 Regards,
 Sharan

 --
 To unsubscribe, e-mail: beginners-unsubscr...@perl.org
 For additional commands, e-mail: beginners-h...@perl.org
 http://learn.perl.org/



Hi Sharan,

When you do print "$key ${$abc}{$key}" you get output like: val_0001
HASH(0x186c0060), which basically means that the value associated with
val_0001 is itself a hash reference.
So if we do something along the lines of:

foreach my $key (keys %{$abc})
{
    my %hash = %{ $abc->{$key} };   # dereference the inner hash
    foreach my $key2 ( keys %hash ) {
        print "$key\t$key2\t$hash{$key2}\n";
    }
}

You should get something like this:
val_0001   key_0001  value_0001
val_0001   key_0002  value_0002
val_0001   key_0003  value_0003
val_0002   key_0001  value_0001
etc

Of course you should decide whether this is really needed. If you are only
interested in finding out what is inside $abc for debugging purposes, for
instance, you might just want to make your life easy and simply use
Data::Dumper to push the content to the screen.
For all you know that HASH(0x186c0060) might contain a value that is itself
an array, which contains a list of hashes with arrays as values, etc... For
just seeing what is in there it is often better to simply use Data::Dumper,
as it has been written to deal with all those possibilities so you don't
have to worry about them.
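
For the debugging case that really is all it takes (assuming $abc is the
reference from your snippet):

 use Data::Dumper;
 print Dumper($abc);    # shows whatever hides behind the HASH(0x...) references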

Regards,

Rob


Re: Calling Exit from within a subroutine

2010-07-21 Thread Rob Coops
On Wed, Jul 21, 2010 at 5:00 AM, Babale Fongo bfo...@googlemail.com wrote:

 I call several subroutines within my program.  The program is design in a
 way that I don't expect my subroutines to return to the caller but to exit
 once they have executed.

 At the moment I don't have exit or return at the end of my subroutines.



 mysub(); # will the sub return here? If yes what happen if I don't capture
 it and act?

 

 .

 ...

 reset of the code



 sub mysub{

   do this

  don that

  ...

 # I don't call exit or return

 }





 I understand a sub will normally implicitly return the last thing it
 evaluation, but what happens if I don't act on what has been returned to
 the
 caller? Will be programme stall because it doesn't know what to do next or
 will it exit?



  Mimi

 Hi Mimi,

Your program will happily continue doing what you want; it doesn't care that
you don't deal with the return value.

So if you have code like this:

do some stuff;
do some more;
call_a_sub();
do some other stuff;


Your program will execute the first two lines, then do whatever the sub
call_a_sub() wants it to do, and return to do some other stuff without you
having to worry about a return value. Keep in mind though that $_ has likely
been altered by your sub. ;-)

There are sometimes good reasons not to bother with return values, but in
most cases I would still say it is a sign of sloppy programming, for the
simple reason that you expect a sub to do something, whatever it is.
If that something fails you should, even if your code doesn't care about the
result, inform the end user of the fact... You would not be the first
programmer to write a program that seems to do, or fail to do, one thing
completely at random without any apparent reason... A simple "Failed to
execute something" message written to STDERR, if nothing else, is one of
those things that makes the end user's life a million times easier.
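
As a minimal sketch of what that looks like in practice (call_a_sub() here
is just the made-up example from above, pretending to fail):

 use strict;
 use warnings;

 sub call_a_sub {
     my $success = 0;    # imagine the real work happening here
     return $success;    # return something true on success, false on failure
 }

 call_a_sub()
     or warn "call_a_sub failed, continuing without it\n";   # ends up on STDERR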

Rob


Re: Compare file/dir size

2010-07-20 Thread Rob Coops
On Tue, Jul 20, 2010 at 1:51 PM, HACKER Nora nora.hac...@stgkk.at wrote:

 Hi,

 My aim is to compare two directories on different servers, whereas one
 is the FTP'ed copy of the other, by checking whether both directories
 are of exactly the same size. I tried to accomplish that with
 Filesys::DiskUsage, but unfortunately the two dirs are always of
 different size because the .. differs. Also, I couldn't find a way to
 restrict the 'du' only to certain files In the dir (e.g. du $dir/*.Z),
 obviously it works with dirs only...!?

 Any hints appreciated :-)

 Kind regards,
 Nora




Hi Nora,

My solution to this would be to simply open each directory and read the
files, then compare the sizes of the individual files (it is easy enough to
filter on extensions or whatever else you like).
Opening a directory is as simple as opening a file:

use File::stat;

opendir( my $dh, $dir ) or die $!;   # $dir holds the directory you want to scan
while ( defined( my $file = readdir $dh ) ) {
    # filter out all files that do not start with backup and end in .bak
    next unless $file =~ /^backup.*\.bak$/;
    my $filesize = -s "$dir/$file";
    # or, if you are of the OO persuasion:
    # my $filesize = stat("$dir/$file")->size;
}
closedir $dh;

Do the same with the other directory and compare the results. I would use a
hash for each directory with the filename as key and the size as value; then
it is easy to compare the two and find all differences between the two
directories. ;-)
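
Wrapping the snippet above in a little sub makes the comparison itself
almost trivial. A sketch, assuming both directories are reachable from the
machine running the script, that the *.Z filter is what you want, and that
the two paths are placeholders you would replace:

 use strict;
 use warnings;

 sub dir_sizes {
     my ($dir) = @_;
     my %size;
     opendir( my $dh, $dir ) or die "Cannot open $dir: $!";
     while ( defined( my $file = readdir $dh ) ) {
         next unless $file =~ /\.Z$/;          # only the files you care about
         $size{$file} = -s "$dir/$file";
     }
     closedir $dh;
     return %size;
 }

 my %local  = dir_sizes('/path/to/local/dir');
 my %remote = dir_sizes('/path/to/remote/copy');

 foreach my $file ( sort keys %local ) {
     if ( !exists $remote{$file} ) {
         print "$file is missing on the other side\n";
     }
     elsif ( $local{$file} != $remote{$file} ) {
         print "$file differs: $local{$file} vs $remote{$file} bytes\n";
     }
 }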

Regards,

Rob


Re: Perl stat script ... ???

2010-07-18 Thread Rob Coops
On Mon, Jul 19, 2010 at 7:50 AM, newbie01 perl newbie01.p...@gmail.comwrote:

 Hi all,

 Does anyone know if someone had written a Perl stat script that mimics the
 Linux' stat command?

 I want to be able to use the stat command on Solaris, unfortunately, I just
 found out that that command is not available on Solaris and HP-UX so am
 thinking a Perl script should be the way to go then instead so I can have
 it
 on any UNIX's flavours.

 Any suggestion will be very much appreciated.

 Thanks in advance.


I would say have a go at finding a module like this on CPAN
(http://search.cpan.org), as I'm sure others have run into this problem
before. I have worked with both Solaris and HP-UX for a combined total of
nearly 10 years now and I have never really needed this; there are
alternative HP-UX and Solaris commands that offer very similar
functionality. I have to agree that stat is a nice tool, but don't fall into
the typical trap of Windows users switching to *nix: you will have to learn
to work with new tools and applications on the new platform, that is just
the way life works ;-)

Also, if you have install rights, I'm quite sure you will be able to find
stat for HP-UX/Solaris in a third party repository, as it is, as I said
before, quite a handy tool.
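
For what it is worth, here is a rough approximation of the stat command
built on perl's own stat() builtin, which behaves the same on Solaris, HP-UX
and Linux (the output format is only an approximation, not a drop-in
replacement):

 use strict;
 use warnings;
 use POSIX qw(strftime);

 my $file = shift @ARGV or die "usage: $0 <file>\n";
 my @st = stat($file) or die "Cannot stat $file: $!";

 # stat() returns: dev, ino, mode, nlink, uid, gid, rdev, size,
 #                 atime, mtime, ctime, blksize, blocks
 printf "  File: %s\n", $file;
 printf "  Size: %d  Blocks: %d  IO Block: %d\n", $st[7], $st[12], $st[11];
 printf "Device: %d  Inode: %d  Links: %d\n", $st[0], $st[1], $st[3];
 printf "Access: %04o  Uid: %d  Gid: %d\n", $st[2] & 07777, $st[4], $st[5];
 printf "Access: %s\n", strftime( '%Y-%m-%d %H:%M:%S', localtime $st[8] );
 printf "Modify: %s\n", strftime( '%Y-%m-%d %H:%M:%S', localtime $st[9] );
 printf "Change: %s\n", strftime( '%Y-%m-%d %H:%M:%S', localtime $st[10] );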

