[on the lighter side] Jim Zemlin (Linux Foundation) presentation

2008-10-22 Thread Greg Rundlett
The Linux Foundation just wrapped up their first End User Summit in
New York (where all the big Wall Street firms use Linux, mind you).
Jim Zemlin, the Executive Director of the Linux Foundation, made this
presentation to kick things off... (nice sense of humor)

https://www.linuxfoundation.org/events/files/eus08/jim_zemlin_eus08.pdf


Greg
-- 
skype/aim/irc freephile
home office 978-225-8302
[EMAIL PROTECTED]
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc. (was: Upgrade guidance)

2008-10-22 Thread Bill Ricker
On Tue, Oct 21, 2008 at 6:51 PM, Ben Scott [EMAIL PROTECTED] wrote:
 Perl is poor at SMP (gah! perl threads!).
  I've never had to worry about Perl MP.  Sounds like I should be glad.  :-)

MP in any language is tricky, but sometimes it appears easy and bites
you later. Perl has tried a couple of times to make it safe and easy, but
MP intrinsically isn't, so the original Perl Threads (5.005) design
was abandoned. The heavier-weight interpreter threads (ithreads), used
for fork() emulation on MS Windows since 5.6, have been available as a
config choice when building Perl for normal platforms since 5.8. IBM
ships /bin/perl with ithreads enabled on AIX. I am pleasantly
surprised none of my coworkers have hurt themselves with
race conditions.
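
For the curious, a minimal sketch of what ithreads code looks like (my own
toy example, assuming a perl built with ithreads); the explicit sharing and
locking is exactly where people hurt themselves:

    #!/usr/bin/perl
    # Toy ithreads sketch (assumes perl -V:useithreads reports 'define').
    # Each thread gets its own copy of the interpreter's data; nothing is
    # shared unless explicitly marked shared and guarded with lock().
    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $total : shared = 0;

    my @workers = map {
        threads->create(sub {
            my ($n) = @_;
            my $sum = 0;
            $sum += $_ for 1 .. $n;           # private, per-thread work
            { lock($total); $total += $sum; } # the only shared update
            return $sum;
        }, 1000 * $_);
    } 1 .. 4;

    $_->join for @workers;                    # wait for all four threads
    print "total = $total\n";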

 Shell pipes are a simple, coarse MP primitive that *is* safe, but at
the cost of spawning heavyweight processes and flowing data through IPC.

A shell pipe with sort in it won't be doing more than loading the
executable / spawning the process in parallel, since the sort won't
write until it's done reading.

Shell scripts with bunches of external commands in the inner loop will
speed up when rewritten in Perl since, even with sticky execs, each
external command's execution costs a process start.

Perl scripts are much faster if one uses the built-in bulk processing
features (the Lisp or APL 'map-reduce' dialect of Perl) rather than
writing the same thing in Perl loops and branches (the C dialect). But
to my knowledge it won't make use of SIMD/SMP (although that has been
added to one of the Perl 6 prototypes).
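
As a toy illustration (mine, not from the thread), the same sum written in
both dialects; the bulk form pushes the looping down into perl's C internals:

    #!/usr/bin/perl
    # Toy comparison of the 'map' dialect vs. the C-style loop dialect.
    # Both compute the same sum of squares.
    use strict;
    use warnings;
    use List::Util qw(sum);

    my @n = 1 .. 100_000;

    # Bulk / "map-reduce" dialect: the iteration happens inside perl's guts
    my $bulk = sum map { $_ * $_ } @n;

    # Explicit C-dialect loop: every iteration is interpreted Perl
    my $loop = 0;
    for my $i (@n) {
        $loop += $i * $i;
    }

    print "bulk=$bulk loop=$loop\n";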



-- 
Bill
[EMAIL PROTECTED] [EMAIL PROTECTED]
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Jerry Feldman

On 10/21/2008 04:02 PM, Ben Scott wrote:

According to everything I've ever read, Linux ignores the sticky bit
on executables.  Wikipedia has a good summary:

http://en.wikipedia.org/wiki/Sticky_bit
  
The original reason for the sticky bit is because some Unix commands are 
so frequently used and small that keeping them in memory significantly 
improves performance, especially in a multi-user system.


I believe from my previous research on Linux virtual memory that 
commands will remain in virtual memory long enough to where the sticky 
bit is not needed. First of all, the text (instructions and read-only 
data) is never copied to swap, as it is mapped into VM from its 
physical location. Older Unix systems used to copy the program to swap; 
then a newer technique was to load it into memory and mark it as dirty.  
Putting this in terms of the shell and Perl, because the shell (bash) is 
constantly being used, it is probably always resident in physical memory 
and never paged out. Perl, on the other hand, may need to be loaded, 
causing a Perl script to appear to be slower than a shell script. 
Additionally, Perl compiles the script first. But comparing a Perl 
script to a shell script is somewhat like comparing apples and PCs. Perl 
incorporates the features of SED and AWK. So, if you were to create a 
shell script that used these features, you might find that Perl will 
outperform BASH. Another way that you can look at performance is to use 
a C shell script on a system where the C Shell is not used as a login 
shell, so it is generally not resident.
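
To make the sed/awk point concrete, a hypothetical example of the kind of
filtering and field splitting a shell pipeline would farm out to grep, awk,
and sort, done inside a single perl process:

    #!/usr/bin/perl
    # Hypothetical log filter: grep-like matching, awk-like field
    # splitting, and sorting, all in one process instead of a pipeline.
    use strict;
    use warnings;

    my %hosts;
    while (<>) {                 # read stdin or files named on the command line
        next unless /ERROR/;     # the grep step
        my @f = split;           # the awk step: split on whitespace
        $hosts{ $f[2] }++;       # count occurrences of the third field
    }
    print "$_\t$hosts{$_}\n" for sort keys %hosts;   # the sort step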


--
Jerry Feldman [EMAIL PROTECTED]
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846




signature.asc
Description: OpenPGP digital signature
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Ethtool

2008-10-22 Thread bruce . labitt
CentOS5.2, ethtool 5, Intel PCI 1000Mbit ethernet card - running e1000 
driver.

The connection from my pc to the local gbit switch is autonegotiating to 
10Mbit :(
I tried using ethtool to change the connection speed and it seems either I 
am not doing it right (most likely), or something is wrong.

If I attach the NIC directly to my remote server it negotiates to 
1000Mbit, so I think the NIC is ok.  If I plug the remote server directly 
to the switch, it negotiates to 1000Mbit, so I would think the switch is 
ok. 

So how does one set the speed in ethtool?

# ethtool -s eth0 speed 1000 duplex full   ?

How come this does not work?  Wrong syntax?  Something else?  I've tried a 
bunch of cables, including ones that did support 1000Mbit.

-Bruce


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc. (was: Upgrade guidance)

2008-10-22 Thread Ben Scott
On Wed, Oct 22, 2008 at 9:39 AM, Bill Ricker [EMAIL PROTECTED] wrote:
 MP in any language is tricky ...

  I've done very little MP work, but that has been my impression.
Fundamental to the thinking of most programmers (myself included) is
that variables won't change unless the code writes to them.

 Perl has tried a couple times to make it safe and easy, but
 MP intrinsically isn't ...

  Ah, much like security.  :)

 A shell pipe with sort in it won't be doing more than loading the
 executable / spawning the process in parallel, since the sort won't
 write until it's done reading.

  Right.  The case-in-point is, perhaps, an unusual situation.  There
aren't many forks involved, just a handful of programs, and the two
big ones can (in theory) run concurrently on the same input data set.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


[GNHLUG] PySIG, October 2008 -- Sphinx, unittest, more

2008-10-22 Thread Bill Sconce
PySIG    Manchester, NH    23 October 2008

 Sphinx, presented by Arc Riley
Kent's Korner: unittest



PySIG -- New Hampshire Python Special Interest Group
Amoskeag Business Incubator, Manchester, NH
23 October 2008 (4th Thursday)   7:00PM (Beginner's Q&A at 6:30PM)

The monthly meeting of PySIG, the NH Python Special Interest Group,
takes place on the fourth Thursday of the month, starting at 7:00 PM.

A beginners' session precedes, at 6:30 PM.  (Bring a Python question!)
Favorite-gotcha contest; a special Kent's Korner; Janet's cookies (JBD
safe!); and more this month.


 Sphinx, presented by Arc Riley
   A ReStructuredText-based markup framework
   
Sphinx is a tool that makes it easy to create intelligent and
beautiful documentation for Python projects, written by Georg
Brandl and licensed under the BSD license.

It was originally created to translate the new Python documentation,
but has now been cleaned up in the hope that it will be useful to
many other projects.

Although it is still under constant development, the following
features are already present, work fine and can be seen 'in 
action' in the Python docs:

* Output formats: HTML (including Windows HTML Help) and
  LaTeX, for printable PDF versions
* Extensive cross-references: semantic markup and automatic
  links for functions, classes, glossary terms and similar
  pieces of information
* Hierarchical structure: easy definition of a document
  tree, with automatic links to siblings, parents and children
* Automatic indices: general index as well as a module index
* Code handling: automatic highlighting
* Extensions: automatic testing of code snippets, inclusion
  of docstrings from Python modules, and more

Sphinx uses reStructuredText as its markup language, and many of
its strengths come from the power and straightforwardness of 
reStructuredText...
   
The major theme of Python 2.6 is preparing the migration path to 
Python 3.0, a major redesign of the language. Whenever possible, 
Python 2.6 incorporates new features and syntax from 3.0 while
remaining compatible with existing code by not removing older 
features or syntax. When it's not possible to do that, Python 2.6
tries to do what it can, adding compatibility functions in a 
future_builtins module and a -3 switch to warn about usages that
will become unsupported in 3.0.

http://sphinx.pocoo.org/
   

Plus:
---
o Our usual roundtable of introductions, happenings, announcements

o Data Types VII
- (Running PySIG joke  :)

o Gotcha contest
- Got a favorite gotcha?  Bring it and share...

And of course, milk & cookies.
  This month: Janet's 1) glazed maple pecan (special ../me); 
  2) molasses ginger; 3) assorted meringues--if they don't fail
  
---
6:30   Beginners' Q&A
7:00   Welcome, Announcements - Bill & Ted & Alex
7:10   Cookies & Milk - Janet & Alex  (thanks, Alex!)
7:15   Favorite-gotcha contest
7:20   Impromptu lightning talk(s) - anyone

7:30   Kent's Korner -- unittest: Unit Testing in Python

8:00   Sphinx, ReST-based markup framework
 presented by Arc Riley

8:45   Open discussion; plans for next time
9:00~  Adjourn

___
About PySIG:
PySIG meetings are typically 10-20 people, around a large table
equipped with a projector and Internet hookups (wired and
wireless).  We encourage laptops and a hands-on seminar style.
The main meeting starts at 7 PM; officially we finish circa 9 PM.  
Everyone is welcome.  (Membership is anyone who has an interest
in the Python programming language, whether on Microsoft systems
or Linux or OS X; or cell phones, mainframes, or space stations.
We have everyone from object-oriented gurus to recovering COBOL
programmers.)  Tell your friends!

Beginners' session:
The half hour before the formal meeting (i.e., starting at 6:30PM)
we have a beginners' session.  Any Python question is welcome -- 
whoever asks the first question gets the half hour!  Questions are
equally welcome by mail beforehand (in which case we can announce
them) or at the meeting.  (As are all Python questions, anytime.)

Mailing list:
http://www.dlslug.org/mailman/listinfo/python-talk

About Python:
Python is a dynamic object-oriented programming language that
can be used for many kinds of software development. It offers 
strong support for integration with other 

Re: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Ben Scott
On Wed, Oct 22, 2008 at 10:12 AM, Jerry Feldman [EMAIL PROTECTED] wrote:
 The original reason for the sticky bit is because some Unix commands are so
 frequently used and small that keeping them in memory significantly improves
 performance, especially in a multi-user system.

  Plus (from what I've been told), older Unix systems didn't always
have very sophisticated caching subsystems, and/or enough RAM to make
use of such.  So the sticky bit was a way to manually tell a system to
never unload an executable image.  Linux effectively makes that
decision automatically.

 I believe from my previous research on Linux virtual memory that commands
 will remain in virtual memory long enough to where the sticky bit is not
 needed.

  That correlates with what I read (long ago): Linux is entirely
demand-paged, and does not implicitly commit swap space.  The
executable file is mapped into virtual memory first.  Then the process
starts, and immediately triggers a page fault to load the code.  Pages
are not allocated in the swap file until the system needs to free up
RAM for more stuff.  That's one of the reasons the kernel
out-of-memory algorithms are of such interest.  (Again, I'm just
repeating received wisdom; this could be wrong.)

 Perl, on the other hand, may need to be loaded, causing a Perl script
 to appear to be slower than a shell script.

  I did several test runs in succession.  The goal is to get
everything cached in RAM for these trials.  (Artificial, to be sure,
but it's harder to normalize the effects of disk I/O.)  The first run
in a series was typically different.  I threw those results out.  The
results I have been reporting are the typical case, after everything
is cached.  It seems to be stable after that first run.  (Within +/-
0.1 seconds real.)

 Additionally, Perl compiles the script first.

  Hmmm, that's an interesting point.  Still, once compiled, Perl
scripts are supposed to run faster.  bash has to parse and interpret
everything as it goes.  In this case, the shell variant doesn't have
any looping, so I would expect it to be a wash.

  If there was any kind of loop in the shell script, I would expect
the Perl variant to be much faster at the task.

  Hmmm, I suppose another experiment would be to turn the shell
variant into a Perl script, without using any Perl constructs beyond
what the shell variant does.  I'll try that next.
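
(If it helps picture that experiment: a hypothetical shell-style Perl
script, with made-up commands and filenames, that does nothing but run the
same external programs the shell version would:)

    #!/usr/bin/perl
    # Hypothetical "shell-style" Perl: no hashes, no map/grep, just the
    # same external commands a shell script would run, so any timing
    # difference is interpreter overhead rather than the work itself.
    use strict;
    use warnings;

    system('sort -u input.txt > sorted.txt') == 0
        or die "sort failed: $?\n";
    my $count = `grep -c PATTERN sorted.txt`;   # backticks, like $(...) in sh
    chomp $count;
    print "$count matching lines\n";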

 Another way that you can look at performance is to use a C shell script on
 a system where the C Shell is not used as a login shell, so it is
 generally not resident.

  csh/tcsh would be resident after the first run.  :-)

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: shell, perl, performance, parallelism, profiling, etc. (was: Upgrade guidance)

2008-10-22 Thread Michael Pelletier
I used multithreading to very good effect in a project I worked on a while
back - it was almost a textbook example, I reckon.

The code needed to scan every inode in each filesystem to handle UID and GID
changes, but when dealing with an NFS server, where the round-trip network
latency is added to each filesystem operation, the speed of processing was
unacceptably slow.

So, in the main thread I had the find-like code (FTS, as I recall, from
BSD) walking the directory tree, and for each filesystem element it
identified, I spawned a new thread which ran the stat() on the file, looked
up the UID & GID in the change table to check if a change was needed, then
performed the change if necessary.

This way, all the main thread had to wait on was NFS calls to walk the
filesystem by reading directory inodes, not any NFS stat or update calls.
Needless to say, the NFS client and server needed to be tuned as well to
allow for a large number of simultaneous pending client calls for best
performance.

Properly tuned, the performance was about an order of magnitude better for
NFS-mounted filesystems.

It was a very useful application for threading, though it was the first and
last time in my programming career where it was worth the trouble of
implementing it.
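
(Since this thread started out about Perl: a rough sketch of that same
walker/worker split using Perl ithreads and a queue -- not Michael's
original C/FTS code, and the UID table and paths here are invented for
illustration:)

    #!/usr/bin/perl
    # Hypothetical walker/worker sketch.  The main thread only walks the
    # tree; a pool of worker threads does the stat()/chown() calls that
    # each pay an NFS round trip.
    use strict;
    use warnings;
    use threads;
    use Thread::Queue;
    use File::Find;

    my %uid_map = ( 1001 => 2001 );   # example UID change table (made up)
    my $queue   = Thread::Queue->new;

    my @workers = map {
        threads->create(sub {
            while (defined(my $path = $queue->dequeue)) {
                my @st = lstat $path or next;
                my ($uid, $gid) = @st[4, 5];
                if (exists $uid_map{$uid}) {
                    chown $uid_map{$uid}, $gid, $path
                        or warn "chown $path: $!\n";
                }
            }
        });
    } 1 .. 8;                         # pool size: tune to pending-RPC limits

    find(sub { $queue->enqueue($File::Find::name) }, '/mnt/nfs/home');
    $queue->end;                      # no more work; workers drain and exit
    $_->join for @workers;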

-Michael Pelletier.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ben Scott
Sent: Wednesday, October 22, 2008 11:09 AM
To: Greater NH Linux User Group
Subject: Re: shell, perl, performance, parallelism, profiling,etc. (was:
Upgrade guidance)

On Wed, Oct 22, 2008 at 9:39 AM, Bill Ricker [EMAIL PROTECTED] wrote:
 MP in any language is tricky ...

  I've done very little MP work, but that has been my impression.
Fundamental to the thinking of most programmers (myself included) is that
variables won't change unless the code writes to them.

 Perl has tried a couple times to make it safe and easy, but MP 
 intrinsically isn't ...

  Ah, much like security.  :)

 A shell pipe with sort in it won't be doing more than loading the 
 executable / spawning the process in parallel, since the sort won't 
 write until it's done reading.

  Right.  The case-in-point is, perhaps, an unusual situation.  There aren't
many forks involved, just a handful of programs, and the two big ones can
(in theory) run concurrently on the same input data set.

-- Ben
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Ethtool

2008-10-22 Thread Jerry Feldman

On 10/22/2008 10:31 AM, [EMAIL PROTECTED] wrote:
CentOS5.2, ethtool 5, Intel PCI 1000Mbit ethernet card - running e1000 
driver.


The connection from my pc to the local gbit switch is autonegotiating to 
10Mbit :(
I tried using ethtool to change the connection speed and it seems either I 
am not doing it right (most likely), or something is wrong.


If I attach the NIC directly to my remote server it negotiates to 
1000Mbit, so I think the NIC is ok.  If I plug the remote server directly 
to the switch, it negotiates to 1000Mbit, so I would think the switch is 
ok. 


So how does one set the speed in ethtool?

# ethtool -s eth0 speed 1000 duplex full   ?

How come this does not work?  Wrong syntax?  Something else?  I've tried a 
bunch of cables, including ones that did support 1000Mbit.



  
I think it is a hardware issue. Check both the switch and the cables. 
Could be with the switch. Most GB switches have colored lights to 
check what speed each port is connected at. Is the uplink connection 
between the switch and the server running at 1000Mb? I assume your cable 
is ok because you do negotiate at 1000Mb to the server. Nearly all my 
systems in the office here are Linux on 1 or 2 Netgear GB switches. My 
IA64 box at my desktop is showing 1000Mb with the e1000 driver. This is 
plugged into a wall outlet, and I believe all cables are either CAT 5E 
or CAT 6. I checked one of the servers, and it is full 1000Mb using the 
tg3 driver.


A few years ago my desktop system at work (Digital Alpha) was connecting 
at 10Mb half duplex into a 100Mb switch. The problem was a bug in the 
switch firmware, but we were able to force 100Mb full duplex by setting 
the driver configuration in modules.conf. ethtool should be able to do 
this also.


The first thing I would do in your case is to check the cable. Most CAT 
5 cables can handle 1000Mb, but they are not guaranteed to and some 
cheap ones just don't have the extra wires. Try using a known good cable 
to the switch. Also test the switch with another system to make sure the 
port you are using on the switch negotiates at 1000Mb with another computer.


Lastly, if you have a managed switch and have plugged in a recent version 
of XP or Vista, they have code that can detect the switch and upload 
Linux detection firmware onto the switch. That is why most of the 
laptops in my office are connected through another switch to prevent MS 
from hurting our Netgear switches :-)


--
Jerry Feldman [EMAIL PROTECTED]
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846




signature.asc
Description: OpenPGP digital signature
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Ethtool

2008-10-22 Thread Travis Roy
 Lastly, if you have a managed switch and have plugged in a recent version of
 XP or Vista, they have code that can detect the switch and upload Linux
 detection firmware onto the switch. That is why most of the laptops in my
 office are connected through another switch to prevent MS from hurting our
 Netgear switches :-)

I'm curious about this, do you have any more details?
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Jerry Feldman

On 10/22/2008 11:34 AM, Ben Scott wrote:

On Wed, Oct 22, 2008 at 10:12 AM, Jerry Feldman [EMAIL PROTECTED] wrote:
  

The original reason for the sticky bit is because some Unix commands are so
frequently used and small that keeping them in memory significantly improves
performance, especially in a multi-user system.



  Plus (from what I've been told), older Unix systems didn't always
have very sophisticated caching subsystems, and/or enough RAM to make
use of such.  So the sticky bit was a way to manually tell a system to
never unload an executable image.  Linux effectively makes that
decision automatically.

  

I believe from my previous research on Linux virtual memory that commands
will remain in virtual memory long enough to where the sticky bit is not
needed.



  That correlates with what I read (long ago): Linux is entirely
demand-paged, and does not implicitly commit swap space.  The
executable file is mapped into virtual memory first.  Then the process
starts, and immediately triggers a page fault to load the code.  Pages
are not allocated in the swap file until the system needs to free up
RAM for more stuff.  That's one of the reasons the kernel
out-of-memory algorithms are of such interest.  (Again, I'm just
repeating received wisdom; this could be wrong.)

  
Unix did not have virtual memory until the early 1980s, and by that time 
each vendor had its own version. Most Unix VMs are demand paged using an 
LRU-based page replacement algorithm, unlike Digital's VMS, which was based 
on working-set-size algorithms.  In all cases you need hardware to 
support virtual memory, which was not available to Unix in the 1970s, 
although Unix is derived from Multics. The algorithms used by Sun's, 
Digital's, and other vendors' Unix systems were all different. By the 
time Linux came out, the 386 chip had support for virtual memory with 
the LRU algorithm. The beauty of the way Linux and most Unix systems use 
VM is that read-only pages are not physically loaded until needed, and 
that data pages are not created until they are written to. And (as has 
been the case for many years in Unix) multiple instances of the same 
executable share the same text segment. One might remember the Unix 
vfork(2) system call. Originally, the fork(2) call would clone the 
entire data segment (data and .bss), but vfork(2) would implement a 
copy-on-write strategy. Since Linux has been virtual since inception, 
there is no difference between the vfork(2) and fork(2) system calls. 
The bottom line is that Linux and some Unix systems have a very 
efficient virtual memory and only use the swap space when memory becomes 
tight.
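
(A minimal Perl sketch of the fork-then-exec pattern that vfork(2) and
copy-on-write were meant to make cheap; Perl's fork maps straight onto
fork(2), and on Linux the parent's data is shared copy-on-write rather
than cloned:)

    #!/usr/bin/perl
    # Minimal fork/exec sketch.  On Linux the child shares the parent's
    # pages copy-on-write, so forking even a large process is cheap when
    # the child immediately exec()s another program.
    use strict;
    use warnings;

    my $big = 'x' x 50_000_000;       # ~50 MB of parent data, never duplicated

    my $pid = fork;
    die "fork failed: $!\n" unless defined $pid;

    if ($pid == 0) {                  # child: replace ourselves with echo
        exec '/bin/echo', 'hello from the child'
            or die "exec failed: $!\n";
    }

    waitpid $pid, 0;                  # parent: wait for the child to finish
    print "parent still holds ", length($big), " bytes\n";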


Today, demand-paged virtual memory is the standard virtual memory 
implemented by most systems, but way back there were other methods of 
virtual memory, such as the older Burroughs systems.


--
Jerry Feldman [EMAIL PROTECTED]
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846




signature.asc
Description: OpenPGP digital signature
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Jerry Feldman

On 10/22/2008 11:34 AM, Ben Scott wrote:

I did several test runs in succession. The goal is to get
everything cached in RAM for these trials.  (Artificial, to be sure,
but it's harder to normalize the effects of disk I/O.)  The first run
in a series was typically different.  I threw those results out.  The
results I have been reporting are the typical case, after everything
is cached.  It seems to be stable after that first run.  (Within +/-
0.1 seconds real.)

  

Additionally, Perl compiles the script first.



  Hmmm, that's an interesting point.  Still, once compiled, Perl
scripts are supposed to run faster.  bash has to parse and interpret
everything as it goes.  In this case, the shell variant doesn't have
any looping, so I would expect it to be a wash.

  If there was any kind of loop in the shell script, I would expect
the Perl variant to be much faster at the task.

  Hmmm, I suppose another experiment would be to turn the shell
variant into a Perl script, without using any Perl constructs beyond
what the shell variant does.  I'll try that next.

  

Another way that you can look at performance is to use a C shell script on
a system where the C Shell is not used as a login shell, so it is
generally not resident.



  csh/tcsh would be resident after the first run.  :-)
  
Yes, the csh/tcsh would be resident after the first run, but so would 
perl. Probably a reasonable benchmark would be to run the scripts 
several times in succession, and throw out the first run. This would 
most likely compensate for the load time of the Perl compiler/interpreter.
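
(A hypothetical timing harness along those lines, with made-up script
names, that discards the cold first run and reports the rest:)

    #!/usr/bin/perl
    # Hypothetical harness: run each candidate script five times,
    # throw away the first (warm-up) run, print the remaining times.
    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my @scripts = ('./report.sh', './report.pl');   # made-up names
    for my $cmd (@scripts) {
        my @times;
        for (1 .. 5) {
            my $t0 = [gettimeofday];
            system($cmd) == 0 or die "$cmd failed: $?\n";
            push @times, tv_interval($t0);
        }
        shift @times;                                # drop the first run
        printf "%-12s %s\n", $cmd,
               join ' ', map { sprintf '%.2fs', $_ } @times;
    }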


Another thing is comparing a compiled program to an interpreted program. 
I have seen some cases where interpreted programs can be made to run 
faster than some compiled programs. I have actually seen that with 
FORTH, but the traditional wisdom that compiled programs generally run 
faster than interpreted programs holds true most of the time.


However, my thought is that if you are concerned with the performance of 
a shell or Perl script, then you have the wrong language, and should 
consider a more traditional compiled language. On the BLU server, we had 
a script that would convert mailman passwords to htpasswords. The shell 
script took many minutes (I think it was over 30 at the time); when I 
rewrote it in C++, it took something like seconds. Scripts, whether 
written in BASH, CSH, or Perl, are very useful, but they have their 
place. I've seen whole applications written entirely in scripts 
(Unix shell, Perl, DCL), but once performance becomes an issue then you 
really need a binary solution.


--
Jerry Feldman [EMAIL PROTECTED]
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846




signature.asc
Description: OpenPGP digital signature
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Ethtool

2008-10-22 Thread Jerry Feldman

On 10/22/2008 02:06 PM, Travis Roy wrote:

Lastly, if you have a managed switch and have plugged in a recent version of
XP or Vista, they have code that can detect the switch and upload Linux
detection firmware onto the switch. That is why most of the laptops in my
office are connected through another switch to prevent MS from hurting our
Netgear switches :-)



I'm curious about this, do you have any more details?
  

Maybe Alex Hewitt can provide the answer to this question :-)

--
Jerry Feldman [EMAIL PROTECTED]
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846




signature.asc
Description: OpenPGP digital signature
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Kevin D. Clark

Jerry Feldman writes:

 However, my thought is that if you are concerned with the
 performance of a shell or Perl script, then you have the wrong
 language, and should consider a more traditional compiled
 language. On the BLU server, we had a script that would convert
 mailman passwords to htpasswords. The shell script took many minutes
 (I think it was over 30 at the time); when I rewrote it in C++,
 it took something like seconds. Scripts, whether written in BASH,
 CSH, or Perl, are very useful, but they have their place. I've seen
 whole applications written entirely in scripts (Unix shell, Perl,
 DCL), but once performance becomes an issue then you really need a
 binary solution.

I kind-of agree with this, but on the other hand, I'm sure you'll
agree with me when I state that before you throw out an interpreted
implementation of a program, you owe it to yourself to ensure that
you're at least using a reasonably efficient algorithm.

I.e. bubblesort written in Perl is going to get clobbered by shellsort
written in C.
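
(To put the point in code -- a toy Perl comparison of my own, not Kevin's:
a hand-rolled quadratic sort against perl's built-in sort, which has been
a mergesort since 5.8:)

    #!/usr/bin/perl
    # Toy illustration: an O(n^2) bubblesort in pure Perl vs. the
    # built-in sort (O(n log n), implemented in C inside perl).
    use strict;
    use warnings;

    sub bubblesort {
        my @a = @_;
        for my $i (0 .. $#a - 1) {
            for my $j (0 .. $#a - 1 - $i) {
                @a[$j, $j + 1] = @a[$j + 1, $j] if $a[$j] > $a[$j + 1];
            }
        }
        return @a;
    }

    my @data = map { int rand 10_000 } 1 .. 2_000;

    my @slow = bubblesort(@data);           # quadratic, interpreted
    my @fast = sort { $a <=> $b } @data;    # built-in, runs in C

    print "same order\n" if "@slow" eq "@fast";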

Regards,

--kevin
-- 
GnuPG ID: B280F24EMeet me by the knuckles
alumni.unh.edu!kdcof the skinny-bone tree.
http://kdc-blog.blogspot.com/ -- Tom Waits
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Server issues baffling me...

2008-10-22 Thread Neil Joseph Schelly
I'm hoping someone can say they recognize this and that if I press 'Alt-p' or 
something, it will all go away.  I am not that optimistic, but I figured it's 
worth a try.

I have a server that I can turn on in our office, plugged into the wall, and 
it will work fine for days, weeks, whatever.  I have never had a problem 
there.  When I bring it to the datacenter, it won't finish booting before it 
starts to get hda DMA timeouts.  It's the errors I typically associate with a 
failed drive.  Without fail, the machine does it every time it's booted up in 
the datacenter.  And without exception again, it works fine in the office.

To replicate the error in the office, I've tried switching the IDE cables, 
running badblocks or other disk-thrashing sorts of programs like dd 
if=/dev/hda of=/dev/null many times.  It's run for over a week without any 
issue.  I've tried it with the network interface at full and half duplex.  I 
tried running the machine in a closed room that probably got up to about 
75-80 degrees or so in temperature.

To prevent the error in the datacenter, I've tried booting it with different 
kernels. I've disconnected the network cable so that it's just power and a 
serial console.  I also did just power and a monitor/keyboard.  No matter 
what I try, it never gets to finish the booting process, not even to 
single-user mode, before the timeouts start filling the screen.

Has anyone seen any behavior like this?  At this point, I don't even know 
where to look.  I can't imagine that there's actually an element of our 
office that provides a better environment for machines and the office power 
surely can't be any better than what's at the datacenter either.  No other 
machines are exhibiting this behavior.  The server in question had been 
running fine in the datacenter for months until this apparent disk failure 
occurred.  I replaced the disk and it worked for another month.  I replaced 
that disk under warranty and the new one never booted up right.  I don't 
believe I've actually got 3 hard drive failures in a month's time, but I 
don't know what else to look at.

Help...
-N
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Jerry Feldman

On 10/22/2008 03:49 PM, Kevin D. Clark wrote:

Jerry Feldman writes:

  

However, my thought is that if you are concerned with the
performance of a shell or Perl script, then you have the wrong
language, and should consider a more traditional compiled
language. On the BLU server, we had a script that would convert
mailman passwords to htpasswords. The shell script took many minuted
(I think it was over 30 at the time) where I rewrote it in C++, and
it took something like seconds. Scripts, whether written in BASH,
CSH, or Perl are very useful, but they have their place. I've seen
whole applications written entirely in scripts (Unixshell, Perl,
DCL), but once performance becomes an issue then you really need a
binary solution.



I kind-of agree with this, but on the other hand, I'm sure you'll
agree with me when I state that before you throw out an interpreted
implementation of a program, you owe it to yourself to ensure that
you're at least using at least using a reasonably efficient
algorithm.

I.e. bubblesort written in Perl is going to get clobbered by shellsort
written in C.
  
Bubblesort is one of the slowest sorting algorithms in any language. 
Shell sort is not all that good either, that is why we have quicksort. 
Algorithms are very important regardless of programming language, and 
data structures tie directly into this.


--
Jerry Feldman [EMAIL PROTECTED]
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846




signature.asc
Description: OpenPGP digital signature
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Server issues baffling me...

2008-10-22 Thread Bruce Dawson
Neil Joseph Schelly wrote:
 I'm hoping someone can say they recognize this and that if I press 'Alt-p' or 
 something, it will all go away.  I am not that optimistic, but I figured it's 
 worth a try.

 I have a server that I can turn on in our office, plugged into the wall, and 
 it will work fine for days, weeks, whatever.  I have never had a problem 
 there.  When I bring it to the datacenter, it won't finish booting before it 
 starts to get hda DMA timeouts.  It's the errors I typically associate with a 
 failed drive.  Without fail, the machine does it every time it's booted up in 
 the datacenter.  And without exception again, it works fine in the office.

 To replicate the error in the office, I've tried switching the IDE cables, 
 running badblocks or other disk-thrashing sorts of programs like dd 
 if=/dev/hda of=/dev/null many times.  It's run for over a week without any 
 issue.  I've tried it with the network interface at full and half duplex.  I 
 tried running the machine in a closed room that probably got up to about 
 75-80 degrees or so in temperature.

 To prevent the error in the datacenter, I've tried booting it with different 
 kernels. I've disconnected the network cable so that it's just power and a 
 serial console.  I also did just power and a monitor/keyboard.  No matter 
 what I try, it never gets to finish the booting process, not even to 
 single-user mode, before the timeouts start filling the screen.

 Has anyone seen any behavior like this?  At this point, I don't even know 
 where to look.  I can't imagine that there's actually an element of our 
 office that provides a better environment for machines and the office power 
 surely can't be any better than what's at the datacenter either.  No other 
 machines are exhibiting this behavior.  The server in question had been 
 running fine in the datacenter for months until this apparent disk failure 
 occurred.  I replaced the disk and it worked for another month.  I replaced 
 that disk under warranty and the new one never booted up right.  I don't 
 believe I've actually got 3 hard drive failures in a month's time, but I 
 don't know what else to look at.

 Help...
 -N

   
Many years ago I had a similar problem, but the system had another drive
in it, and I just used that one and gave up on using the bad drive. Many
moons afterward (1-3 years; not sure how long, I just remember it was
long enough for me to forget about the bad disk) the power supply died.
I replaced it. And then both drives started working.

You may want to put the drive on an unused pigtail from the power supply
before actually swapping out the power supply.

Your office probably has different power from the data center.

Also, if possible, use an oscilloscope to check the power at both places
and see if there are obvious differences. Some data centers use
purified power, which seems to cause problems for some systems
(switching power supplies?), but works fine for server-class systems.
Unfortunately, I wasn't at that client long enough to chase down the
problem (not that they wanted a programmer fooling with a 'scope in a
production environment anyway).

--Bruce
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


RE: shell, perl, performance, parallelism, profiling, etc.

2008-10-22 Thread Roger M. Levasseur
 
One might remember the Unix
 vfork(2) system call. Originally, the fork(2) call would clone the 
 entire data segment (data and .bss), but vfork(2) would implement a 
 copy-on-write strategy. Since Linux has been virtual since inception, 
 there is no difference between the vfork(2) and fork(2) system calls.

I recall reading that the BSD developers originally intended
fork(2) to do a copy-on-write method, but a micro-code bug in the
VAX 11/750 prevented that from working, so they implemented vfork(2)
as a way to get around it.  That vfork(2) didn't do copy-on-write,
but suspended the parent process until the child either did an exec
or an exit.  In fact, the vfork(2) (4.2BSD) man page says that the
call would be eliminated when proper system sharing mechanisms are
implemented.

I've read elsewhere that others didn't believe that the firmware bug
existed.

 -roger



___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Server issues baffling me...

2008-10-22 Thread Dan Jenkins
Neil Joseph Schelly wrote:
 I have a server that I can turn on in our office, plugged into the 
 wall, and it will work fine for days, weeks, whatever.  I have never 
 had a problem there.  When I bring it to the datacenter, it won't 
 finish booting before it starts to get hda DMA timeouts.  It's the 
 errors I typically associate with a failed drive.  Without fail, the 
 machine does it every time it's booted up in the datacenter.  And 
 without exception again, it works fine in the office.

Is the visiting server connected to the exact same power source as the
datacenter equipment? Is the monitor/serial console plugged into the
same power as the computer?

Years ago, we had a client with an odd problem, where the system became
intermittently unstable, and the hard drive sometimes failed, when the
equipment was moved into another room. We discovered there was a 400
volt difference in ground between the outlet into which the computer was
sometimes plugged and another outlet used by a peripheral, which was
occasionally connected to the computer, when the computer was in that
room. When the peripheral was connected and turned on and the computer
was plugged into the other outlet, the computer became unstable and
ate hard drives. It took a lot of visits for our tech to find that one.

We had a vaguely similar problem once with network cables with differing 
ground voltages.


___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Server issues baffling me...

2008-10-22 Thread Greg Rundlett
Not server related, but a related tale.  We repaired our broken vacuum
cleaner by replacing the motor (which definitely was starting to burn
out).  The funny part was that we returned for warranty service a
couple times because the vacuum would not work when we brought it
home.  The problem turned out to be that the one easily accessible
outlet in the living room was no longer working properly.



On Wed, Oct 22, 2008 at 11:55 PM, Dan Jenkins [EMAIL PROTECTED] wrote:
 Neil Joseph Schelly wrote:
 I have a server that I can turn on in our office, plugged into the
 wall, and it will work fine for days, weeks, whatever.  I have never
 had a problem there.  When I bring it to the datacenter, it won't
 finish booting before it starts to get hda DMA timeouts.  It's the
 errors I typically associate with a failed drive.  Without fail, the
 machine does it every time it's booted up in the datacenter.  And
 without exception again, it works fine in the office.

 Is the visiting server connected to the exact same power source as the
 datacenter equipment? Is the monitor/serial console plugged into the
 same power as the computer?

 Years ago, we had a client with an odd problem, where the system became
 intermittently unstable, and the hard drive sometimes failed, when the
 equipment was moved into another room. We discovered there was a 400
 volt difference in ground between the outlet into which the computer was
 sometimes plugged and another outlet used by a peripheral, which was
 occasionally connected to the computer, when the computer was in that
 room. When the peripheral was connected and turned on and the computer
 was plugged into the other outlet, the computer became unstable and
 ate hard drives. It took a lot of visits for our tech to find that one.

 We had a vaguely similar problem once with network cables with differing
 ground voltages.


 ___
 gnhlug-discuss mailing list
 gnhlug-discuss@mail.gnhlug.org
 http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/




-- 
visit http://freephile.com today
skype/aim/irc/twitter freephile
home office 978-225-8302
[EMAIL PROTECTED]
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/