Now that I have your attention (grin) what I'd like is something like:
download -host MYMVS -user MYUSER -password MYPASS file 'MYUSER.PDS.CNTL(member)'
$EDITOR file
upload -host MYMVS -user MYUSER -password MYPASS file 'MYUSER.PDS.CNTL(member)'
Will this work for ya John? Let me know if ya
On Tue, Jul 29, 2003 at 02:51:40AM -0500, Lucius, Leland wrote:
Will this work for ya John? Let me know if ya have questions.
I must confess. I did this just for you. I never intended to use it. But,
doggone it, I really like it. Heck, if I start to get used to it, I might
even modify
I have SuSE Linux SLES7 S390 with kernel timer patch installed under zVM
4.3. z800. I need to install DB2 UDB V7.2 to support LDAP.
I started the DB2 install with ./db2setup and received the following error
message immediately.
./db2inst: error while loading shared libraries:
First I would recommend using IBM DB2 8.1. I was able to get around this
error by doing the following:
ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3
I then executed the rpm installs with the following options
rpm -ihv --nodeps
Hope that helps.
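The workaround above can be sketched as a small helper. This is a minimal sketch, assuming the paths and library names quoted in the thread; verify them on your own SLES7 system before creating any links.

```shell
#!/bin/sh
# Sketch of the workaround described above. The library directory and the
# soname db2inst asks for are taken from the thread and may differ locally.
fix_db2_libs() {
  libdir=${1:-/usr/lib}

  # See which shared libraries the installer wants but cannot find:
  ldd ./db2inst | grep 'not found'

  # db2inst asks for the older libstdc++ soname; point it at the compat
  # version that SLES7 actually ships:
  ln -s "$libdir/libstdc++-libc6.2-2.so.3" \
        "$libdir/libstdc++-libc6.1-2.so.3"
}
```

The same technique (a compatibility symlink for a missing soname) is what resolves the apachectl error mentioned later in the thread.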
Otherwise I believe
Eric,
Based on the level of LDAP that needs to be deployed, DB2 V8 is not an option.
The LDAP is part of TAM 3.9. It is only supported on DB2 UDB V7.
In your response you suggest I enter a link statement to point the library to
what DB2 V7 requires.
I am installing DB2 V7 using the script ./db2setup; what rpm
I installed TAM 4.1 and that is why I have UDB V8. Ahh the
requirements
Anyhow, I installed the following RPMs instead of using the db2inst
script.
IBM_db2sp81-8.1.0-0
IBM_db2icuc81-8.1.0-0
IBM_db2jdbc81-8.1.0-0
IBM_db2crte81-8.1.0-0
IBM_db2conn81-8.1.0-0
IBM_db2rte81-8.1.0-0
Terry,
On the SLES7 CD there should be a package called compat. It contains the
correct version of the library that DB2 requires.
On Tuesday 29 July 2003 07:14 am, you wrote:
I installed TAM 4.1 and that is why I have UDB V8. Ahh the
requirements
Anyhow, I installed the following RPMs
On Tue, 29 Jul 2003, Eric Sammons wrote:
I installed TAM 4.1 and that is why I have UDB V8. Ahh the
requirements
Anyhow, I installed the following RPMs instead of using the db2inst
script.
IBM_db2sp81-8.1.0-0
IBM_db2icuc81-8.1.0-0
IBM_db2jdbc81-8.1.0-0
IBM_db2crte81-8.1.0-0
-Original Message-
From: John Summerfield [mailto:[EMAIL PROTECTED]
Sent: Monday, July 28, 2003 6:08 PM
To: [EMAIL PROTECTED]
Subject: Re: Stripping trailing blanks?
On Mon, 28 Jul 2003, McKown, John wrote:
snip
I invoke it in a subdirectory with:
for i in *;do ../nonum.sh
We're testing with Websphere under RH 7.2 Linux running under z/VM.
When a test process is invoked and it goes into a CPU loop, the only
option I can see to recover is to do a #CP IPL. This will eventually
result in a corrupted HFS. Any suggestions on how to better manage
process loops? During
On Tue, 29 Jul 2003, McKown, John wrote:
-Original Message-
From: John Summerfield [mailto:[EMAIL PROTECTED]
Sent: Monday, July 28, 2003 6:08 PM
To: [EMAIL PROTECTED]
Subject: Re: Stripping trailing blanks?
On Mon, 28 Jul 2003, McKown, John wrote:
snip
I invoke it
Eric,
The link did the trick. I entered the link, used ./db2setup, and
everything installed with no problem.
I was also installing HTTP from WAS V5 with Fixpak 1 on another SuSE SLES7
which worked ok for the install.
When I tried to do ./apachectl start I received the same error as in the
DB2 UDB
Go Aussies!
http://www.theregister.com/content/61/31910.html
http://www.theinquirer.net/?article=10743
Certainly, SCO has succeeded in making lots of very smart people extremely angry.
This isn't a great strategy in almost any situation.
But aside from a few shills for proprietary
We recently encountered an application loop with a Linux Websphere instance.
I was able to get a VM PER Branch trace, but I cannot find any command
within Linux to display those memory locations or to determine where modules
are actually loaded. The code does not have any 'eyecatchers' either, so
/proc/pid/maps will show you where the executable and shared libraries are
loaded. You can use the nm command to display the offsets within the shared
library and executable for each of the entry points and then do the
relocation to work out where in storage the entry points are. (However, if
the
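The recipe above (base address from /proc/PID/maps plus the symbol offset from nm) can be sketched as a shell function. This assumes a Linux /proc filesystem and binutils; the pid, library path, and symbol name arguments are illustrative.

```shell
#!/bin/sh
# Sketch: relocate an nm symbol offset to its absolute address in a running
# process. Works for shared libraries; for a non-PIE executable the nm
# addresses are already absolute and no relocation is needed.
where_is() {
  pid=$1 lib=$2 sym=$3

  # 1. Base address the library was mapped at: first field of the first
  #    matching maps line, before the '-':
  base=$(awk -v lib="$lib" '$0 ~ lib { split($1, a, "-"); print a[1]; exit }' \
        "/proc/$pid/maps")

  # 2. The symbol's offset inside the library, from its dynamic symbol table:
  off=$(nm -D "$lib" | awk -v sym="$sym" '$3 == sym { print $1; exit }')

  # 3. Relocate: absolute address = base + offset (both are hex):
  printf '%s is at 0x%x\n' "$sym" $(( 0x$base + 0x$off ))
}
```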
Terry,
I wanted to point something out: it is my understanding that TAM
3.9 is in fact not certified on zLinux. Are you aware of that as well, or have
you heard otherwise?
Thanks!
Eric Sammons
FRIT - Infrastructure Engineering
Terry Spaulding [EMAIL PROTECTED]
Sent by: Linux on 390 Port
We've had an incident like this with a customer. It turned out to be a
garbage collection loop; storage for a very large object was required and
there wasn't enough room in the heap for it. It turned out to look like a
runaway database query returning a 100MB+ result set.
On Tuesday 29
I had a similar problem to this. I did not need to do a POR to fix it.
What you need to do is, from the HMC, toggle the CHPID for the device offline
to the Linux LPAR.
On EVERY OTHER OS/390 or z/OS LPAR, vary the devices for that OSA-E
offline, and then configure the CHPID offline to each LPAR.
In the same situation I mentioned earlier, we changed the priority of the
application server processes to be lower than any telnet process, so that
telnet users could still gain access in the event of a loop. The application
server priority can be changed from the admin app.
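The thread does this from the admin app; a minimal OS-level sketch of the same idea is below. The process name `java` is a hypothetical stand-in for the application server, and the nice value is arbitrary.

```shell
#!/bin/sh
# Lower the priority of a looping application-server process so interactive
# telnet/ssh sessions stay responsive. A LARGER nice value means a LOWER
# scheduling priority; unprivileged users may only raise the nice value.
lower_appserver_priority() {
  pid=$(pgrep -o java) || return 1   # 'java' is an assumed process name
  renice 10 -p "$pid"
}
```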
On Tuesday 29 July
On Tuesday, 07/29/2003 at 09:29 EST, James Melin
[EMAIL PROTECTED] wrote:
I was told that varying the OSA CHP
offline to all using LPARs will cause it to reload its configuration or
something
Yes, that is true. Once an OSA chpid (the whole chpid, not just all the
devices on it!) is offline
Thanks.
--- Rich Smrcina [EMAIL PROTECTED] wrote:
In the same situation I mentioned earlier, we changed the priority of the
application server processes to be lower than any telnet process, so that
telnet users could still gain access in the event of a loop. The application
server priority
I'm not sure it will work. One gotcha:
# PRE=cut -b 1-72 | sed -e s/\ \*\$//
If this weren't remmed out, you probably would have had a
non-functioning script.
Hmmm, did you actually try the script? Did you look further down or did you
just stop right at that line and assume you knew there was a
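For reference, the contested PRE line amounts to the standard idiom for stripping trailing blanks. Written with conventional quoting, it is:

```shell
#!/bin/sh
# Truncate each line to 72 columns, then delete trailing blanks - the same
# pipeline as the PRE variable above, with ordinary quoting.
printf 'hello   \n' | cut -b 1-72 | sed -e 's/ *$//'
# prints: hello
```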
H,
Looking at all of these easy-to-remember ways to strip trailing blanks
reminds me why I like VM/CMS and PIPES. So instead of one of the incredibly
convoluted and unfriendly commands like this:
ncftpget -W $GX -d $W/get.$$ -a -c -u $U -p $P $H $F\($mbr\) | (eval $PRE) $W/$mbr
I can
I haven't found any item that is not already covered in the
below announcement. You may test and report any mainframe bugs
you find.
greetings,
Florian La Roche
- Forwarded message from [EMAIL PROTECTED] -
From: [EMAIL PROTECTED]
Subject: Announcing Red Hat Enterprise Linux 3
This isn't the same thing. Somebody had to write the STRIP command for VM.
And the STRIP command only does that one thing. The Unix/Linux cut|sed is a
more general facility that took much less programming effort than what you
have to do under VM to get the same facilities.
You didn't show how
pipe name type | ftp ftp://userid:[EMAIL PROTECTED]/place_to_put_it (If I
put it as the 1st stage I can FTP to VM.)
PIPE stages use exactly the same philosophy of most UNIX commands. Do one
thing and do it well. Then put all these little stages together to make it
do interesting stuff. Unlike
I had a look at the eBay prototype and it was, well, less than moving. What
they have is a fibre cable going into a switch, then dozens of cables going
to dozens of web servers in Intel boxes in racks, then dozens of cables
going to a switch to a single fibre to a database server.
So, with web
pipe name type | ftp
ftp://userid:[EMAIL PROTECTED]/place_to_put_it
(If I put it as the 1st stage I can FTP to VM.)
But, can you:
ftp ... | post processing
(Personally, I'd rather this didn't turn into a harping match on the
benefits of either piping method.)
Leland
Philosophical question?
The heart of the matter lies in why so many images in the first place?
If I need a half dozen images of Linux to service the Web, but those
Linux images can all be running under VM, what is different between
Linux and VM that lets VM handle the concurrent workload better
You mean something like:
PIPE ftp ftpspec | strip | xlate from 437 to 1047 | spec w5.3 1 | sort |
postproc file a
Yep.
No harping match. You use the tool(s) that work, that you are comfortable
with. Sometimes you don't know what a tool is capable of.
-Original Message-
But, can you:
-Original Message-
From: Ward, Garry [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 11:34 AM
To: [EMAIL PROTECTED]
Subject: Re: Whither consolidation and what then?
Philosophical question?
The heart of the matter lies in why so many images in the first place?
If I need a
My take on multiple images is twofold.
But first, the disclaimer:
This assumes you have sufficient resources in the first place to do
this (normally real memory).
1. I don't know this to be true with Linux, but the Unix types have
always been leery of having multiple applications running on
At one time I did a lot of work with Unix, and I never had any problems with
multiple processes corrupting the memory of other processes. Have there
been some bugs introduced into Unix recently? I have not been working with
Unix for a couple of years, unless you count z/OS USS.
On the other
What happens then? You still have dozens of copies of
Linux running in dozens of EC machines. And they're
talking to each other via TCP/IP stacks over a number
of high speed connections. Have you really advanced
the architecture and capabilities of Linux?
Yes, this is a fabulous question
I'll be at the IBM booth helping answer zSeries
questions next week (Tuesday and Wednesday).
Who all will be there?
=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
Computers are useless. They can only give answers. - Pablo Picasso
I'll be working various booths.
On Tuesday 29 July 2003 01:22 pm, you wrote:
I'll be at the IBM booth helping answer zSeries
questions next week (Tuesday and Wednesday).
Who all will be there?
=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley
Computers are
On Maw, 2003-07-29 at 19:10, Fargusson.Alan wrote:
At one time I did a lot of work with Unix, and I never had any problems with
multiple processes corrupting the memory of other processes. Have there
been some bugs introduced into Unix recently?
Not that I've noticed. Multiuser has gone out
On Tue, 2003-07-29 at 13:30, Alan Cox wrote:
You can run 100 sessions on a 390 but I don't think you get the
equivalent of 300GHz of CPU power.
Of course you don't. But you might well get enough CPU to keep your
users happy, depending on what they're doing.
Also of course, the dirty little
Which gets into the client and server question.
The server should be grinding data, not generating graphics. Graphics
are presentation and should be the responsibility of the workstation
(client). Digesting the data that is the basis of the graphics is the
server's business, which is going to
Alan wrote:
You can run 100 sessions on a 390 but I don't think
you get the equivalent of 300GHz of CPU power.
With the new TREXX, you're probably talking 20-30GHz,
assuming 1.2GHz engines x 32.
One of the driving factors of either the multiple
virtual machines or the multiple user model is
Herve,
You're right, everything does look OK. Perhaps bringing z/VM up to a more
current maintenance level will help?
Mark Post
-Original Message-
From: Herve Bonvin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 12:44 AM
To: [EMAIL PROTECTED]
Subject: AW: SuSE SLES8 64 bits
On a side light to this topic, I remember an article I read in the late
'80s or early '90s where someone wrote some 'randomly poke storage'
programs, then started them running under different platforms. As I
remember it, there was some mainframe environment (I forget which), Win NT
3.??, OS/2, Win
Hello from Gregg C Levine
Phil, everything you've been saying about those characters at SCO is
exactly appropriate. The reason your firm wasn't interviewed is
that they may not know of it. Besides, SCO wants positive data that
supports their unsupportable position, not a statement that'll
We recently moved a java app and some MQ clients and servers to
linux/390 for testing. Folks here are used to Solaris and are confused
by the number of processes that show up when you issue 'ps -ef'. Many
more than they are used to. If you ask Jeeves, there is info on the
threading model linux
On Tue, 2003-07-29 at 15:37, Ann Smith wrote:
We recently moved a java app and some MQ clients and servers to
linux/390 for testing. Folks here are used to Solaris and are confused
by the number of processes that show up when you issue 'ps -ef'. Many
more than they are used to. If you ask
New threads and processes are both created via a call to clone(). The former
uses flags that tell clone not to duplicate everything (like virtual
memory). The new thread or process gets a unique PID. In the 2.6 there'll be
something called a process group ID and a new threading model known as
On Tue, 29 Jul 2003, Tom Duerbusch wrote:
My take on multiple images is two fold.
But first, the disclaimer:
This assumes you have sufficient resources in the first place to do
this (normally real memory).
1. I don't know this to be true with Linux, but the Unix types have
always been
I think that the reason the threads don't show up in ps on Solaris is that
'lightweight' processes are implemented in the library at user level. The
kernel does not know about them. This was the case at one time anyway.
The disadvantage of this is that if any thread goes compute bound for a
I've seen this behaviour, too. I once tried to move a large number of
mp3 files from one physical drive to another with rsync, and the machine
locked up, destroyed the reiserfs file systems on both drives, and I
lost a bunch of files. That's the only time I've had a near catastrophic
failure in
Quite right. I would think that once you have a reliable production
application running, you would just leave it alone. When you get the
next release of that application, you would put it on a current level of
Linux. And then kill off the old application and old level of Linux.
That is easy
On Maw, 2003-07-29 at 22:16, Fargusson.Alan wrote:
I think that the reason the threads don't show up in ps on Solaris is that
'lightweight' processes are implemented in the library at user level. The
kernel does not know about them. This was the case at one time anyway.
Modern solaris
On Maw, 2003-07-29 at 20:35, Jim Sibley wrote:
One of the driving factors of either the multiple
virtual machines or the multiple user model is that,
in most applications, most of the time, a single user
is idle and your 300GHz of power is mostly idle.
But in the PC world cpu power is cheap.
On Maw, 2003-07-29 at 19:53, Adam Thornton wrote:
Reading email shouldn't take much CPU, although if you insist on doing
it inside UltraWhizzy
K/Gnome/Mozilla/MultiMediaMailReaderNowWithGratuitousAnimation!!! then
it can find a way, I'm sure, to burn CPU.
Even that is mostly RAM and I/O heavy
On Maw, 2003-07-29 at 20:49, Dale Strickler wrote:
Does anyone know of anyone doing this sort of research now? Anyone running
this or other crash tests like this on Linux (on or off the MVS environment?)
It is simple code to write, just generate two random numbers, treat one as
an address
On Tue, 29 Jul 2003, Ferguson, Neale wrote:
New threads and processes are both created via a call to clone(). The former
uses flags that tell clone not to duplicate everything (like virtual
memory). The new thread or process gets a unique PID. In the 2.6 there'll be
something called a process
On Tue, 29 Jul 2003, Michael Martin wrote:
I've seen this behaviour, too. I once tried to move a large number of
mp3 files from one physical drive to another with rsync, and the machine
locked up, destroyed the reiserfs file systems on both drives, and I
lost a bunch of files. That's the only
On Tue, 29 Jul 2003, Alan Cox wrote:
On Maw, 2003-07-29 at 19:53, Adam Thornton wrote:
Reading email shouldn't take much CPU, although if you insist on doing
it inside UltraWhizzy
K/Gnome/Mozilla/MultiMediaMailReaderNowWithGratuitousAnimation!!! then
it can find a way, I'm sure, to burn
Alan Cox wrote:
crashme is part of the Linux cerberus test suite, although it goes back
many years before. Roughly speaking, crashme does this:
Catch every exception
Generate random data
Execute it
(catching the exception to repeat)
It's found many things, including
Alan wrote:
It's just that PCs are so cheap it's
easier to use several for a job _IFF_ you can solve
the management
problem.
That _IFF_ is not only non-trivial technically, but
also non-trivial financially!
You buy one cheap PC or a hundred cheap PCs; you
still have a bunch of cheap PCs.
On Tue, 29 Jul 2003, Jim Sibley wrote:
Alan wrote:
It's just that PCs are so cheap it's
easier to use several for a job _IFF_ you can solve
the management
problem.
That _IFF_ is not only non-trivial technically, but
also non-trivial financially!
You buy one cheap PC or a hundred
On Tuesday, 07/29/2003 at 08:55 MST, Jim Sibley [EMAIL PROTECTED]
wrote:
So my question is: What moves are afoot to reduce the
number of required images by consolidating their
functions and remove the TCP/IP communications between
applications?
Isn't this the next logical step?
You make