Re: Expanding disk space.

2013-08-20 Thread Tom Duerbusch
Mark:

Sorry for the delay.  I sent this Sunday.  Today I find out my posts to 
LINUX-390 haven't been going thru because I haven't subscribed.

So I think my DROID has a different (outgoing) email address; however, I get all
the emails that I used to get.

So, now I'm off subscribing to all the listservs thru my Droid.

Anyway, following is a repost of my comment:

Tom Duerbusch
THD Consulting




What I gathered from the original post was that maintenance couldn't be applied
due to being out of disk space.  But if the failure is actually caused by memory/swap
filling up, increase virtual memory instead of adding a real 3390 to swap for a
temporary problem.

I run many small, very small, Linux guests for which I do have to increase virtual
storage when I apply maintenance.  The smallest zLinux machine I ever got real
work out of was a 24 MB machine using IUCV for IP connectivity to VM's TCPIP
machine.  It was just a router.  Currently, my smallest machines are 48 MB (ftp
servers), and you just can't apply maintenance anymore (yast with a connection to
a vswitch) in a 48 MB machine.  Memory is cheap, until you have to buy it.
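
The trick described in the quoted message below (re-IPL the guest with more
virtual storage) can be done from the guest's console; a sketch, where the 1G
size and boot device 150 are assumed examples, not values from the post:

```
#CP DEFINE STORAGE 1G     redefine the guest's storage (this resets the virtual machine)
#CP IPL 150               re-IPL Linux from its boot device
  ... apply the maintenance ...
#CP DEFINE STORAGE 512M   back to the normal size for this guest
#CP IPL 150
```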

Tom Duerbusch
THD Consulting

Sent from my Verizon Wireless 4G LTE DROID


Mark Post  wrote:



>>> On 8/16/2013 at 11:40 AM, "Duerbusch, Tom"  
>>> wrote: 
> I have also occasionally had that kind of problem.
> 
> I'm under VM, so I IPL the guest and specify another 512 MB of virtual
> storage.
> Run my maintenance.
> Then reipl with the normal amount of memory for that guest.

Just curious as to what this has to do with adding disk space?


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: DB2 Connect client, any sense in running server?

2012-06-21 Thread Tom Duerbusch
We converted from standalone client copies to DB2 Connect Server under zLinux.

Reasons:

1.  Cost.  The standalone copy was about $450 each.  The server copy for 25 named
users was around $2,250.  A named-user license is per user and cannot be shared.  A
25-pack for concurrent users costs much more.  Another variant is a cost per
engine based on your application server or web server size.  Then there is the
enterprise-wide license.

2.  I have multiple DB2 Connect servers running.  One is production, which has
been up for more than 2 years.  A fallback server in case the production
server crashes (just change what you connect to).  And a test one and MY test
one.

3.  If I add another DB2 (VSE), I just need to add it to DB2 Connect Server.
Many PC-based products will then see the new server when you open the connection.

4.  The "thin client" on the pc is much smaller in disk space and PC memory 
usage.

5.  Putting it, or really "them", on zLinux is what zLinux is all about:
server consolidation.  Takes about 1-2% CPU.

However, you have already paid for your current copies.  We went with the
server method and redistributed the standalone copies to certain users that we
couldn't talk into the server method.

Tom Duerbusch
THD Consulting

Sent via BlackBerry by AT&T

-Original Message-
From: Tom Ambros 
Sender: Linux on 390 Port 
Date: Thu, 21 Jun 2012 10:43:16 
To: 
Reply-To: Linux on 390 Port 
Subject: DB2 Connect client, any sense in running server?

We're running the DB2 Connect client at the various distributed machines
that require it.  Is there any sense in running the DB2 Connect server
product in a Linux on System Z guest to serve the other guests or the
distributed machines removing the client from those distributed devices?

I am of the impression that the only purpose for the server, at this
point, is to perform two-phase commit under certain circumstances that we
do not encounter here.

Do the advantages of running the client wash out when Linux on System Z
for a set of guests becomes the configuration?

Thank you for sharing your experience and advice.

Thomas Ambros
Operating Systems and Connectivity Engineering
518-436-6433


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Cannot add drive to existing LVM SLES 11 SP 1

2012-03-14 Thread Tom Duerbusch
Well, I found my problem.

The yast panels are quite a bit different from previous versions, and it was
not intuitive what to do.

Anyway:

Partitioner:
Volume Management
Tab to /dev/LVM and hit enter.
You now have the Logical Volumes display with a resize option.  Don't do it.
Tab backward to Overview and Resize there.
That is where you get the ability to add/remove volumes from the LVM pool.

On prior versions, there was a LVM option under System, that took you right to 
the panel where you can, optionally, add/remove volumes.

Just have to know the secret handshake and all is well.

Thanks for everyone's help.

Tom Duerbusch
THD Consulting
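
For comparison, the same add-a-volume operation from the command line, along
the lines Scott posted in this thread.  This is a sketch: the device name
/dev/dasde, volume group LVM1, logical volume /dev/LVM1/LVM, and the 2G growth
are example values to substitute with your own:

```
dasdfmt -b 4096 /dev/dasde      # low-level format the new DASD
fdasd -a /dev/dasde             # auto-create one partition spanning the disk
pvcreate /dev/dasde1            # initialize the partition as an LVM physical volume
vgextend LVM1 /dev/dasde1       # add it to the existing volume group
lvextend -L +2G /dev/LVM1/LVM   # grow the logical volume
resize2fs /dev/LVM1/LVM         # grow the ext3 filesystem to match
```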

>>> Marcy Cortes  3/14/2012 3:54 PM >>>
You shouldn't have to unmount to resize.  We increase all the time with both 10 and
11.

The command is different between the 2, though.


Marcy.  Sent from my BlackBerry. 


- Original Message -
From: Scott Rohling [mailto:scott.rohl...@gmail.com] 
Sent: Wednesday, March 14, 2012 03:43 PM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: [LINUX-390] Cannot add drive to existing LVM SLES 11 SP 1

just realized that the lvextend command I showed should be:   lvextend -L
+2G /dev/testvg/testlv

And - if the error is about resize - then it's likely because the
filesystem is mounted and you will need to unmount it.   If yast got that
far, I'm not sure what the state of your volume group is - so you may want
to do a vgdisplay -v to see if the device did get added to the vg..

Scott Rohling

On Wed, Mar 14, 2012 at 11:42 AM, Scott Rohling wrote:

> It's not too difficult to do this on the command line:
>
> lsdasd  and figure out what the /dev/dasd device is - let's say it's dasdx
>
> format it:   dasdfmt -b 4096 /dev/dasdx
> partition it:   fdasd -a /dev/dasdx (make one partition using whole
> device)
> lvm format:  pvcreate /dev/dasdx1
> add to volume group:vgextend vg-name /dev/dasdx1   (where vg-name is
> the volume group name you're extending)
>
> You can then issue appropriate lvextend command to add space to the
> logical volume..  for example - add 2G to to testlv in testvg:
>
> lvextend +L 2G /dev/testvg/testlv
>
> Then issue appropriate resize commands for whatever filesystem..
>
> Hope that helps - not sure about SLES or Yast system tools for this - I
> always use command line.
>
> Scott Rohling
>
>
> On Wed, Mar 14, 2012 at 11:28 AM, Tom Duerbusch <
> duerbus...@stlouiscity.com> wrote:
>
>> I have an existing LVM that is near out of space.
>> I created it with the defaults that came with SLES 11 SP 1.
>>
>> Now I need to add a drive to the LVM pool.  But there doesn't seem to be
>> an option to add a volume to the pool.
>>
>> I have done the same thing with SLES 8, 9 and 10, so it is not like I
>> don't have an understanding of what is needed.
>>
>> So, I'm wondering if SLES 11 SP 1 just didn't include that option by
>> mistake, or if the defaults changed to enable striping, or something else
>> that prevents just adding a disk to the pool, that I didn't pay attention
>> to.
>>
>> I'm now on the tangent of bringing up a test SLES 11 SP 1 system that I
>> can crash and/or destroy while playing around on adding a pack to an
>> existing LVM.  But just in case it is something simple, it is better to ask
>> the collective, before I spend the hours on researching the problem.
>>
>> Thanks
>>
>> Tom Duerbusch
>> THD Consulting
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/ 
>>
>
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/ 


Re: Cannot add drive to existing LVM SLES 11 SP 1

2012-03-14 Thread Tom Duerbusch
It doesn't look striped to me.

I might go with Scott's suggestion and use the command-line format.
Then I can see if it is an LVM problem or a yast problem.  (Now that I have a
test system.)

Tom Duerbusch
THD Consulting

linux74:/ # pvscan
  PV /dev/dasdc1   VG LVM1   lvm2 [6.88 GB / 0 free]
  PV /dev/dasdb1   VG LVM1   lvm2 [6.88 GB / 0 free]
  Total: 2 [13.75 GB] / in use: 2 [13.75 GB] / in no VG: 0 [0   ]
linux74:/ # lvdisplay -m LVM1
  --- Logical volume ---
  LV Name                /dev/LVM1/LVM
  VG Name                LVM1
  LV UUID                gxnyIf-2xuo-ctvS-uYka-tN8T-UxJA-G23zHZ
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                13.75 GB
  Current LE             3520
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:0

  --- Segments ---
  Logical extent 0 to 1759:
    Type                linear
    Physical volume     /dev/dasdc1
    Physical extents    0 to 1759

  Logical extent 1760 to 3519:
    Type                linear
    Physical volume     /dev/dasdb1
    Physical extents    0 to 1759


linux74:/ #


>>> "Ayer, Paul W"  3/14/2012 3:04 PM >>>
> To see how the current lvm is configured, I like to use the command:
> lvdisplay -m lv_name
>
> This will tell you what disks it's on, and what parts of the disks, and/or
> whether it's striped or not ..
>
> Most likely not striped, but if it is, then you will need to add the same
> number and size of disks that are already there ..


=
lvdisplay -m /dev/abcvg/abcvol

  --- Logical volume ---
  LV Name                /dev/abcvg/abcvol
  VG Name                abcvg
  LV UUID                uAZIek-FTUL-6Hgb-ABrw-D6wH-h8zg-mjPxbr
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.97 GB
  Current LE             191
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Segments ---
  Logical extent 0 to 72:
    Type                linear
    Physical volume     /dev/dasdm1
    Physical extents    0 to 72

  Logical extent 73 to 145:
    Type                linear
    Physical volume     /dev/dasdn1
    Physical extents    0 to 72

  Logical extent 146 to 190:
    Type                linear
    Physical volume     /dev/dasdo1
    Physical extents    0 to 44

===

lvdisplay -m /dev/xyzvg/xyzlv

  --- Logical volume ---
  LV Name                /dev/dgsa09vg/stripe_o05gsa1startlv
  VG Name                xyzvg
  LV UUID                BWxYeH-927r-07jJ-E3zV-5FAc-Bofc-fB0S0o
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                50.02 GB
  Current LE             12804
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     24832
  Block device           253:14

  --- Segments ---
  Logical extent 0 to 12803:
    Type                striped
    Stripes             97
    Stripe size         64 KB
    Stripe 0:
      Physical volume   /dev/dasdab1
      Physical extents  1479 to 1610
    Stripe 1:
      Physical volume   /dev/dasdac1
      Physical extents  1479 to 1610
    Stripe 2:
      Physical volume   /dev/dasdad1
      Physical extents  1479 to 1610
    Stripe 3:
      Physical volume   /dev/dasdae1
      Physical extents  1479 to 1610

Goes on this way for 97 disks  


Paul 
617-985-8671




Re: Cannot add drive to existing LVM SLES 11 SP 1

2012-03-14 Thread Tom Duerbusch
Well, that didn't do it, but it did give me an indication of what it wants.

When I try to resize, I get:
You cannot resize the selected partition because the file system on this 
partition does not support resizing.

The partitions were originally created with EXT3.  That shouldn't be a problem. 
 The default in SLES 11 did change from Reiser to EXT3, but I have always used 
EXT3 as my standard on all the prior versions.

On a newly created test system, I can create an LVM using multiple drives, but I
cannot add any drives to the LVM.  I can, however, create another LVM and add
additional drives there, but that is different than expanding an existing LVM.

Right now, I'm on SLES 11 SP 1 without any additional maintenance.  There are
only 7 patches for LVM on this service pack.  There seem to be hundreds of
patches against yast.

Now I'm looking at patches that may address the failure to be able to add 
drives to an existing LVM.

Thanks

Tom Duerbusch
THD Consulting

>>> Mark Workman  3/14/2012 1:48 PM >>>
If you are using YAST:

Partitioner -> Volume Management -> double click volume group you want to
expand -> on the 'Overview' tab select 'Resize'

then select the disk you want add to the group.

Mark Workman
Shelter Insurance Companies
573.214.4672
mwork...@shelterinsurance.com 



From:   Tom Duerbusch 
To: LINUX-390@VM.MARIST.EDU 
Date:   03/14/2012 01:34 PM
Subject:Cannot add drive to existing LVM SLES 11 SP 1
Sent by:Linux on 390 Port 



I have an existing LVM that is near out of space.
I created it with the defaults that came with SLES 11 SP 1.

Now I need to add a drive to the LVM pool.  But there doesn't seem to be
an option to add a volume to the pool.

I have done the same thing with SLES 8, 9 and 10, so it is not like I
don't have an understanding of what is needed.

So, I'm wondering if SLES 11 SP 1 just didn't include that option by
mistake, or if the defaults changed to enable striping, or something else
that prevents just adding a disk to the pool, that I didn't pay attention
to.

I'm now on the tangent of bringing up a test SLES 11 SP 1 system that I
can crash and/or destroy while playing around on adding a pack to an
existing LVM.  But just in case it is something simple, it is better to
ask the collective, before I spend the hours on researching the problem.

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/ 




--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Cannot add drive to existing LVM SLES 11 SP 1

2012-03-14 Thread Tom Duerbusch
I have an existing LVM that is near out of space.
I created it with the defaults that came with SLES 11 SP 1.

Now I need to add a drive to the LVM pool.  But there doesn't seem to be an 
option to add a volume to the pool.

I have done the same thing with SLES 8, 9 and 10, so it is not like I don't 
have an understanding of what is needed.

So, I'm wondering if SLES 11 SP 1 just didn't include that option by mistake,
or if the defaults changed to enable striping, or something else that prevents
just adding a disk to the pool, that I didn't pay attention to.

I'm now on the tangent of bringing up a test SLES 11 SP 1 system that I can 
crash and/or destroy while playing around on adding a pack to an existing LVM.  
But just in case it is something simple, it is better to ask the collective, 
before I spend the hours on researching the problem.

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Oracle in virtual environments

2012-02-24 Thread Tom Duerbusch
All the Oracle performance documentation is based on cheap CPU, cheap memory,
and expensive I/O.
Mainframes are cheap I/O, expensive CPU (relatively), and expensive memory
(relatively).

I run some small production Oracle systems (10g) in 600 MB, with one at 750 MB.
I use OEM (Oracle Enterprise Manager) to see if they are getting good service
time.

I conserve memory in order to have more machines running.  Yes, I force more
I/O, but the FICON-attached DS6800 sure seems to be handling it.  CPU is
running 45 to 50%.  Sure, out of the 45 Linux guests, most are relatively
idle.

Two Oracle production machine.
Two Oracle test machines (same table structure as production).
Two Oracle development machines (where table structures may be different).
One Oracle DBA machine (for me to test with).
Bunch of Samba servers (production and test).
Bunch of FTP servers (production and test).
One DB2 Connect production server.
One DB2 Connect test server.
Few NFS servers (production and test).

So my Oracle machines are not barn burners.  They get the job done and everyone 
is happy.
OEM shows we spike around 10,000 I/Os per second being serviced thru the SGA.  
The Linux guest isn't swapping and VM isn't paging.

You don't need 4 GB for Oracle, but if I were on a platform where it was hard to
add memory, I would also start at 4 GB.  But for a VM user: start low, and
adjust the VM guest size and the SGA to suit what you need.
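
As a sketch of what "start low" can look like in the instance parameter file.
The sizes are illustrative only, not taken from the post; filesystemio_options
is the standard Oracle parameter that enables direct and async I/O on Linux:

```
# init.ora fragment -- illustrative sizes, tune against OEM service times
sga_target           = 600M    # small SGA; let z/VM, Linux, and the DS6800 do the caching
pga_aggregate_target = 150M
filesystemio_options = setall  # direct I/O + async I/O
```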

Tom Duerbusch
THD Consulting

>>> Mauro Souza  2/23/2012 3:54 PM >>>
My personal experience:
I had a client using Oracle on Intel (a 16GB RAM, multicore Dell server),
and they had a couple queries they had to run every day, at the end of the
day. The job was taking 29 minutes. They exported the database, imported on
a Linux on Z, same 16GB, same number of virtual processors, ran the same
queries. Result? 31 minutes.
They asked why the mighty mainframe was losing for an Intel box, and
mainframe had a lot of resources, that kind of talk. So we sit on the
console and began changing things.

We put only 3GB of memory on the Oracle database.  Changed the I/O to
direct I/O, async I/O.  I don't remember exactly now, but I think we
configured about 20 I/O slaves, and set the query to run.  The DBA said we
were insanely crazy, taking memory away from a database server and
disabling cache.  9 minutes later, the query was done.  He said that the
query must have failed; 9 minutes was too fast.  So he ran it again.  It took
9 minutes again, and everything was all right.  And we asked what kind of
sorcery that was...

So don't be shy: take memory away from the database.  Leave something close to
1GB free for Linux, and the rest can go to SGA/PGA.  zVM already has
cache.  The control unit has cache.  The physical disk has cache.  Linux
doesn't need more caching.  Measure status (use the table statistics, get the
data from Oracle Enterprise Manager), change memory, I/O, and Oracle parameters,
and check the performance data again.

Mauro
http://mauro.limeiratem.com - registered Linux User: 294521
Scripture is both history, and a love letter from God.


On Thu, Feb 23, 2012 at 12:45 PM, van Sleeuwen, Berry <
berry.vansleeu...@atos.net> wrote:

> > -Original Message-
> > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> > Damian Gallagher
>
>
> > I have worked on numerous migration projects, where we have taken the
> > Win/*nix system with hundreds of GB of SGA, and run it on Linux on IBM
> > zSeries with much less memory - my usual recommendation is to reduce by
> > 90% to start with, and we generally get there or better - in common with
> > this thread it takes a while to convince the DBAs :-). I have the
> numbers,
> > AWR, throughput etc from a specific project in front of me, so this is
> no idle
> > statement. Add hugepages to the mix, and it just gets better.
> >
>
> The question would also be how to convince them when even Oracle itself
> provides questionable or just plain wrong recommendations.
>
> The installation script demands 4G of memory, while it can be installed
> within 1.2G.  It demands 1G+ of /tmp space, even though it only uses 120M.
>
> The machine needs at least 1G per database, preferably more, according to
> the recommendations.  We have proven this to be wrong on a system with 2
> databases within 1G.  And it still performs very well.
>
> One of our production databases had 'bad' performance, and the Oracle
> tools recommended sizing the SGA to 10G (instead of 3.5G).  And time and time
> again we prove the problem to be a bad user program or bad data.
>
> Even redbooks do no better in this respect, for any product for that
> matter.  They tend to present a "this is how we did it" instead of "these are
> the requirements".  Often they disregard other requirements such as multiple
> customer networks, security ru

Re: bogomips

2012-02-22 Thread Tom Duerbusch
In back of my mind, I keep on thinking:

Both UNIX (BSD) and LSD came out of University of California, Berkeley.
Coincidence?

Perhaps the person that came up with BOGOMIPS was having a bad trip?

Tom Duerbusch
THD Consulting

>>> Michael MacIsaac  2/22/2012 12:23 PM >>>
> What was BOGOMIPS ever used for?
Recurrent discussions about their uselessness? :))

"Mike MacIsaac"(845) 433-7061

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: bogomips

2012-02-22 Thread Tom Duerbusch
Leads to another tangent

What was BOGOMIPS ever used for?
I could see, back on slower Intel processors (486), you might need it to
calculate the number of times you have to spin a processor to produce a delay
of xx milliseconds.

Is there any near-modern software that uses it now, whether on 390
processors or other platforms?

Tom Duerbusch
THD Consulting
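
That delay-loop use is, roughly, how the number still works: by the kernel's
convention one trip through the calibration loop counts as 2 "bogo-instructions",
so BogoMIPS translates into loop iterations per delay.  A sketch, where the
figure of 1000 BogoMIPS is just an assumed example:

```shell
# Size a busy-wait loop from a BogoMIPS figure.
# Convention: BogoMIPS ~= million bogo-instructions/second, 2 per loop
# iteration, so iterations = BogoMIPS * delay_us / 2 for a delay of
# delay_us microseconds.
bogomips=1000     # assumed value; on real hardware, read it from /proc/cpuinfo
delay_us=1000     # desired delay: 1000 us = 1 ms
loops=$(( bogomips * delay_us / 2 ))
echo "$loops"     # 500000 iterations for a 1 ms delay at 1000 BogoMIPS
```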

>>> Alan Altmark  2/21/2012 2:52 PM >>>
On Tuesday, 02/21/2012 at 03:27 EST, William Carroll
 wrote:
> Isn't a BOGOMIP just a calculated loop value (i.e., how many times through
> the loop) used to do some internal timing?
> At best it gives a coarse indicator that one processor is faster, but I
> would say relative to the same architecture, as one may favor the loop
> better than others.
> So not all BOGOMIPS are equal.

It doesn't really matter precisely what a BOGOMIP is, except to say that
it's a measure of the apparent speed of the instruction mix used to
calculate the number at the moment it was calculated.  Outside of a lab,
you can't even compare two consecutive bogomips calculations.  Beyond
that, it has no meaning except to generate discussion about how
meaningless it is.  :-)

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com 
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Question about zVM and MF configuration

2012-01-04 Thread Tom Duerbusch
In my experience, it is Linux that interferes with zVSE, but that is
only in a scarce-memory configuration.

Consider that zVSE workloads really don't change the amount of memory
needed month to month.
However, add a WebSphere Linux image (take any "large to you" Linux
application) to a VM system that is actively using most of
its memory, and you page.  You page everyone else out.  Oops.
Did I do a SET RESERVED on my production guests?  No?
Did I have plenty of paging devices available for this new spike?  No?
What is the largest partition in VSE that will feel the effect of being
paged out?  CICS/TS.
Users are unhappy until the system settles down in 5-10 minutes.

Other reasons for running a system under an LPAR instead of zVM:
1.  Needs more resources than zVM can supply (i.e. more than 256 GB
real storage).
2.  Needs hardware that zVM can't manage (sysplex timer, for example).
3.  If a guest is running at 90%+ CPU, you might want to move it out from
under zVM and dedicate processors to it.

If you are not having any problems with performance, then running under
a single LPAR for most things is great.

You might want to have a small LPAR for testing (VM installs, operator
training, etc.) or to provide Live Guest Relocation of your near-24x7
Linux images when you apply maintenance or have any other scheduled
outage of your primary zVM system.

Tom Duerbusch
THD Consulting
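
The safeguards mentioned above can be sketched as CP commands; the guest name
LINPROD1 and the page count are hypothetical examples, not from the post:

```
#CP SET RESERVED LINPROD1 131072   reserve ~512 MB (131072 4K pages) for a production guest
#CP QUERY ALLOC PAGE               check paging-space headroom before adding a big guest
#CP INDICATE LOAD                  overall CPU and paging picture
```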
>>> Michael Simms  1/4/2012 3:50 PM >>>
I hope everyone had a safe and happy one Holiday Season.

I need some guidance, advice and/or ammunition on an issue that has
come up regarding mainframe configuration.  

I have found myself having to defend the way we are now configured vs. a
co-worker who has come back from class saying his instructor said we should run
2 LPARs, one with zVM and zVSE, and the other to house our production zLinux
and DB2 images.  My co-worker has no mainframe experience and does not know our
hardware or complete software configuration.  Apparently my co-worker fears VSE
‘interference’ with his zLinux images, as well as fearing that he would crash
the zVM system.  Not sure what he has in his plans that would cause such zVM
instability.
 
We currently have: z114, 1 partial CP, 1 IFL, 24GB storage (18/6), 2 FICON
cards, 2 OSA cards, 1 zVM LPAR with both CPUs, zVM V6.1, zVSE, zLinux of
various SuSE flavors, and DB2 running in several or more zLinuxes.  Don’t know
how many DB2 zLinux images yet.  We already have a couple of production zLinux
images and are exploring another set of zLinux that would maybe use the zVSE
VSAM Redirector and DB2.  I suggest that we add to our current configuration,
as it would better share resources such as memory and I/O.  I also feel that it
would be easier to manage 1 LPAR instead of 2 LPARs and all their various
pieces and parts, which would also include zVM test machines and 2 test VSE
machines.
 
I have tried to explain how mainframe architecture and zVM have been designed
as a sharing environment while at the same time protecting against influences
from any given guest machine, provided the configuration is set up just right.
I might have partially agreed with his instructor had zVM not come to support
all manner of CPUs in recent years, for example accommodating both CPs and
IFLs.  We are also on a limited budget and I don’t know if we’d be able to
purchase more storage or chpids.  Based on my years of experience, I have
poked, prodded, and received advice for our system to where we have great
performance today, for both traditional and non-traditional workloads.
 
Does anyone have suggestions/points to argue one way or the other?  Do you
have some examples of something similar, one way or another, to what we have or
will soon have?  You probably would like some more input variables?  Just let
me know and I’ll provide.

I appreciate any and all feedback!

Thanks.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: Undeleting files

2011-12-16 Thread Tom Duerbusch
Thanks for the options.

I ended up using extundelete.

First, flashcopy the 3390-3 that this directory was on.  Never take a
chance on making things worse; back up first.

The files lost were VSE Virtual Tape files.  I could mount the real
3420 volumes again and copy them to virtual tape, so that was my fall
back solution.

When I undeleted the files to an empty 3390-3, it filled up the volume.
So I ended up allocating a 3390-9 to hold the undeleted files (it used
about 35% of the mod 9).

extundelete came up with over 300 files.  Most of them failed in the
undelete process.  This seemed to be due to older files that had been
deleted and whose space was later reused, most likely by my virtual
tapes.  The process didn't produce any filenames; everything was
recreated as "file.xxx", where xxx is a number, perhaps a directory
block ID or something.

So I ended up running each file through tapemap to see what was on it.
Also, the tape HDR label would tell me the volser, which is what I used
as a filename.

Long process.  It took about 4 hours to recover 59 tape files and map
them, which also validated that there was a trailing tapemark.

I had an existing "dir" listing of the directory I accidentally deleted,
so I knew what files should be there and their file sizes.

Only 1 file was not recoverable.  I think that was pretty good.

Good work for a Friday.

Tom Duerbusch
THD Consulting
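The recovery flow described above can be sketched roughly as follows. This is a hypothetical illustration, not Tom's actual job: the extundelete invocation is commented out (it needs a real, unmounted copy of the device), the file contents and label column offsets are made-up stand-ins, and the `.aws` naming is an assumption; Tom used tapemap rather than slicing the HDR label directly.

```shell
# Sketch of the recovery flow. The extundelete call itself is commented
# out; the rename loop below works on whatever lands in $out.
set -e
work=$(mktemp -d)
out="$work/recovered"
mkdir -p "$out"

# Step 1 -- always against the flashcopy, never the original volume:
#   extundelete /dev/dasdc1 --restore-all -o "$out"

# extundelete emits nameless files as file.NNN; simulate two of them
# with fake VOL1 labels (volser assumed to sit in columns 5-10):
printf 'VOL1ABC123' > "$out/file.101"
printf 'VOL1XYZ987' > "$out/file.102"

# Step 2 -- pull the volser out of each recovered image and use it as
# the filename (the real job used tapemap to identify each tape):
for f in "$out"/file.*; do
    volser=$(cut -c5-10 "$f")
    mv "$f" "$out/$volser.aws"
done
ls "$out"    # -> ABC123.aws  XYZ987.aws
```

The key point the sketch preserves is ordering: copy first, undelete into a separate (larger) volume, and only then try to identify and rename the anonymous output.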

>>> Rafael Godinez Perez  12/15/2011 4:05 AM >>>
El 14/12/11 23:00, Tom Duerbusch escribió:
> While I know the answer to this question generally, I wonder if
this can be done in a very specific situation.
>
> I have disk "/dev/dasdb1", formatted with ext3.
> There is one directory on it.
> That directory had about 40 files on it of a few megabytes each.
> This is SLES 10 SP 2.
>
> I connected to the Linux image with WINSCP.
> I brought up that directory in one pane and in the other pane, I
brought up my thumb drive.
> I wanted to copy the files to my thumb drive.
>
> Instead of copying the files, I thought syncing the directories would
be easier.
> Well, I synced an empty directory to the Linux directory.  All files
are gone.
>
> In most cases, recovering deleted files is very dependent on whether
any of the space or directory structure has been reused.  In this case,
the space hasn't been reused, but I don't know if the deletion of 40
files, one at a time, would reuse the directory blocks or just mark them
available.
>
> Before I go too far in this....
> Am I just out of luck?
> Or is there a decent chance I can recover these files?
>
> Thanks
>
> Tom Duerbusch
> THD Consulting
>
>
--
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>
--
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
You may want to try this tool.
It worked for me many times.

http://www.cgsecurity.org/wiki/PhotoRec_Step_By_Step 

HTH,
Rafa.

-- 
Rafael Godínez Pérez
Red Hat - Senior Solution Architect EMEA
RHCE, RHCVA, RHCDS
Tel: +34 91 414 8800 - Ext. 68815
Mo: +34 600 418 002

Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, Planta
3ºD, 28016 Madrid, Spain
Dirección Registrada: Red Hat S.L., C/ Velazquez 63, Madrid 28001,
Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Undeleting files

2011-12-14 Thread Tom Duerbusch
While I know the answer to this question generally, I wonder if this can be 
done in a very specific situation.

I have disk "/dev/dasdb1", formatted with ext3.
There is one directory on it.
That directory had about 40 files on it of a few megabytes each.
This is SLES 10 SP 2.

I connected to the Linux image with WINSCP.
I brought up that directory in one pane and in the other pane, I brought up my 
thumb drive.
I wanted to copy the files to my thumb drive.

Instead of copying the files, I thought syncing the directories would be easier.
Well, I synced an empty directory to the Linux directory.  All files are gone.

In most cases, recovering deleted files is very dependent on whether any of the 
space or directory structure has been reused.  In this case, the space hasn't 
been reused, but I don't know if the deletion of 40 files, one at a time, would 
reuse the directory blocks or just mark them available.

Before I go too far in this
Am I just out of luck?
Or is there a decent chance I can recover these files?

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: FICON-Attached 3590 Tape Drives Not Detected

2011-11-09 Thread Tom Duerbusch
When I use 3590 tapes here, this is what I do:

vary on 5b1 (vary it off first on the other lpar)
attach 5b1 LINUX72

From Linux:

lstape
chccwdev -e 0.0.05b1  (set drive online)
lstape
mtst -f /dev/ntibm0 rewind
mtst -f /dev/ntibm0 fsf 1
mtst -f /dev/ntibm0 compression 1

However, you have to install the mtst tar file (part of the IBM code) to use 
the mtst command.
Without mtst, I don't believe you can turn compression on/off on your 
drives.

Tom Duerbusch
THD Consulting

>>> Edward Jaffe  11/5/2011 4:50 PM >>>
Hello,

I have RHEL 6 Linux running as a guest under z/VM 6.1 on our z10 BC. It has the
following 3590-H tape drives assigned in the z/VM directory:

DEDICATE 1500 1500 MULTIUSER
DEDICATE 1501 1501 MULTIUSER
DEDICATE 1502 1502 MULTIUSER
DEDICATE 1503 1503 MULTIUSER

A query command shows the drives are attached to the Linux guest:

q 1500-1503
TAPE 1500 ATTACHED TO ZLINUX1  1500 R/W  NOASSIGN MULTIUSER
TAPE 1501 ATTACHED TO ZLINUX1  1501 R/W  NOASSIGN MULTIUSER
TAPE 1502 ATTACHED TO ZLINUX1  1502 R/W  NOASSIGN MULTIUSER
TAPE 1503 ATTACHED TO ZLINUX1  1503 R/W  NOASSIGN MULTIUSER

However, the lstape command shows no drives:

FICON/ESCON tapes (found 0):
TapeNo  BusID  CuType/Model DevType/Model   BlkSize State   Op  MedState

SCSI tape devices (found 0):
Generic DeviceTarget   Vendor   ModelType State

I found the driver here:

/lib/modules/2.6.32-131.17.1.el6.s390x/kernel/drivers/s390/char/tape_3590.ko

What am I missing? How can I get Linux to recognize these tape drives?

--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
310-338-0400 x318
edja...@phoenixsoftware.com 
http://www.phoenixsoftware.com/ 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Window question

2011-09-26 Thread Tom Duerbusch
dir z: /s

Tom Duerbusch
THD Consulting   

>>> Eddie Chen  9/26/2011 3:56 PM >>>
Does anyone know the command in Windows to list every file/directory
under my Z: drive?

Similar to the Linux command "ls -lR", which lists all the
files/directories under the file system.

Thanks

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Mark Post
Sent: Monday, September 26, 2011 3:37 PM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: RH NFS Server

>>> On 9/26/2011 at 03:04 PM, "Dazzo, Matt"  wrote: 
> I finally got some messages to the /var/log/messages file. What might these 
> means? Thanks Matt
> 
> Sep 26 14:54:35 lntest1 mountd[1264]: authenticated mount request from 
> 27.1.39.74:1023 for /home/matt (/home/matt)
> Sep 26 14:54:35 lntest1 kernel: nfsd: request from insecure port 
> (27.1.39.74:1062)!
> [root@lntest1 log]#

It means that the z/OS client initiated the mount request on outgoing port 
1062.  Since only root can open ports in the range 0-1023, those are called 
"secure ports," and anything else is referred to as "unprivileged" or 
"insecure" ports.  Some NFS server implementations don't like mount requests 
coming in on unprivileged ports, since it means that some process that might 
not be running as root has made the request.

I don't recall if the z/OS NFS client can be made to only make requests on 
secure ports or not.  If not, then you'll have to tell your NFS server to 
accept requests on unprivileged ports.
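For reference, on most Linux NFS servers the change described here is the `insecure` export option in /etc/exports; a sketch only, with a hypothetical path and client subnet (the thread does not show the actual export line):

```
# /etc/exports -- allow mount/NFS requests from unprivileged (>1023)
# source ports for this export:
/home/matt   27.1.39.0/24(rw,sync,insecure)
```

After editing, `exportfs -ra` re-exports with the new options.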


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/ 
Please consider the environment before printing this email.

Visit our website at http://www.nyse.com 



Note:  The information contained in this message and any attachment to it is 
privileged, confidential and protected from disclosure.  If the reader of this 
message is not the intended recipient, or an employee or agent responsible for 
delivering this message to the intended recipient, you are hereby notified that 
any dissemination, distribution or copying of this communication is strictly 
prohibited.  If you have received this communication in error, please notify 
the sender immediately by replying to the message, and please delete it from 
your system.  Thank you.  NYSE Euronext.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: RSH RPM

2011-09-19 Thread Tom Duerbusch
I'm running SLES 11 SP 1 and I didn't have any problem finding the RSH server.

yast
Network Services
Network Services (xinetd)
   Service:  exec
Toggle Status
rsh-server installation:  Install
ok
Toggle Status
Finish
quit

Yes, it is insecure.  But if you are only bouncing around within the mainframe 
(all using the same subnets), you never get out on the wire.

Tom Duerbusch
THD Consulting
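Outside of yast, the same result can be had by editing the xinetd service file directly; this is a sketch under typical SLES packaging (file path and server binary location are assumptions worth verifying on your system):

```
# /etc/xinetd.d/rsh -- enable the rsh ("shell") service after
# installing rsh-server, then restart xinetd (rcxinetd restart):
service shell
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/sbin/in.rshd
        disable         = no
}
```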

>>> saurabh khandelwal  9/19/2011 4:32 AM >>>
Hello,
 I want to configure RSH/RLOGIN  into my SUSE 11 z/linux setup. and
I am looking for rsh-server-0.17-25.4  rpm, which will help me to run rsh
server.

Can you help me with the website to download it for z/Linux RPM.


--
Thanks & Regards
Saurabh Khandelwal

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: excellent explanation of cloud computing....

2011-09-14 Thread Tom Duerbusch
I don't really know.  I never bothered to look it up.

Back in the '73/'74 time frame, a company in St. Louis was getting rid of their 
old teletypes.  I seem to remember they were going to the newer Teletype Model 
43??  Anyway, he pointed to the ones without paper tape and said they were 
ASRs, and pointed to the ones with paper tape and said they were KSRs.  I 
assumed he knew what he was talking about.

But the paper tape models came in 5-level (Baudot?) and 8-level (something) 
flavors.

The 110 baud modem, which weighed 40-50 pounds, made up most of the stand it 
sat on.  We hooked it into the phone system, which was illegal back then, and I 
could dial back into the mainframe (OS/VS1 at that time) and use TSO.

Printing out a listing on the teletype in my apartment was sure noisy.  

Tom Duerbusch
THD Consulting

>>> "McKown, John"  9/14/2011 1:16 PM >>>
I thought the paper tape TTYs were called ASR (Automatic Send Receive?) instead 
of KSR (Keyboard Send Receive). I remember using one back in high school at a 
special program at TCU in Ft. Worth. I was impressed. Not with the KSR, but 
with the professor who apologized for being late - he was trading in his 2 year 
old Rolls Royce for a new one.

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * 
john.mck...@healthmarkets.com * www.HealthMarkets.com 

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM

 

> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On 
> Behalf Of Tom Duerbusch
> Sent: Wednesday, September 14, 2011 12:43 PM
> To: LINUX-390@VM.MARIST.EDU 
> Subject: Re: excellent explanation of cloud computing
> 
> And if you really wanted to keep a good backup, you used the 
> mylar tape.
> 
> I still have my Teletype 33 with paper tape unit (I think it 
> was a KSR model) along with supplies.
> I'm good to go when the rapture occurs!
> 
> Tom Duerbusch
> THD Consulting
> 
> >>> Paul Dembry  9/14/2011 12:18 PM >>>
> Reminds me of the old days when I used a teletype and telephone modem,
> although back then I had a paper tape backup for my programs.
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO 
> LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO 
> LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
> 
> 
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: excellent explanation of cloud computing....

2011-09-14 Thread Tom Duerbusch
And if you really wanted to keep a good backup, you used the mylar tape.

I still have my Teletype 33 with paper tape unit (I think it was a KSR model) 
along with supplies.
I'm good to go when the rapture occurs!

Tom Duerbusch
THD Consulting

>>> Paul Dembry  9/14/2011 12:18 PM >>>
Reminds me of the old days when I used a teletype and telephone modem,
although back then I had a paper tape backup for my programs.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Regina on SLES 11 SP 1

2011-08-31 Thread Tom Duerbusch
Thanks Mark

That was it.
I would have thought that my current directory would be searched prior to 
searching the path.

I didn't need to specify a path for the execs I created on SLES 7 through SLES 10.
I guess it is possible that I accidentally, always, put the execs in a directory 
that was in the path, but that is kind of stretching it.

Anyway, I can work with specifying the path.

Thanks

Tom Duerbusch
THD Consulting
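The surprise here (the current directory not being searched) mirrors standard Unix command lookup, which Regina's script search apparently follows as well. A quick, hypothetical illustration with a plain shell script rather than a REXX exec:

```shell
# The shell does not look in the current directory for a bare command
# name unless '.' is in PATH -- hence the './' fix:
demo=$(mktemp -d)
cd "$demo"
printf '#!/bin/sh\necho hi\n' > thd01.sh
chmod +x thd01.sh

thd01.sh 2>/dev/null || echo "bare name: not found"
./thd01.sh    # explicit path works
```

Putting the script's directory in PATH (or always prefixing `./`) are the two standard workarounds, matching Mark's suggestion for the Regina case.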

>>> Mark Post  8/31/2011 4:18 PM >>>
>>> On 8/31/2011 at 05:05 PM, Tom Duerbusch  wrote: 
> I downloaded and mounted the sdk for SLES 11 SP 1 and installed regina.
> 
> I then code a simple rexx exec to validate that rexx is functional.  But 
> when I execute it, I get:
> 
> linux74:/home/duerbuscht # regina thd01.rexx
> Error 3 running "thd01.rexx": Failure during initialization
> Error 3.1: Failure during initialization: Program was not found

Try doing "regina ./thd01.rexx" or putting thd01.rexx somewhere in your PATH.


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Regina on SLES 11 SP 1

2011-08-31 Thread Tom Duerbusch
I downloaded and mounted the sdk for SLES 11 SP 1 and installed regina.

I then coded a simple REXX exec to validate that REXX is functional.  But when I 
execute it, I get:

linux74:/home/duerbuscht # regina thd01.rexx
Error 3 running "thd01.rexx": Failure during initialization
Error 3.1: Failure during initialization: Program was not found

The following is the rexx code:

linux74:/home/duerbuscht # cat thd01.rexx
/* regina test 01 */
do i = 1 to 10
say 'hi'
end

And the version of Regina is:

linux74:/home/duerbuscht # regina -v
REXX-Regina_3.4(MT) 5.00 30 Dec 2007

I am stumped.

So the first question is: does anyone have Regina working on a SLES 11 SP 1 
system?
The iso that I downloaded from Novell is:
SLE-11-SP1-SDK-DVD-s390x-GM-DVD1.iso   
Did you use that one, or did you get Regina some other way?

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Novell Netware to Samba

2011-08-29 Thread Tom Duerbusch
I don't know.  I do know enough to know that I don't know enough to know if I 
know enough to continue...
Hence my questions to the group.

Per the documentation, Samba can use LDAP.
And from your first web reference, it sure looks real easy.  And much easier 
now that I'm looking at my Netware Server properties.  I think I know what gets 
filled in.  Can it really be that simple?

Tom Duerbusch
THD Consulting

>>> Patrick Spinler  8/29/2011 2:20 PM >>>

It's been a while since I've worked much with Samba, but just a thought:

I had thought that Samba could keep user info in and do authentication
to an LDAP directory, no  (e.g. http://goo.gl/3vx1E or
http://goo.gl/V7Sro or http://goo.gl/eUHXM)?  Given that, just load
eDirectory with the appropriate samba ldap schema, update your user
definitions to include the samba attributes, and point samba servers at
your eDirectory for user info and authentication.  Voila.

-- Pat
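The smb.conf wiring Pat describes might look roughly like the following; this is a sketch, and the server name, suffixes, and admin DN are hypothetical placeholders, not values from this thread:

```
# smb.conf sketch: point Samba's account database at an LDAP
# directory (such as eDirectory with the Samba schema loaded):
[global]
   passdb backend    = ldapsam:ldap://edir.example.com
   ldap suffix       = o=example
   ldap user suffix  = ou=users
   ldap group suffix = ou=groups
   ldap admin dn     = cn=sambaadmin,o=example
   ldap ssl          = start tls
```

Samba then reads and writes its sambaSamAccount attributes in the directory, which is what makes the single password store possible.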

On 8/29/11 12:54 PM, Tom Duerbusch wrote:
> To clarify...
> 
> Unless I'm forced to, which would terminate the project, we are not looking 
> for a complete replacement for Netware.  At this point, it would be "cool" to 
> be able to add in a Samba server to the existing authentication process.
> 
> We don't do print serving.  As far as I know, our PCs only print to a 
> JetDirect attached printer (i.e. LPR).
> 
> When I look at the properties for the Netware FileServer, IPX: is shown as 
> N/A.  IP: had an IP address associated with it.  I'm guessing that means we 
> are doing authentication over IP.
> 
> Right now, we have 9 Samba servers running.  Some in production and others in 
> test.
> Due to their limited number of users, I didn't need to worry about 
> authentication.
> Our "manual" server, everyone has read access to it.
> Another server, is part of an application which uses UNC for viewing.  The 
> end users don't have any idea they are accessing Samba.  All 
> additions/updates are done under the covers by the application.
> I'm just in the process of rolling out Samba servers to replace LANRES/VSE.  
> Couple dozen users.  Not a security headache.  They do have to enter in their 
> Samba password if it is not the same as their Netware password (which it 
> isn't).
> 
> And with the success of these Samba servers, management is asking if we can 
> replace the 2,000 Netware users with Samba.  The big sticking point that I 
> know of is that I need to be able to sync the Samba password with their Novell 
> password.  Lack of that function affects everyone.  I would like to be able 
> to keep using Netware to define the users' directories, but that is more of a 
> training issue than a real requirement.  We can use SWAT to manage Samba 
> users if necessary.
> 
> Thanks for the suggestions.
> 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Novell Netware to Samba

2011-08-29 Thread Tom Duerbusch
To clarify...

Unless I'm forced to, which would terminate the project, we are not looking for 
a complete replacement for Netware.  At this point, it would be "cool" to be 
able to add in a Samba server to the existing authentication process.

We don't do print serving.  As far as I know, our PCs only print to a JetDirect 
attached printer (i.e. LPR).

When I look at the properties for the Netware FileServer, IPX: is shown as N/A, 
and IP: has an IP address associated with it.  I'm guessing that means we are 
doing authentication over IP.

Right now, we have 9 Samba servers running.  Some in production and others in 
test.
Due to their limited number of users, I didn't need to worry about 
authentication.
Our "manual" server, everyone has read access to it.
Another server, is part of an application which uses UNC for viewing.  The end 
users don't have any idea they are accessing Samba.  All additions/updates are 
done under the covers by the application.
I'm just in the process of rolling out Samba servers to replace LANRES/VSE.  
Couple dozen users.  Not a security headache.  They do have to enter in their 
Samba password if it is not the same as their Netware password (which it isn't).

And with the success of these Samba servers, management is asking if we can 
replace the 2,000 Netware users with Samba.  The big sticking point that I know 
of is that I need to be able to sync the Samba password with the Novell 
password.  Lack of that function affects everyone.  I would like to be able to 
keep using Netware to define the users' directories, but that is more of a 
training issue than a real requirement.  We can use SWAT to manage Samba users 
if necessary.

Thanks for the suggestions.

Tom Duerbusch
THD Consulting

>>> David Boyes  8/28/2011 1:59 AM >>>
> > The eDirectory product is only available on Netware (as you have now)
> > and Open Enterprise Server (OES) running on SLES on Intel/AMD systems.
> > If you're going to be running OES for eDirectory, you might as well
> > use the other parts of OES on that system, which include the various
> > file server functions that people are used to from Netware.  I believe
> > this does involve Samba, but I'm not all that familiar with OES.

Well, there's 3 things to solve here: authentication, file service, and 
printing. 

You're going to have to touch all the endpoints for all three things. 

Best solution (without trying to convince Attachmate to port OES) is: 

1) Use Active Directory or Kerberos/LDAP to replace Netware authentication. 
2) Use Samba and winbind to manage the file service component
3) Use CUPS to replace the print functions. 

AD is essentially Kerberos/LDAP, but with a pretty face on it. If you have 
extensive reliance on Windows, you might as well use AD too. It addresses most 
of the issues that made Netware user management and authentication a PITA in 
the past, although it has its own evils. 

The winbind piece will cause Samba to deal properly with the uid and gid issues 
-- if you convert to AD using the netware to AD tools, most of the ACLs will 
still work properly. 

CUPS will probably need some tinkering with if you have non-mainstream (eg, 
non-HP) printers, especially Canon printers (for some reason, they seem to hate 
Linux and Mac users and don't publish good PPF files for their printers).

Contact me offlist if you want to discuss it. 

-- db




> 
> My knowledge is a bit rusty on this but let me clarify a few things.
> 
> eDirectory is a standalone product althought it is included in many Novell
> products like OES.  eDirectory can run on a number of OSes like Linux, 
> Solaris,
> AIX and Windows and is an LDAP based directory.  eDirectory only runs on
> Linux on x86/x86_64 based systems.
> 
> One option is that you could keep the Netware around and connect Samba
> via LDAP.  eDirectory on Netware 5 is old and that might cause problems.
> 
> A better option would be to get eDirectory installed on Linux, join the
> directory tree on the Netware server so it can become a replica, and then
> promote the directory on the Linux box so the Netware server could be
> turned off.  Not that I want to see a Netware box get turned off but maybe
> the organization is more comfortable with Linux.
> 
> A couple of ideas for you to contemplate
> 
> Mike
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions, send email to
> lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> --
> For more information on Linux on System z, visit http://wiki.linuxvm.org/ 


Novell Netware to Samba

2011-08-26 Thread Tom Duerbusch
I've been asked to look into something that I'm not sure how much I know 
about...  But that is par for the course.

Somewhere, we have a Novell Netware File Server (apparently NetWare 5.70.05).
The person who really knew and installed this server left over a year ago, and 
there is really not going to be a replacement for the job function/title; for 
that matter, the position will not be filled either.

This fileserver supports the F: drive on some 2,000 PCs.  It represents the 
users private network space.
The SAN that is dedicated to this is about 2TB.

I've been asked to see what it would take to convert this from Netware to Samba 
on our mainframe (SUSE 11 SP 1).

Well, we would need to add in some drives to our DS6800 (ficon attached).
I believe we have sufficient CPU and memory in to support this.

I've never tried, but there is documentation about some sort of automatic way 
of adding users to Samba, but I don't know if this applies to my configuration.

Of the many security systems we have, it looks like NetWare is using eDirectory.
I don't know if other systems are also using eDirectory.  If they are, then it 
would be nice to have Samba keep using eDirectory.  I might have missed it, but 
I haven't found any documentation about Samba using eDirectory.
(I didn't know until 10 minutes ago that we are using eDirectory as I thought 
we were using Microsofts something...something.)

If this conversion can be somewhat easily done, then I will keep going forward 
with this research.  But if it is going to be complicated, I'm not so 
interested.  Of course, management has a lot to say about what I'm interested 
in...

BTW, on the Novell login, we enforce password changes, and it syncs with the 
Windows password.
When users mount the F: drive, they are never asked for a userid/password.  
We don't want to lose that feature, with 2,000 users.

On another Samba server I have, which isn't connected to eDirectory, when 
smbpasswd has your current Novell password, mount requests are handled without 
authentication prompting.  When smbpasswd doesn't have the current Novell 
password, it prompts for the password Samba knows about (not your current 
Novell password).  So it seems that, within Windows, a mount will pass your 
current Windows userid/password (which is kept in sync) to Samba.

Any guidance would be appreciated.

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: disabled IPV6, now down.

2011-08-03 Thread Tom Duerbusch
mv -vi /etc/modprobe.d/50-ipv6.conf /root/   
`/etc/modprobe.d/50-ipv6.conf' -> `/root/50-ipv6.conf'   
linux76:~ # modprobe ipv6
NET: Registered protocol family 10   
lo: Disabled Privacy Extensions  
linux76:~ # modprobe qeth-l3 
qeth.2c6def: register layer 3 discipline 
linux76:~ #  


rcnetwork restart
Shutting down network interfaces:
Shutting down service network  .  .  .  .  .  .  .  .  ...done   
Hint: you may set mandatory devices in /etc/sysconfig/network/config 
Setting up network interfaces:   
Setting up service network  .  .  .  .  .  .  .  .  .  ...done   
SuSEfirewall2: Setting up rules from /etc/sysconfig/SuSEfirewall2 ...
SuSEfirewall2: Warning: no interface active  
SuSEfirewall2: batch committing...   
SuSEfirewall2: Firewall rules successfully set   
linux76:~ #  

ifconfig
loLink encap:Local Loopback 
  inet addr:127.0.0.1  Mask:255.0.0.0   
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1  
  RX packets:2 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0  
  collisions:0 txqueuelen:0 
  RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)

linux76:~ # 

So I rebooted the server, and now I'm up and running:

ifconfig -a
eth0  Link encap:Ethernet  HWaddr 02:00:01:00:00:14
  inet addr:192.168.193.176  Bcast:192.168.195.255  Mask:255.255.252.0 
  inet6 addr: fe80::200:100:100:14/64 Scope:Link   
  UP BROADCAST RUNNING MULTICAST  MTU:1492  Metric:1   
  RX packets:219 errors:0 dropped:0 overruns:0 frame:0 
  TX packets:209 errors:0 dropped:0 overruns:0 carrier:0   
  collisions:0 txqueuelen:1000 
  RX bytes:21915 (21.4 Kb)  TX bytes:87504 (85.4 Kb)   
   
loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0  
  inet6 addr: ::1/128 Scope:Host   
  UP LOOPBACK RUNNING  MTU:16436  Metric:1 
  RX packets:2 errors:0 dropped:0 overruns:0 frame:0   
  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0 
  collisions:0 txqueuelen:0
  RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)   
   
linux76:~ #

And back to my original problem of the:

MARTIAN SOURCE 192.168.195.255 FROM 192.168.193.176, ON DEV ETH0
messages...

It's the only image that is producing these messages.
I must have done something in my playing around to cause them...

Well, onward and downward!

Thanks

Tom Duerbusch
THD Consulting

>>> Mark Post  8/3/2011 12:18 PM >>>
>>> On 8/3/2011 at 12:47 PM, Tom Duerbusch  wrote: 
> No big deal in recreating this system.  But it would be an interesting 
> learning exercise in recovering from my failure.

mv -vi /etc/modprobe.d/50-ipv6.conf /root/
modprobe ipv6
modprobe qeth-l3


That might be enough.  Or, you may need to:
rcnetwork restart


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390

Re: disabled IPV6, now down.

2011-08-03 Thread Tom Duerbusch
Well, it didn't say it did anything:

modprobe ipv6 
linux76:~ #   


Tom Duerbusch
THD Consulting

>>> Scott Rohling  8/3/2011 12:04 PM >>>
Does a 'modprobe ipv6' have any effect?

Scott Rohling

On Wed, Aug 3, 2011 at 10:47 AM, Tom Duerbusch
wrote:

> I've been getting the "MARTIAN SOURCE 192.168.195.255 FROM 192.168.193.176,
> ON DEV ETH0" on one of my test machines for two months.
>
> So I decided to check it out and see if I can fix the problem.
>
> Per a Google search, I found a suggestion  which looked reasonable:
>
> More details would be needed.
> ifconfig -a
> route -n
> cat /etc/sysconfig/network/routes
>
> When I did them, I saw that IPV6 was up and running.  I thought some of my
> playing around caused IPV6 to be enabled.  I didn't realize that IPV6 comes
> up by default and we are not IPV6 on our network, so I decided to see what
> happens when I disable it.
>
> So, yast, network, network settings, Global Options tab, and disable IPv6,
> save it and reboot.
>
> Of course now, I can't get in (other than via the console).
> I lost my eth0 adapter:
>
> ifconfig  -a
> loLink encap:Local Loopback
>  inet addr:127.0.0.1  Mask:255.0.0.0
>  UP LOOPBACK RUNNING  MTU:16436  Metric:1
>  RX packets:2 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0
>  RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)
>
> linux76:~ #
>
>
> I tried to add eth0 back in:
>
> ifconfig eth0 add fe80::200:100:100:14/64
> No support for INET6 on this system.
> linux76:~ #
>
> But I don't know how to add support for INET6 via the command line
> interface.
>
> This is SLES 11 SP 1.
>
> No big deal in recreating this system.  But it would be an interesting
> learning exercise in recovering from my failure.
>
> Tom Duerbusch
> THD Consulting
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



disabled IPV6, now down.

2011-08-03 Thread Tom Duerbusch
I've been getting the "MARTIAN SOURCE 192.168.195.255 FROM 192.168.193.176, ON 
DEV ETH0" on one of my test machines for two months.

So I decided to check it out and see if I can fix the problem.

Per a Google search, I found a suggestion  which looked reasonable:

More details would be needed.
ifconfig -a
route -n
cat /etc/sysconfig/network/routes

When I did them, I saw that IPV6 was up and running.  I thought some of my 
playing around caused IPV6 to be enabled.  I didn't realize that IPV6 comes up 
by default and we are not IPV6 on our network, so I decided to see what happens 
when I disable it.

So, yast, network, network settings, Global Options tab, and disable IPv6, save 
it and reboot.

Of course now, I can't get in (other than via the console).
I lost my eth0 adapter:

ifconfig  -a  
loLink encap:Local Loopback   
  inet addr:127.0.0.1  Mask:255.0.0.0 
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:2 errors:0 dropped:0 overruns:0 frame:0  
  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0   
  RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)  
  
linux76:~ #   


I tried to add eth0 back in:

ifconfig eth0 add fe80::200:100:100:14/64  
No support for INET6 on this system.   
linux76:~ #

But I don't know how to add support for INET6 via the command line interface.

This is SLES 11 SP 1.

No big deal in recreating this system.  But it would be an interesting learning 
exercise in recovering from my failure.

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Showing a process running in BG

2011-08-01 Thread Tom Duerbusch
Thanks Scott.

That pretty well shows that my tar is, somehow, running in background.

root 30702  2.1  1.6   2920  1156 ?R11:57   0:01 tar -b 64 -vv -

The 'jobs' command doesn't show anything as I didn't start the 'tar' within my 
session.  
Although both my telnet session and the rexec are being executed under root, 
'jobs' reports only jobs started in your own session, not all background processes.
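For what it's worth, the foreground/background distinction does show up in ps output: a small sketch (Linux procps-style `ps` assumed) of the '+' flag Scott mentions, which marks the foreground process group and is absent for background jobs.

```shell
# Sketch (Linux procps assumed): a process running in the background
# carries no '+' in the STAT column; a foreground one does.
sleep 30 &                    # start a demo command in the background
ps -o pid,stat,comm -p $!     # its STAT field has no '+' suffix
kill $!                       # clean up the demo process
```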

In the short term, if I don't redirect the output to a file, the output is 
displayed in the VSE job that submitted the REXEC command.  I can save those 
printouts so we know what is on our backup tapes.  That is much better than 
having tar scan each of the tapes for what files are on them, or at the very 
least, much, much quicker.

Thanks for the help.
Boy, did I learn a lot of other stuff while researching this problem.

Tom Duerbusch
THD Consulting

>>> Scott Rohling  8/1/2011 11:38 AM >>>
The 'jobs' command lists all active or stopped jobs (in the current shell)..
 also -  ps -aux shows a '+' under the Status field if it's running in the
foreground.   With ps -ef the PID and PPID fields may help in determining
who called what.   Hope this helps.

Scott Rohling

On Mon, Aug 1, 2011 at 9:32 AM, Tom Duerbusch wrote:

> I have a process that may or may not be running in background.
>
> When I use any of the forms of "ps", it shows the process running, but I
> don't understand whether any of the fields displayed indicate that this is
> a BG process.  It all looks the same to me.
>
> If the process is running in the background, I need to follow the path of
> how it got there (bg).  If the process isn't running in background, I
> have a different problem altogether.
>
> Thanks
>
> Tom Duerbusch
> THD Consulting
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Showing a process running in BG

2011-08-01 Thread Tom Duerbusch
I have a process that may or may not be running in background.

When I use any of the forms of "ps", it shows the process running, but I don't 
understand whether any of the fields displayed indicate that this is a BG 
process.  It all looks the same to me.

If the process is running in the background, I need to follow the path of how 
it got there (bg).  If the process isn't running in background, I have a 
different problem altogether.

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: tar with tape drives

2011-07-29 Thread Tom Duerbusch
And another thing.

It isn't a problem with REXEC or REXECD.

Instead of executing each command via REXEC to Linux, I created a script which 
had all the commands in it.
So, REXEC only executed the script.
But what happened was this:

The tar was being executed.
Long before the tar completed, bash started executing the remainder of the 
commands in the script.  However tar was still running (and keeping the tape 
drive in use).

This is similar to tar being executed asynchronously.

However, when I run the script interactively, from Putty (actually Kitty), the 
command runs like it is supposed to.  Only when the tar completes does the next 
command start to execute.

So, it seems, at this point, that tar knows when it is being executed remotely 
and returns control early (and continues to execute).

However, executing the script using VM's REXEC, didn't produce the asynchronous 
behavior.

So, REXEC from VSE using the CSI 1.5E stack seems to be telling tar something 
that REXEC from VM (z/VM 5.2) isn't.

Time for a beer.

Tom Duerbusch
THD Consulting

>>> Carsten Otte  7/28/2011 8:41 AM >>>
Hi Tom,

if I recall correctly, we reserve the tape on open, and free it on closing
the file descriptor. You should be able
to find out which process is using it via "fuser /dev/ntibmX".

with kind regards
Carsten Otte
IBM Linux Technology Center / Boeblingen lab
--
omnis enim res, quae dando non deficit, dum habetur et non datur, nondum
habetur, quomodo habenda est

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: tar with tape drives

2011-07-29 Thread Tom Duerbusch
I found out what was happening.

Summary:
We had a problem where tar seemed to be keeping a tape drive for somewhere 
between 5 and 20 minutes after it completed.  But this only happened when using 
REXEC from VSE.  REXEC from VM or an interactive tar from a telnet session, did 
not show this behavior.

While having a vmstat 10 1000 running on the machine that the tar was running 
on, I discovered the actual problem.  The rexecd server in Linux (SLES 11 SP 1) 
was returning from the tar command before the command had completed.

As for why the tape drive showed it was still in use: the tar was still 
running.
REXECD came back to VSE (a batch job), which then tried issuing additional 
commands; these failed because the tape drive was still in use.

Additionally, I was producing a listing of files that were being backed up.  
When I redirected the list of files to a disk file, the tar didn't produce any 
output once it started.  Something (REXECD, perhaps) decided that the command 
had finished without any output and returned to the client.

When I did the same tar command and had the list of files being backed up sent 
back to VSE, i.e. a very active connection, REXECD didn't return to the client 
until the tar was finished.

Weird type of thing.

I'm not really up on pipes and Linux, but in the VM world, I can pipe output to 
the console and to a file without blocking the stage.  Is there something 
similar in the Linux world so I can keep the listing and keep the connection 
active?
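The usual Linux analogue of a CMS Pipelines fanout stage is tee(1), which copies its input to one or more files while still passing it to stdout. A minimal sketch; the tar invocation in the comment reuses the device and paths from earlier in this thread and is untested here:

```shell
# tee duplicates a stream: output goes to the terminal/connection AND
# to a disk file. For the backup in this thread it would look like:
#   tar -b 64 -vv -pcf /dev/ntibm0 /home 2>&1 | tee /home/tarlisting/bkup.lst
# Self-contained demonstration with ordinary output:
printf 'one\ntwo\n' | tee /tmp/tee_demo.lst   # lines shown AND saved
wc -l /tmp/tee_demo.lst                        # the file kept a copy
```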

Thanks

Tom Duerbusch
THD Consulting

>>> Carsten Otte  7/28/2011 8:41 AM >>>
Hi Tom,

if I recall correctly, we reserve the tape on open, and free it on closing
the file descriptor. You should be able
to find out which process is using it via "fuser /dev/ntibmX".

with kind regards
Carsten Otte
IBM Linux Technology Center / Boeblingen lab
--
omnis enim res, quae dando non deficit, dum habetur et non datur, nondum
habetur, quomodo habenda est

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: tar with tape drives

2011-07-27 Thread Tom Duerbusch
Upon further testing

When I issue the tar command from telnet, the tape drive becomes UNUSED as soon 
as the tar command completes.

When I issue the tar command from CMS using REXEC, the tape drive becomes 
UNUSED as soon as the tar command completes.

However, when I issue the tar command from a VSE Batch job using REXEC, the 
drive remains IN-USE for a long period of time after the VSE job terminates.  
That is really unexpected.

Now for some further research

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


tar with tape drives

2011-07-27 Thread Tom Duerbusch
I'm doing a tar to a 3590 tape drive.  
Sample command is:

tar -b 64 -vv -pcf /dev/ntibm0 /home > /home/tarlisting/bkup.lst

When it finishes, tar doesn't free up the tape drive.

lstape shows that the drive is still in use:

linux74:/home # lstape
FICON/ESCON tapes (found 1):
TapeNo  BusID  CuType/Model DevType/Model   BlkSize State   Op  MedState
0   0.0.05b0   3590/50  3590/11 autoIN_USE  WRI LOADED

SCSI tape devices (found 0):
Generic DeviceTarget   Vendor   ModelType State

Which then causes other commands to fail, such as:

linux74:/home # mtst -f /dev/ntibm0 rewind
/dev/ntibm0: Device or resource busy

Eventually, longer than 5 minutes but less then 30 minutes, the IN_USE state 
drops back to UNUSED and I can go back to issuing tape commands.

I've been reading the tar INFO manual looking to see if tar will hold a drive 
for a period of time, perhaps to be able to append another tar image or 
something like that.

I haven't found anything yet.

So is there something else I need to be looking at?
I can solve this by looping a mtst within a script until the return code equals 
0, but I would much rather have my tape drive back immediately.
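The looping workaround described above can be sketched as a small retry function; the device name and the mtst invocation are carried over from this post, and the sketch is untested against a real tape drive:

```shell
# Retry the rewind until the drive drops out of its IN_USE state.
# /dev/ntibm0 and the mtst syntax are taken from the post above.
wait_for_drive() {
    until mtst -f "$1" rewind; do
        sleep 10    # still "Device or resource busy"; wait and retry
    done
}
# Usage: wait_for_drive /dev/ntibm0
```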

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


VM to zLinux Remote Execution

2011-07-22 Thread Tom Duerbusch
I'm trying to remotely execute a command with CMS as the client and SLES 11 SP 
1 as the server.

All documentation I've found so far, shows how to do it from Linux to VM.

Apparently the problem is, TCPIP for VM only has the unsecured REXEC client and 
SLES 11 only has a secured sshd.

I've searched the VM download page for a ssh client.
I've done some Linux searches for how to dumb down sshd (i.e. to allow 
unsecured transfers).

Of course, there might be program products available, but unless they would be 
zero cost products, it's not going to happen in the short term.

Thanks for any help

Tom Duerbusch
THD Consulting
(Still on z/VM 5.2)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Linux backups and restores to/from tape

2011-07-08 Thread Tom Duerbusch
The poor mans way is to use "tar" and output to tape.
Pipe the listing to a file and keep the file around so you know what is on that 
tar file.

You can restore any member(s) and, if needed, put the restored members in a 
different directory.

In any case, you need a logical backup of your files.  
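A runnable sketch of that poor-man's scheme, using a disk file where the thread uses a tape device like /dev/ntibm0 (all paths here are illustrative):

```shell
# Build a small tree, back it up with the listing kept on disk, then
# selectively restore one member into a different directory (-C).
mkdir -p /tmp/demo/home
echo "payroll data" > /tmp/demo/home/file.txt
tar -C /tmp/demo -cvf /tmp/demo/bkup.tar home > /tmp/demo/bkup.lst
mkdir -p /tmp/demo/restore
tar -C /tmp/demo/restore -xvf /tmp/demo/bkup.tar home/file.txt
cat /tmp/demo/bkup.lst        # the saved listing: what's on the "tape"
```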

Tom Duerbusch
THD Consulting

>>> "Frederick, Michael"  7/8/2011 10:23 AM >>>
Hi all,

A question came up about getting an older version of a file on a Linux disk, 
which I was able to do by restoring the DASD that held the file in question to 
a temporary disk and then they could do whatever they liked with the file, easy 
enough.  It got me to thinking about, what would happen if this were to take 
place on an LVM?  Having a dasd-level backup is likely to be of limited use in 
this case, because you'd more than likely have to restore the entire LVM to a 
separate set of disks just to get at that one file.

So does anyone know of a solution (free being better) that would do a 
file-level backup for zLinux to a tape?  Or has anyone dealt with this problem 
before and had some other way around it?

Thanks in advance,

Mike Frederick

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: CMS commands from Linux

2011-06-22 Thread Tom Duerbusch
If you are on the same mainframe, i.e. it doesn't have to go out on the
network, what was wrong with using REXEC to remote execute CMS
commands?

VM supports it.
I don't know if Linux does (or has a RPM for it).

Tom Duerbusch
THD Consulting

>>> Scott Rohling  6/21/2011 8:44 AM >>>
I built a provisioning server this way once..   on the Linux side -
just a
'wget' and pass a bunch of parms:

wget
http://x.x.x.x/cgi-bin/makelnx?lnx001+WEBAPP+1G+4G+VSWITCH2+..
..

Running Webshare on z/VM and the 'makelnx' CGI was a REXX EXEC that did
all
the dirmaint/racf/cloning stuff... and wrapped its output in simple
html
tags (so the Linux side could examine for success/failure and keep
logs
using the wget output).

Only trouble here is security...  how to ensure the function can only
be
used by 'authorized' requesters.  Webshare doesn't normally qualify as
a
secure interface.

If what the EXEC does is benign - then maybe it's no big deal...  it
can be
a great way to easily make z/VM application output available to Linux.

Scott Rohling

On Tue, Jun 21, 2011 at 7:24 AM, Agblad Tore 
wrote:

> setup a webserver in z/VM, and allow the Linux to run a rexx inside
the
> webserver via http :)
>
> Cordialement / Vriendelijke Groeten / Best Regards / Med Vänliga
Hälsningar
>  Tore Agblad
> 
> Tore Agblad
> System programmer, Volvo IT certified IT Architect
> Volvo Information Technology
> Infrastructure Mainframe Design & Development, Linux servers
> Dept 4352  DA1S
> SE-405 08, Gothenburg  Sweden
> Telephone: +46-31-3233569
> E-mail: tore.agb...@volvo.com 
> http://www.volvo.com/volvoit/global/en-gb/ 
> 
> From: Linux on 390 Port [LINUX-390@VM.MARIST.EDU] On Behalf Of van
> Sleeuwen, Berry [berry.vansleeu...@atosorigin.com] 
> Sent: Friday, June 10, 2011 16:56
> To: LINUX-390@VM.MARIST.EDU 
> Subject: Re: CMS commands from Linux
>
> Hi Fabio,
>
> Once you are running linux you can't execute CMS commands. This is
because
> you're not running CMS anymore. So if you need to run a rexx you need
to
> rewrite it to a bash script. And obviously you can't have CMS
commands in
> the bash script.
>
> Regards, Berry.
>
> > -Original Message-
> > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf
Of
> > Fábio Paim
> > Sent: vrijdag 10 juni 2011 16:41
> > To: LINUX-390@VM.MARIST.EDU 
> > Subject: CMS commands from Linux
> >
> > Hi,
> >
> > I have a script rexx in zVM and I need execute it from Linux, How
is this
> > possible? How I can send commands CMS from linux (or send Linux
> > commands from z/VM) , I know the command "vmcp", but it only for
> > command CP.
> >
> > Thanks
> >
> >
> > Fábio Paim
> > Analista de sistemas
> >
> >
--
> > For LINUX-390 subscribe / signoff / archive access instructions,
send
> email to
> > lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390 
> >
--
> > For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
>
>
--
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>
--
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
>
>
--
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>
--
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: Set Share Relative

2011-06-15 Thread Tom Duerbusch
Am I the first?

That is what we were taught with 390 systems.
You also didn't want your communications region to be paged out (set reserved 
for those).
You always gave communications very high priority.  You didn't want your 3270 
traffic waiting.  If, due to resources, you needed to wait, you waited in CICS 
or DB2, both of which used variable amount of resources for any transaction.  
But 3270 traffic, same old, same old, day to day.  Get it processed as fast as 
you can.

Obviously, FTP threw a monkey wrench in if the stack that serviced FTP was high 
priority.
Direct printing wasn't a problem with a high priority stack, but now that all 
printing is buffered, you don't want printing to be on a high priority stack 
either.

The paper and your point seems to only be for Linux systems.  And they are 
unique things compared to historic 390 processing.  

It would be interesting to see the same experiment done with multiple VSE 
systems.  

Anyway, I think the point is that Linux isn't a traditional 390 operating 
system.  And you can't take the things you know and apply them to Linux with a 
near 100% success rate.

Using my home-grown performance tool, over the span of months, I see my 16 VSE 
systems taking resources just like we were taught (200 is twice as much as 
100).  But my IFL side (not that we are normally busy enough for relative 
shares to take effect) just didn't seem to match up with the relative shares 
defined.  The paper did explain things.

OK, so it didn't make sense, but that is why a lot of us think of "relative 
shares" the way we do.

Tom Duerbusch
THD Consulting


>>> Rob van der Heij  6/14/2011 3:56 PM >>>
On Tue, Jun 14, 2011 at 7:57 PM, Scott Rohling  wrote:

As for "wrong" - I thought that was beaten to death already. I buy an
adult beverage for the first* who can explain why it makes sense to
have the TCPIP stacks with a relative share 30 times higher than their
Linux production guests.

Rob
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/ 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: LVM on CKD?

2011-06-08 Thread Tom Duerbusch
Lots of us (most of us, even) have LVM with CKD devices.  It's never been a problem.

First question, do you have CKD devices available?
i.e. in your VM directory, are the disks defined to Linux defined like:

MDISK 0150 3390 0001 10016 L1305F MR 
MINIOPT NOMDC
MDISK 0160 3390 0001 10016 L13063 MR 
MINIOPT NOMDC
MDISK 0161 3390 0001 10016 L13195 MR 
MINIOPT NOMDC
MDISK 0162 3390 0001 03338 L1361C MR 
MINIOPT NOMDC

Note the 3390 device type.

My first guess is that you have native devices defined in the directory and 
Linux is trying to correct your configuration to match reality.

Otherwise, need more info...

Tom Duerbusch
THD Consulting

>>> Craig Collins  6/8/2011 3:00 PM >>>
Is anyone using an LVM configuration with CKD or EDEV devices?  We're trying
to install SLES11 SP1 on EDEVs and wanted to setup a data LUN using LVM but
YaSt keeps changing the type to native linux when we set it to LVM.  Not
sure if we are doing something wrong or LVM is not supported on these types
of devices.

Craig Collins
State of WI, DOA, DET

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: z/Linux Oracle question

2011-05-25 Thread Tom Duerbusch
It all depends on what they are trying to do, or if they are blindly 
following recommendations from the PC world.  Or, for that matter, they may 
know they have 30 GB of disk space on a current server (without telling you how 
much is used).  

On my Oracle test systems, I have:

1.  3390-9 for Linux and Oracle software.  Used 2.4 GB
2.  (2) 3390-9 in a LVM for Oracle data.Used 13 GB
3.  (2) VDISK for swap.  300 MB and 600 MB.  Max used about 15%.

LVM is used as it is very easy to add volumes and let Oracle expand the 
tablespaces.

The real question is how much data are they planning on putting on the test 
database?


Tom Duerbusch
THD Consulting

>>> Henry E Schaffer  5/25/2011 1:37 PM >>>
I'm just wondering about quantities of disk space that concern people.
Is 30GB noticeable - or in the noise?

--henry schaffer

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: Samba Authorization another question

2011-04-27 Thread Tom Duerbusch
It's not a matter of like or not, I'd just rather not.

Yep, I've looked at it.
It does what you say, but

Unlike Novell/Windows, which warns that your password is going to expire, the 
Linux/Samba side doesn't say anything.
When the Novell/Windows password is changed, the next time the user tries to 
use Samba (the first time they click on the Samba drive letter for that Windows 
session), they are challenged with a logon prompt, which they are not used to 
seeing.  This doesn't give them a clue that their password needs to be changed.

So that means lots of calls to the help desk.  I think we enforce password 
changes every 90 days.  And then they forget their password, which means the 
help desk needs to sign on via SWAT with an admin user to change their password.

Only talking about 100 or so users.  

Right now, I'm tempted to go with a fixed, never-expiring password that they 
have to give every new Windows session (normally after every boot), except that 
would go against our security policy.  Any ramifications of that come back to 
me.

The open systems person that set up all the PC security left for greener 
pastures.  There is one or two people trying to grasp how the current stuff 
works.  They are resistant to putting something new in the mix.  Eventually, 
they will get over it, but right now, it is holding up this project.

Tom Duerbusch
THD Consulting

>>> "Dean, David (I/S)"  4/27/2011 10:55 AM >>>
You won't like this, but,... in swat there is a password icon that we have our 
users access to change passwords whenever their Windows pw changes.  It's easy 
enough for end users.  I "believe" you can even configure this so the end user 
sees only the password icon, but not sure.

-Original Message-----
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Tom 
Duerbusch
Sent: Wednesday, April 27, 2011 11:13 AM
To: LINUX-390@VM.MARIST.EDU 
Subject: Samba Authorization another question

Of all my samba servers, this is the first attempt at being something that end 
users would directly interface with, just like they do with our Novell file 
servers.

I've tried to force myself to use SWAT, but until I get everything working from 
the command line, SWAT is just going to have to wait.

I'm not a Windows or PC security type, in any way, shape, or form, and that, I 
think, is my problem

What I would really like is this: if a Windows user tries to access a Samba 
share and the username and the share name match, you are good to go.  If the 
Windows user doesn't match the share, reject.

Right now, I can do this, if I maintain the SMBPASSWD file.
If I put their current windows password in there, no problem.

However, 
1.  I won't know the end users current password.
2.  We force password changes and I won't know their new passwords either.

If the passwords don't match, a window is displayed asking for their samba 
password.  When entered, everything is good to go.  I would like to get away 
from the users having to enter in this password.

If I disable the password checking, the end users can mount any other users 
directory.  That's not good either.

I have "passdb backend = tdbsam" specified.

I don't really need any fancy authorization, just this: if the user is the same 
as the share, you're authorized.
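For what it's worth, the stock Samba answer to "share name must match user name" is the special [homes] section, where %S expands to the requested service (share) name. A hedged smb.conf sketch (parameter choices are illustrative, not tested against this setup):

```ini
[homes]
   comment = Home directory of %S
   valid users = %S
   browseable = no
   read only = no
```

With valid users = %S, only the user whose name matches the requested share can connect to it; the password problem remains, but the share/user matching comes for free.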

The manuals on migrating windows servers to Samba seem to be really overkill 
for what we need, but that just may be what needs to be done.

SLES11 SP1 with Samba 3.5

Any simple solutions to this problem?

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/ 
-
Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm 


Samba Authorization another question

2011-04-27 Thread Tom Duerbusch
Of all my samba servers, this is the first attempt at being something that end 
users would directly interface with, just like they do with our Novell file 
servers.

I've tried to force myself to use SWAT, but until I get everything working from 
the command line, SWAT is just going to have to wait.

I'm not a Windows or PC security type, in any way, shape, or form, and that, I 
think, is my problem.

What I would really like is this: if a Windows user tries to access a Samba 
share and the username matches the share name, they are good to go.  If the 
Windows user doesn't match the share, reject.

Right now, I can do this, if I maintain the SMBPASSWD file.
If I put their current windows password in there, no problem.

However, 
1.  I won't know the end user's current password.
2.  We force password changes, and I won't know their new passwords either.

If the passwords don't match, a window is displayed asking for their samba 
password.  When entered, everything is good to go.  I would like to get away 
from the users having to enter in this password.

If I disable the password checking, the end users can mount any other users 
directory.  That's not good either.

I have "passdb backend = tdbsam" specified.

I don't really need any fancy authorization; if the user is the same as the 
share, you're authorized.

The manuals on migrating windows servers to Samba seem to be really overkill 
for what we need, but that just may be what needs to be done.

SLES11 SP1 with Samba 3.5

Any simple solutions to this problem?

Thanks

Tom Duerbusch
THD Consulting



Re: Samba authorization

2011-04-27 Thread Tom Duerbusch
So far, all the responses are about the SMB.CONF file.

Just a reminder that not only do you have to put all the users in the same group 
for this to work, but when you create the directory structure, I believe you 
also have to do a chmod 770 on that subdirectory for the group authority.

And you may have to do a chown to assign a common userid (but I'm not clear 
on that one).
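
Tom's points above (a common group plus group permissions on the directory) 
can be sketched as follows.  This is a hedged example: it uses the current 
user's primary group as a stand-in for the real group (e.g. zlinux), and 
"PL_Linux" is a hypothetical share path.

```shell
# Group-shared directory sketch: chmod 2770 grants rwx to owner and group
# and sets the setgid bit, so files created inside inherit the directory's
# group rather than the creator's primary group.
grp=$(id -gn)                          # stand-in for the real shared group
mkdir -p /tmp/share_demo/PL_Linux
chgrp "$grp" /tmp/share_demo/PL_Linux
chmod 2770 /tmp/share_demo/PL_Linux
touch /tmp/share_demo/PL_Linux/newfile.txt
stat -c '%a %G' /tmp/share_demo/PL_Linux
```

The setgid bit on the directory is what makes new files land in the shared 
group even when users have different primary groups.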

I have another samba authorization question, that I will bring up in another 
thread.

Tom Duerbusch
THD Consulting

>>> "Dean, David (I/S)"  4/27/2011 9:13 AM >>>
And take the space out of the directory name.

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Richard 
Gasiorowski
Sent: Wednesday, April 27, 2011 8:59 AM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: Samba authorization

Most likely the parent directory is 740

Richard (Gaz) Gasiorowski
Solution Architect
CSC
3170 Fairview Park Dr., Falls Church, VA 22042
845-889-8533|Work|845-392-7889 Cell|rgasi...@csc.com|www.csc.com 




This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery.
NOTE: Regardless of content, this e-mail shall not operate to bind CSC to
any order or other contract unless pursuant to explicit written agreement
or government initiative expressly permitting the use of e-mail for such
purpose.



From:
"van Sleeuwen, Berry" 
To:
LINUX-390@vm.marist.edu 
Date:
04/27/2011 08:55 AM
Subject:
Samba authorization



Hi Listers,

On SLES10 SP2 we are running a samba. We would like to have shares for
specific groups. For instance we have a group zlinux that would get access
into the "PL Linux" directory and they must be able to read and write into
this directory. Several resources on the net give us the configuration for
this but so far we were not able to get the results we expected.

The /etc/smb.conf contains:
[PL Linux]
# Identification
comment = PL Linux

# Management
path= /srv/smb/shares/MFPL/PL Linux
writable= yes
force group = zlinux
valid users = @zlinux

# Access set up
create mask  = 0770
directory mask   = 0770
force create mode= 0660
force directory mode = 0770

According to the config a new file should be created with group zlinux and
mask 770. But when we create a document we see:

-rwxr--r-- 1 nl12237 users 0 Apr 27 13:45 New (10) Text Document.txt

So, in the group users and mask 744.

What can we do to get the samba to create the file under group zlinux and
with 770 instead of 744?

Met vriendelijke groet/With kind regards,
Berry van Sleeuwen
Flight Forum 3000 5657 EW Eindhoven
* +31 (0)6 22564276




Re: multipl cpu's

2011-04-25 Thread Tom Duerbusch
The generic rules of thumb:

1.  Don't define more virtual CPUs than you have real CPUs available.
2.  Don't define more virtual CPUs than you need.

With WebSphere:
Two CPUs or more are best.  WebSphere has a process in which two tasks talk to 
each other.  If each task has a CPU, they don't have to steal the 
processor from one another.  I think this hint came out in the SLES8 or SLES9 
days.  That is the timeframe when I downloaded WebSphere and had an official 
"interest" in it...i.e. there was a project coming up.   Given that was a 
while back, things may have changed.

Tom Duerbusch
THD Consulting

>>> Mark Post  4/25/2011 12:09 PM >>>
>>> On 4/25/2011 at 09:34 AM, "Dean, David (I/S)"  wrote: 
> Can some of you weigh in on the merits (demerits) of defining multiple CPU's 
> to virtual Linux boxes?  We are heavy WebSphere and have gotten differing 
> opinions.

The typical advice is that if you are driving a single CPU over 80%, then a 
second one would likely be useful.  Otherwise, you just add to z/VM's workload 
to schedule that virtual CPU with no benefit to the guest.


Mark Post
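
Mark's 80% rule of thumb can be eyeballed from /proc/stat.  A rough sketch 
(assumes Linux and a one-second sample; iowait and the remaining counter 
fields are lumped in with idle, so this is approximate):

```shell
# Sample the aggregate CPU counters twice and compute a busy percentage.
# First line of /proc/stat: "cpu  user nice system idle iowait ..."
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
[ "$total" -gt 0 ] || total=1          # guard against a zero interval
pct=$(( 100 * busy / total ))
echo "cpu busy: ${pct}%"
```

If a single-CPU guest shows sustained values above 80%, that is the point at 
which a second virtual CPU typically starts to pay off.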



Re: CMM

2011-04-20 Thread Tom Duerbusch
Did you try "Developer Works"?

Tom Duerbusch
THD Consulting

>>> "Dean, David (I/S)"  4/20/2011 3:07 PM >>>
Project 1.  VDISK implemented  complete
Project 2.  CMM

Will someone recommend a BASIC how-to guide for CMM on zlinux.  I have googled 
and found many docs saying how great it is, but not how to do it.
I do not want the vmrm piece to make my decisions (yet).  I need to implement 
so I can experiment with manual changes.
KISS for now.
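
For the manual experiments David describes, the basic knobs (per IBM's "Linux 
on System z: Device Drivers, Features, and Commands" book) are the cmm kernel 
module and its /proc interface.  A hedged console sketch; the page counts are 
illustrative only, and the guest must be running under z/VM:

```
# Load the cooperative memory management module
modprobe cmm

# Ask Linux to give ~40 MB (10240 x 4 KB pages) back to z/VM
echo 10240 > /proc/sys/vm/cmm_pages

# Check how many pages are currently released
cat /proc/sys/vm/cmm_pages

# Return the pages to Linux
echo 0 > /proc/sys/vm/cmm_pages
```

This is the manual path; the VMRM-driven path layers on top of the same 
mechanism via SMSG, so starting here keeps it KISS.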

David M. Dean
Information Systems
BlueCross BlueShield Tennnessee



POC Tape drives for Linux

2011-04-13 Thread Tom Duerbusch
I've been in a management meeting discussing tape backup for zLinux  (about 
time).

I agreed to the following and now time to see if it is doable.

As a Proof of Concept:

1.  Take one of our IBM 3590 E11 drives, detach it from our current Controller.
2.  Attach the drive to a fibre channel (was FICON but reconfigured as FCP).

Access the tape drive for backup/restore purposes from multiple Linux images 
(we will manually make sure that only one image is actively using the drive at 
any one time).

When we are satisfied that all of this works, put everything back the way it 
was.

This will be done over one or more weekends.

My impression is that there aren't different VSE and Linux tape drives.  VSE 
accesses the drive via a controller that has multiple 3590 drives attached to 
it, and a 3590 drive can be directly attached via FCP on the Linux side.

So, did I open my mouth and insert foot?

We are getting conflicting info on whether a SCSI switch is required, or just a 
nice thing to have.

Phase 2 will have us acquiring a couple IBM 3590 E11 drives (autoloaders, if 
usable in Linux, is a good thing).

Phase 3 may involve adding a switch for easier management of the tape drives.

I don't dare think we would ever get to a Phase 4 (some sort of backup 
product/tape manager), but if someone gets some grant money....

Thanks

Tom Duerbusch
THD Consulting



Re: Backing up SLES 11 SP1 and Oracle

2011-04-06 Thread Tom Duerbusch
BTW, the main manual involved with FTP and tape processing (in case you didn't 
know):

TCP/IP for VSE User Guide: Errata
located on their website.

Tom Duerbusch
THD Consulting

>>> David Stuart  4/6/2011 3:49 PM >>>
Thanks Tom, 

So, you defined the NFS Server on the same Linux image as the DB?  

And I was wondering how you used your z/VSE System to back it up...  

Thanks for the ideas. 


Dave 






Dave Stuart
Prin. Info. Systems Support Analyst
County of Ventura, CA
805-662-6731
david.stu...@ventura.org

>>> "Tom Duerbusch"  4/6/2011 1:37 PM >>>
Weekly, we take a backup for disaster recovery.

Take the image down.
Flashcopy the disks (or backup from z/OS or z/VSE, as you don't have z/VM)
Bring image up.

For the logical backups.
I defined an NFS server which the Linux image mounts.
Nightly, I take an Oracle Backup
  Using OEM:
  Maintenance tab
 Backup/Recovery
Schedule Backup  (I make it reoccurring at 3 AM).
I tell it to use a directory on the mounted NFS space.
Then, during the day, when we have an Operator present.
Then move a 3590 drive over to the IFL.
From another Linux image (but could be your Oracle image also)
They sign on using PUTTY, and do a "tar" to the tape drive.
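
The NFS piece of the flow above can be sketched as follows; the hostnames and 
paths here are made up for illustration:

```
# On the NFS server, export the backup space (/etc/exports):
/srv/orabackup   oraguest.example.com(rw,sync,no_subtree_check)
# then activate the export:
#   exportfs -r

# On the Oracle image, mount it and point the scheduled backups there:
#   mount -t nfs nfsserver.example.com:/srv/orabackup /mnt/orabackup
```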

The problem you are going to have with 3480 drives is that "tar" isn't too 
friendly with multiple volsers.  If you ever hit end of volume with "tar", it 
will cancel.  However, you can tell "tar" that a volume is xxx MB in size.  At 
the end of that size, it will unload the volume and call for another one.  I 
went thru the 3480 process once, back in 2003, and started using the 3590 
drives after that.
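
The "tell tar the volume size" trick looks like this with GNU tar (-M for 
multi-volume, -L for the per-volume size in KiB).  A sketch using plain files 
in place of tape volumes; on real drives the -f arguments would name the tape 
device instead:

```shell
# Create ~15 KiB of data, archive it across two 10 KiB "volumes",
# then restore it and verify the round trip.
mkdir -p /tmp/mvdemo/restore
cd /tmp/mvdemo
head -c 15000 /dev/urandom > payload.bin
tar -cM -L 10 -f vol1.tar -f vol2.tar payload.bin
cd restore
tar -xM -f ../vol1.tar -f ../vol2.tar
cmp ../payload.bin payload.bin && echo "restored OK"
```

When tar hits the -L limit it closes the current volume and moves to the next 
-f argument instead of cancelling, which is the behavior you want on 
fixed-capacity media like 3480 cartridges.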

You're not the only one with 3480 drives, we have them, along with 3420 drives 
also.

I have been successful in taking files on the NFS server and FTPing to a tape 
drive on VSE.  This used the CSI stack and allowed us to use DYNAM tapes.

If you can afford to take your Oracle image down, perhaps frequently, or you 
can afford lots of disk space, you can replace the NFS server with a LVM setup 
on your Oracle image, just for backups.  Having separate images is nice as it 
allows you to take the non-Oracle images down and make changes.  But if you 
only have a single LPAR for Linux and no VM, you can do everything in one 
server.

I tried attaching the manuals but the listserv rejected their size.
So the manuals are:

b10735 Oracle Backup and Recovery Basics
b10734 Oracle Backup and Recovery Advanced User's Guide

Tom Duerbusch
THD Consulting


>>> David Stuart  4/6/2011 1:12 PM >>>
Morning all, 

New linux admin here.  

I have an LPAR (yes, an LPAR, no z/VM) running SLES 11 SP1 and Oracle 10g r2.  

I have yet to back that up.  I know, shame on me!  Preferably to tape.  I have 
been looking and have found posts referencing mt_st and lintape.  So, I am 
looking for advice on the best way to accomplish the backup.   

My available tape drives are IBM 3480 (yes, you read that correctly!).  They 
are not directly attached to the Linux LPAR, but have to be 'switched' via the 
HMC.  Which isn't a big deal, I am very familiar with doing this.  

I am looking for advice on which software package(s) I need on SLES 11 SP 1 to 
be able to detect, and back up to, my 3480's.  Total data, right now, would be 
10 - 12 GB (the oracle DB has not yet been populated).  

Also, anything special I need to do to have the SLES 11 detect that the tape 
drives are now there?  And to 'detach' them when the backup is complete?  

Anyone know if Oracle can 'back up' to these drives?  Or do I need to have 
Oracle back up to disk first, and then back that data up to the 3480?  I'm 
having trouble finding this info in the Oracle manuals.  And if Oracle backs up 
to disk, such as an lvm, can I back that up directly, or is there something 
else I need to do, first.  

Inquiring minds want to know, now that I have time to 'play' with this system 
some more.  


TIA, 
Dave 





Dave Stuart
Prin. Info. Systems Support Analyst
County of Ventura, CA
805-662-6731
david.stu...@ventura.org 


Re: Backing up SLES 11 SP1 and Oracle

2011-04-06 Thread Tom Duerbusch
Can you define an NFS server on the same image?  Sure you can...
You can define a lot of things on the same image...but...

Linux memory management isn't as strong as VM, VSE, or MVS, so one
application can cause real problems for others.  But it is doable.

No, I didn't define a NFS server on the same image.  You can define a
second LVM on the same image.  In this case, the first LVM is for Oracle
(if you don't need top performance, LVM for Oracle tablespaces reduces
management quite a bit) and the second LVM for Oracle backups.  In my
case, I defined the LVM on a NFS server, and then mounted space on the
NFS on the Oracle image.

How did I use z/VSE to back it up?
Well...at first, I cheated.

I ran a small tape job on VSE to create the dataset (open, write,
close) for Dynam file VT.TAR.3420.ARCH

I then varied off the drive and varied it on the Linux LPAR.

Here are my notes on what I did then:

SUSE 11 Tar Tape Processing

*** must install mt_st RPM to use hardware compression on the IBM 3590
drives.
*** provides mtst support
vary on 5b1 (vary it off on the 390 side first)
attach 5b1 linux72

Logon as:  root

lstape
FICON/ESCON tapes (found 1):
TapeNo  BusID  CuType/Model DevType/Model   BlkSize State   Op 
MedState
N/A 0.0.05b1   3590/50  3590/11 N/A OFFLINE ---
N/A

SCSI tape devices (found 0):
Generic DeviceTarget   Vendor   ModelType
State
  
chccwdev -e 0.0.05b1
Setting device 0.0.05b1 online
Done

lstape
FICON/ESCON tapes (found 1):
TapeNo  BusID  CuType/Model DevType/Model   BlkSize State   Op 
MedState
0   0.0.05b1   3590/50  3590/11 autoUNUSED  ---
LOADED

SCSI tape devices (found 0):
Generic DeviceTarget   Vendor   ModelType
State

dir /sys/class/tape390
total 0
lrwxrwxrwx 1 root root 0 Jan 19 16:18 ntibm0 ->
../../devices/css0/0.0.0014/0.0.05b1/tape390/ntibm0
lrwxrwxrwx 1 root root 0 Jan 19 16:18 rtibm0 ->
../../devices/css0/0.0.0014/0.0.05b1/tape390/rtibm0

mtst -f /dev/ntibm0 rewind
mtst -f /dev/ntibm0 fsf 1
mtst -f /dev/ntibm0 compression 1
tar -b 64 -v -pcf /dev/ntibm0 *.3420  (produce tar file on tape)
  -b 64 (block at 64 512-byte blocks, i.e. 32 KB)
  -v show files as they are being processed
  -p preserve file permissions
  -c create a new archive
  -f  file to send tar to (/dev/ntibm0)
mtst -f /dev/ntibm0 eof 2
 

mtst -f /dev/ntibm0 rewind
mtst -f /dev/ntibm0 fsf 1
tar -b 64 -tvf /dev/ntibm0 (list tar contents from tape)
tar -b 64 -xf /dev/ntibm0 204344.3420 (retrieve single file to current
directory)
All 3420 tapes take about 14 GB.
With hardware compression, that is about 25% of the tape used.

(don’t need to gzip…hardware compression)


Yea, yea, yea...I know, I'm using root.  Bad...boo, hiss.

As far as doing it without cheating.

I got half way thru.
You know that with the CSI stack, you can ftp to/from a tape.  Good to
have it mounted first else the FTP will timeout.  I have sent a tar
file, as binary to the tape drive.  

Then I had to work on production stuff and haven't got back to it.

What still needs to be done:
I need to ftp get the tar from the tape to another filename.
Do a tar to list the contents of the tarfile (validate nothing got
changed during transmission).
Perhaps extract the files to another directory and compare the results
to the original directory.

Tom Duerbusch
THD Consulting

>>> David Stuart  4/6/2011 3:49 PM >>>
Thanks Tom, 

So, you defined the NFS Server on the same Linux image as the DB?  

And I was wondering how you used your z/VSE System to back it up...  

Thanks for the ideas. 


Dave 






Dave Stuart
Prin. Info. Systems Support Analyst
County of Ventura, CA
805-662-6731
david.stu...@ventura.org

>>> "Tom Duerbusch"  4/6/2011 1:37 PM >>>
Weekly, we take a backup for disaster recovery.

Take the image down.
Flashcopy the disks (or backup from z/OS or z/VSE, as you don't have
z/VM)
Bring image up.

For the logical backups.
I defined an NFS server which the Linux image mounts.
Nightly, I take an Oracle Backup
  Using OEM:
  Maintenance tab
 Backup/Recovery
Schedule Backup  (I make it reoccurring at 3 AM).
I tell it to use a directory on the mounted NFS space.
Then, during the day, when we have an Operator present.
Then move a 3590 drive over to the IFL.
From another Linux image (but could be your Oracle image also)
They sign on using PUTTY, and do a "tar" to the tape drive.

The problem you are going to have with 3480 drives, is "tar" isn't too
friendly with multiple volsers.  If you ever hit end of volume with
"tar", it will cancel.  However, you can tell "tar" that a volume is xxx
MB in size.  At the end of that size, it will unload the volume and call
for another one.  I went thru the 3480 process once, back in 2003 and
started using the 3590 drives after that .

You're not the only one

Re: Backing up SLES 11 SP1 and Oracle

2011-04-06 Thread Tom Duerbusch
Weekly, we take a backup for disaster recovery.

Take the image down.
Flashcopy the disks (or backup from z/OS or z/VSE, as you don't have z/VM)
Bring image up.

For the logical backups.
I defined an NFS server which the Linux image mounts.
Nightly, I take an Oracle Backup
  Using OEM:
  Maintenance tab
 Backup/Recovery
Schedule Backup  (I make it reoccurring at 3 AM).
I tell it to use a directory on the mounted NFS space.
Then, during the day, when we have an Operator present.
Then move a 3590 drive over to the IFL.
From another Linux image (but could be your Oracle image also)
They sign on using PUTTY, and do a "tar" to the tape drive.

The problem you are going to have with 3480 drives is that "tar" isn't too 
friendly with multiple volsers.  If you ever hit end of volume with "tar", it 
will cancel.  However, you can tell "tar" that a volume is xxx MB in size.  At 
the end of that size, it will unload the volume and call for another one.  I 
went thru the 3480 process once, back in 2003, and started using the 3590 
drives after that.

You're not the only one with 3480 drives, we have them, along with 3420 drives 
also.

I have been successful in taking files on the NFS server and FTPing to a tape 
drive on VSE.  This used the CSI stack and allowed us to use DYNAM tapes.

If you can afford to take your Oracle image down, perhaps frequently, or you 
can afford lots of disk space, you can replace the NFS server with a LVM setup 
on your Oracle image, just for backups.  Having separate images is nice as it 
allows you to take the non-Oracle images down and make changes.  But if you 
only have a single LPAR for Linux and no VM, you can do everything in one 
server.

I tried attaching the manuals but the listserv rejected their size.
So the manuals are:

b10735 Oracle Backup and Recovery Basics
b10734 Oracle Backup and Recovery Advanced User's Guide

Tom Duerbusch
THD Consulting


>>> David Stuart  4/6/2011 1:12 PM >>>
Morning all, 

New linux admin here.  

I have an LPAR (yes, an LPAR, no z/VM) running SLES 11 SP1 and Oracle 10g r2.  

I have yet to back that up.  I know, shame on me!  Preferably to tape.  I have 
been looking and have found posts referencing mt_st and lintape.  So, I am 
looking for advice on the best way to accomplish the backup.   

My available tape drives are IBM 3480 (yes, you read that correctly!).  They 
are not directly attached to the Linux LPAR, but have to be 'switched' via the 
HMC.  Which isn't a big deal, I am very familiar with doing this.  

I am looking for advice on which software package(s) I need on SLES 11 SP 1 to 
be able to detect, and back up to, my 3480's.  Total data, right now, would be 
10 - 12 GB (the oracle DB has not yet been populated).  

Also, anything special I need to do to have the SLES 11 detect that the tape 
drives are now there?  And to 'detach' them when the backup is complete?  

Anyone know if Oracle can 'back up' to these drives?  Or do I need to have 
Oracle back up to disk first, and then back that data up to the 3480?  I'm 
having trouble finding this info in the Oracle manuals.  And if Oracle backs up 
to disk, such as an lvm, can I back that up directly, or is there something 
else I need to do, first.  

Inquiring minds want to know, now that I have time to 'play' with this system 
some more.  


TIA, 
Dave 





Dave Stuart
Prin. Info. Systems Support Analyst
County of Ventura, CA
805-662-6731
david.stu...@ventura.org 



Re: Security question about having zLinux web servers out in DMZ.

2011-03-30 Thread Tom Duerbusch
Not a valid restriction.

Open Systems and Network types only run a single stack in a box (vast majority 
of the time).
Here, they still can't grasp that I have some 70 stacks running on a single 
box.  (but there are only 2 ethernet cables...so you can't have 70 stacks)

From their viewpoint, treat each stack as a standalone box when dealing with 
them.  Just like standalone boxes, if you have one stack routing to another 
stack, it is the same as one box being routed to another box.

If some stacks need to be passed thru the DMZ to the outside world, just 
identify the IP addresses involved.

Tom Duerbusch
THD Consulting

>>> Ron Foster at Baldor-IS  3/30/2011 10:56 AM >>>
Hello listers,

Our company has recently been acquired by another company.  We are at
the point of having to get our two networks to talk to each other.
Before we can do that, we have to comply with certain security rules.
One of them being that the mainframe cannot be exposed to the internet.

We have a couple of zLinux web servers that are running in a couple of
z/VM guests that are connected to our DMZ.  The new folks say this is a
show stopper as far as hooking up the two networks.

The questions I have are:

Is this a common restriction?  That is, you have to have your DMZ based
web servers running on some other platform so that your mainframe is not
exposed to the internet.

Or, the new folks just don't understand the built-in security provided
by the z10 and z/VM 6.1.

I know that we will end up conforming to the rules that the new folks
have, but I was just wondering if the new folks really know what they
are talking about.

Thanks,
Ron



Re: Moving Oracle off zLinux boxes -- comments from the field?

2011-03-23 Thread Tom Duerbusch
I have Oracle 10g R 2 running on zLinux.  I really haven't had many problems 
with it.

But I always wondered
Is there an application related reason that they need the latest, greatest 
release of Oracle, some new bell or whistle or were they just reading the 
latest trade rag?

My gut says that the Open Systems types always needs the latest software in 
order to support the latest hardware, which changes frequently.  Just how often 
are there mainframe hardware changes made that an application, or middleware is 
aware of?  Hipersockets?  Dataspaces?  64 bit addressing?  Anything else?

Here, I can't get anyone interested in new releases.  They are very happy as 
is.  

About 5 years ago or so, I think it was Oracle that stated that not every 
release will be available for the mainframe.  However, they will support the 
mainframe releases longer than they normally do to keep in line with the 
mainframe methods.  

Not that I was watching, but I didn't hear about loads of mainframes replacing 
DB2/UDB 9.5 with DB2/UDB 9.7. 

At some point, I expect to have Oracle 11 up and running.  It would be 
concurrent with 10 g, most likely for years.  Easy with zLinux.  Sometimes that 
can be a bad thing also .

Tom Duerbusch
THD Consulting 
(of course I still have some SLES7 images running)

>>> "Simms, Michael"  3/23/2011 11:09 AM >>>
I suspect, I'm unfortunately not a fly on the wall, that Oracle simply wants to 
sell their newly acquired hardware 'division', Sun. But that is only 
speculation. This almost looks like a strong shove away from the hardware they 
can't handle, apparently. 

I can just see Jack Nicholson screaming to Larry, 'You can't handle the 
mainframe!!' (except for zO$, I guess it's $upported.)

Unfortunately, the application we bought must use Oracle, at least that's my 
understanding. So that kind of messes with using DB2 as a replacement. The 
application we are gearing up will be very heavily used, 60+ sites, all with 
very frequent updates.
So, we need the power of the mainframe IFLs and i/o system. But if Oracle 
doesn't get their support/certification in gear we may have to consider 
something less than what we were expecting and it would be similar to what is 
happening in James shop. We'll cross that road when we get to it.

-Original Message-
From: Linux on 390 Port on behalf of Christopher Cox
Sent: Wed 3/23/2011 11:11 AM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: Moving Oracle off zLinux boxes -- comments from the field?
 

Oracle is NOT supporting them well on zLinux.   So... there's both the
financial and technical reason.

Why would anyone stay with a platform that is not well supported?  Oracle
couldn't handle it, so they are moving.

Now... certainly the "false mindset" issue surrounding mainframes is an
issue... but I'd probably move Oracle too if they weren't willing to
address support issues in a timely manner.

Maybe it's time to change your database supplier??  You know, if if you
have to move it, I wonder if moving to something a bit more heavy duty,
like a IBM Power7 box was even considered...  If DB2 isn't an option, maybe
Oracle on Power7 would be a better fit (saying without knowledge of
Oracle's commitment of support there as well).




From:   Barton Robinson 
To: LINUX-390@vm.marist.edu 
Date:   03/18/2011 04:04 PM
Subject:Re: Moving Oracle off zLinux boxes -- comments from the field?
Sent by:Linux on 390 Port 



wow, your DBAs have the authority to spend that kind of money and make
that kind of change without management signature? So no financial
analysis, no technical reason, sounds religious.

CHAPLIN, JAMES (CTR) wrote:
> We just had a surprise announcement by one of the Oracle DBAs during a
> zLinux & Application group planning meeting at our worksite. The DBA
> advised us that they (Database group) were going to move/migrate all the
> Oracle databases that we have on zLinux boxes off to an intel/unix
> platform. He did not offer details of the hardware, or when or how, just
> that they were going to do it. This is a bite of a surprise as we have
> just moved our MQ off the Mainframe (zOS) to the zLinux platform (guests
> on zVM) and that move is doing well. This may be due in part to the
> false mindset that we have in our upper management at our site that
> Mainframes are old technology. Also we have had slow response from
> Oracle on resolving issues we have identify (certifying Oracle 11 on
> z390x architecture, getting Oracle 10 support for RHEL 5.0 on z390x
> architecture). Has anyone else on this list had any related "war
> stories" similar to what we may be about to experience as this move
> takes place?
>
>
>
> James Chaplin
>
> Systems Programmer, MVS, zVM & zLinux
>
>
-

Re: Where is kernel loaded in memory?

2011-03-18 Thread Tom Duerbusch
Is it possible to trigger a bogus transaction, say, an hour before your first 
transaction is usually executed?

We had a similar problem with CICS back in the '90s.  The transaction did a 
call (a no-no back in CICS 1.7) to an external routine which did a lot of 
paging and a lot of I/O to non-cached controllers.  The first time thru, the 
transaction would abend (AICA...runaway task timer).  After storage was loaded, 
it would run in a few seconds.

The cheapest/easiest solution was to trigger the transaction a couple times 
before users got on.

Is that possible with your application?
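The priming approach above can be sketched as a small script; the script name, ping URL, and cron schedule below are all hypothetical placeholders, not anything from this thread:

```shell
#!/bin/sh
# Hypothetical warm-up script. A cron entry could fire it an hour before
# the first real transaction, e.g.:
#   0 5 * * 1-5 /usr/local/bin/warmup.sh
warmup() {
    # Drive a throwaway transaction a few times so its working set is
    # paged back in before users arrive. The URL is an assumed ping
    # endpoint, purely illustrative.
    for i in 1 2 3; do
        curl -s -m 2 -o /dev/null "http://appserver.example.com/app/ping" || true
    done
    echo "warmup complete"
}
warmup
```

The `|| true` keeps the script from failing if the app isn't up yet; the point is only to fault pages in, not to check a result.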

Tom Duerbusch
THD Consulting

>>> Mark Wheeler  3/17/2011 11:55 AM >>>
Bob,
 
No, the question was asked previously, but I chose to ignore it. For one, 
because it would take way too long to explain adequately, and also because the 
thread would quickly expand exponentially.
 
Quick answer: we have an app that sees elongated response times on the first 
transaction of the day. We have traced it back to several thousand synchronous 
pageins (because stuff got paged out overnight, and our page volumes aren't 
infinitely fast). All subsequent transactions run sub-second. I have no idea 
which pages are involved, but it was suggested that since they were synchronous 
pageins, it may involve the kernel. A POSSIBLE solution that crossed my mind 
would be to lock kernel pages in storage and see if that solved the problem. 
All I needed to conduct that little experiment was to know where the kernel 
lived in storage.
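For anyone wanting to try the same experiment, the /proc/iomem check the thread points at is a one-liner; the entry labels below are typical, and the addresses vary by system:

```shell
# /proc/iomem shows where the kernel image sits in physical memory.
# On newer kernels the addresses are zeroed for non-root users, so run
# this as root to see real values:
grep -i kernel /proc/iomem || echo "no kernel entries visible (try as root)"
# Typical entries look like (addresses vary):
#   Kernel code
#   Kernel data
#   Kernel bss
```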
 
Again, I know most everyone who reads this will have the same obvious questions 
and suggestions, and I appreciate that. Alas, right now there isn't enough time 
or bandwidth to explain the situation in sufficient detail so as  to prevent 
this thread from blowing up in a hundred different directions. The suggestion 
to look at /proc/iomem answered my immediate question. As necessary, I'll toss 
further questions out to the list.
 
Thanks all!
 
Mark Wheeler
UnitedHealth Group 

 
> Date: Thu, 17 Mar 2011 11:30:27 -0600
> From: nix.rob...@mayo.edu 
> Subject: Re: Where is kernel loaded in memory?
> To: LINUX-390@VM.MARIST.EDU 
> 
> An obvious question that no one has bothered to ask as yet:
> 
> What is the problem you're trying to solve with this? Or, why do you want to
> know where the kernel loads, and what will you gain from it?
> 
> Too many times, users or other people (programmers, other sysadmins, ...)
> come to us with a solution in need of a piece or part, and we never hear the
> larger question or problem, to which there may be a much simpler answer.
> 
> The query may be a simple one, the need may be educational. Or it may be a
> cog in a larger, complex solution to a problem that some, or many of us have
> already solved in some other way which does not involve walking through the
> kernel's memory.
> 
> It's just a thought, but Mark -- What's your original problem or task?
> 
> --
> Robert P. Nix Mayo Foundation .~.
> RO-OC-1-18 200 First Street SW /V\
> 507-284-0844 Rochester, MN 55905 /( )\
> - ^^-^^
> "In theory, theory and practice are the same, but
> in practice, theory and practice are different."
> 
> 
> 
> On 3/17/11 5:59 AM, "Richard Troth"  wrote:
> 
> > Originally, the kernel loaded at "real" addr 64k. That is the default for
> > Linux on most platforms. But you could change that, and for 1M alignment,
> > some do so on S/390.
> >
> > Going with mapped memory, it sounds like absolute zero is the virtual pref
> > for kernel space. Cool. Easily handled in all virt mem platforms.
> >
> > -- R; <><
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/ 
  


Re: Db2 Connect Server

2011-03-01 Thread Tom Duerbusch
In my case (VSE), the mainframe documentation is in the PD for DB2/VSE.

Tom Duerbusch
THD Consulting
Sent via BlackBerry by AT&T

-Original Message-
From: "Dazzo, Matt" 
Sender: Linux on 390 Port 
Date: Tue, 1 Mar 2011 10:41:34 
To: 
Reply-To: Linux on 390 Port 
Subject: Re: Db2 Connect Server

Tom, I have DB2 Connect Server installed, and I reviewed the post-installation 
steps. Where are the next steps documented for configuring connections to, say, 
DB2 instances on the MF? I checked the DB2 Info Center but can't find where 
that stuff is. Kind of like: where do I go next? Tks Matt

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Tom 
Duerbusch
Sent: Monday, February 28, 2011 1:51 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Db2 Connect Server

When I installed DB2 Connect, I used the VNC method of installing.

I did a "find / -name db2samp*.*" from PuTTY on my DB2 Connect server.  The 
file wasn't found.  I don't know what the install process used.
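As an aside, the quoting matters for that search: the unquoted glob gets expanded by the shell before find ever runs, and the `.*` part can't match "db2sampl" anyway since the name has no dot. A sketch of a more reliable search (scoped to /usr here for speed; the sqllib path is the one the install book names):

```shell
# Quote the pattern so find, not the shell, interprets it, and drop the
# ".*" which would require a dot in the filename:
find /usr -name 'db2sampl*' 2>/dev/null
# db2sampl normally ships in the instance owner's sqllib/bin, so checking
# that directory directly is quicker:
ls "$HOME/sqllib/bin/db2sampl" 2>/dev/null || echo "db2sampl not present"
```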

BTW, if you only have DB2 Connect and not DB2/UDB with it, would you still have 
a sample DB to install?  I'm thinking that the sample DB is for DB2/UDB type 
testing and if you are not licensed for DB2/UDB, could you still create it and 
what could it be used for?

Tom Duerbusch
THD Consulting


>>> "Dazzo, Matt"  2/28/2011 9:59 AM >>>
I'm a zVM/Linux newbie looking for some help with DB2 Connect Server on Linux. I 
have it installed and I am working on the post-installation tasks, trying to 
create the sample database, but it seems the command 'db2sampl' is nowhere to be 
found. According to the not-so-good DB2 install book it should be in 
$HOME/sqllib/bin, but it's not there. I searched the entire DB2 install path and 
there is no db2sampl. Has anyone else run into this?

The db2val tool ran successfully and I can start the db manager.



Matt Dazzo
Sr. MVS Systems Admin
Publishers Clearing House
516-944-4816
mda...@pch.com 




Re: Db2 Connect Server

2011-02-28 Thread Tom Duerbusch
When I installed DB2 Connect, I used the VNC method of installing.

I did a "find / -name db2samp*.*" from PuTTY on my DB2 Connect server.  The 
file wasn't found.  I don't know what the install process used.

BTW, if you only have DB2 Connect and not DB2/UDB with it, would you still have 
a sample DB to install?  I'm thinking that the sample DB is for DB2/UDB type 
testing and if you are not licensed for DB2/UDB, could you still create it and 
what could it be used for?

Tom Duerbusch
THD Consulting


>>> "Dazzo, Matt"  2/28/2011 9:59 AM >>>
I'm a zVM/Linux newbie looking for some help with DB2 Connect Server on Linux. I 
have it installed and I am working on the post-installation tasks, trying to 
create the sample database, but it seems the command 'db2sampl' is nowhere to be 
found. According to the not-so-good DB2 install book it should be in 
$HOME/sqllib/bin, but it's not there. I searched the entire DB2 install path and 
there is no db2sampl. Has anyone else run into this?

The db2val tool ran successfully and I can start the db manager.



Matt Dazzo
Sr. MVS Systems Admin
Publishers Clearing House
516-944-4816
mda...@pch.com 




Re: Gotcha's

2011-02-16 Thread Tom Duerbusch
To some extent these may or may not be "gotcha's".

Background.
IBM z/890 with 1 IFL with FICON attached dasd.
IFL has 4 GB memory.
Running Oracle 10g with OEM.
Also a variety of NFS servers, FTP servers, Samba servers, VSE Virtual Tape 
servers, DB2/UDB test machines, Web servers, DB2 Connect servers and some basic 
Linux machines.
Most of the servers are underutilized/idle.
We have 4 Oracle servers: a production one, a test one (same as production with 
data refreshed from production), a development one (table definitions may change 
compared to the production one) and my DBA one (where I test things out before 
I impact the other images).

Based on the documentation from Oracle, in my case Suse 10 SP2 with Oracle 
10g R2, there may be some RPMs that are needed, plus changes to some parms 
(such as sysctl.conf, limits.conf, login, profile.local).  It is much 
easier if your first Oracle installs mirror the IBM Redbooks.  Of course that 
won't be the latest Oracle, nor the latest Linux, but then you see the process 
and the result.  If you train by using the latest Linux with the latest Oracle, 
you will have months of fun .
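As a hedged illustration of the kind of parm changes meant here, the values below are generic starting points of the sort the Oracle install guides of that era suggested, not anything taken from this post; size them for your own guest:

```shell
# Candidate /etc/sysctl.conf additions (illustrative values only):
#   kernel.shmmax = 536870912      # max shared memory segment, >= planned SGA
#   kernel.sem    = 250 32000 100 128
#   fs.file-max   = 65536
# Candidate /etc/security/limits.conf additions for the oracle user:
#   oracle  soft  nofile  4096
#   oracle  hard  nofile  65536
# Check what the running kernel currently allows before changing anything:
cat /proc/sys/kernel/shmmax
```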

The production server has 300 users defined, and normally 95 active during 1st 
shift.
The database size is 7 GB.
The production database normally runs 20-25% of the z/890 IFL.

OEM (Oracle Enterprise Manager). 
Like most products, it doesn't understand virtualization.  The maximum CPU line 
on its graphs is bogus.  At times, when some batch-type process is running, it 
shows we are using 3.5 to 4 CPUs.  Instance disk I/O shows upwards of 40,000 per 
second, while my monitor shows perhaps 500 per second.

I use the graphs to give me a relative idea of what is happening.  When another 
machine drives the IFL to 100%, OEM shows we are having problems.  However, 
with the CP share of 1000 vs 100 for others, there is no real contention and 
users don't have any complaints.

Ooops.  Almost forgot.  I trade memory savings for a higher I/O rate.  This 
image is 750 MB.  (The other Oracle images are 600 MB.)  To get the image size 
down that small, I had to make the SGA about the smallest it can be with OEM 
running: 200 MB (140 MB on the other images).

The gotcha is that when you install Oracle (I think it required a 1 GB machine 
to install), it sets the memory parms based on that size.  You have to go back 
and adjust them for your size.  And then I had to change the PGA size also.

If you really need performance, the trade-off is that you lose easy 
administration: raw file access against native dasd spread across controllers, 
and a pain if you need more space.  I didn't need that type of performance, so my 
tablespaces are in an LVM pool.  Need more space?  Add a pack to the LVM pool.  
No changes to Oracle.
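The "add a pack to the LVM pool" step might look like this sketch; the volume group, logical volume name, and DASD device below are made-up examples, and it is wrapped in a function so nothing runs until invoked on a real system:

```shell
#!/bin/sh
# Sketch of growing an LVM pool with a newly attached minidisk.
grow_pool() {
    dev="$1"
    pvcreate "$dev"                        # initialize the new minidisk
    vgextend oravg "$dev"                  # add it to the volume group
    lvextend -L +6G /dev/oravg/tablespace  # grow the logical volume
    resize2fs /dev/oravg/tablespace        # grow the filesystem to match
}
# Usage on a freshly attached, formatted device:
#   grow_pool /dev/dasdz1
echo "grow_pool ready"
```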

Oracle assumes that you are on crap hardware.  It wants to put copies of its 
log files on separate disks.  I had a small fight with it about that.

We didn't have FCP tape drives attached, so I do two types of backups.
1.  I cycle Oracle and do a flash copy each week.  I DDR the images to escon 
3590s for disaster backups.
2.  A copy of Oracle is exported and the logs are copied to an NFS server.  The 
files on the NFS server are tar'ed and put on a 3590 tape.  (I can reload via 
tar any/all files back to the NFS server.)

The documentation, even when discussing zSeries processors, has requirements 
that are PC in nature, i.e. cheap CPU, cheap memory, crap I/O.  However, if 
you are seriously going to drive a couple IFL engines, they are good starting 
points.

Of course, not having a good VM performance monitor is a serious gotcha if you 
are doing heavy loads or where performance is critical.  We had a developer 
issue a query from hell which tied up the server for hours, but Oracle 
continued to service the other queries in a timely fashion.  Not hard when you 
are only using 25% of the processor, normally.

You might need a startup/shutdown script to activate/deactivate lsnrctl, the 
database, and dbconsole.

That's all I can think of for now.
We have been running for 4 years now.  No failures.  No problems.  No 
maintenance added.  Just runs.

But then, never had a disaster recovery attempt.  Never had to reload a single 
table from backup.  I have restored the database from backup but didn't need to 
roll forward.

Tom Duerbusch
THD Consulting




>>> "Davey, Clair"  2/15/2011 10:25 AM >>>
I would like to hear some of your 'gotcha's when running Oracle
databases on Linux on Zeries.  You can send them offline if you would
like.
 

Re: SUSE 11 SP 1, Oracle 10g Install - Network Config Assistant Fails

2011-02-04 Thread Tom Duerbusch
FYI

I installed Oracle 10g R2 on Suse 10 with a 1GB virtual sized machine.
I did that with all 4 of my Oracle machines.

I don't think that Suse 11 would take more storage.

On my installs, there were some Oracle patches that needed to be applied, and 
one had to be applied during the Oracle install.  This was at a point where the 
Oracle install failed; you applied the patch and restarted the Oracle install 
from that point.  Did you have to do the same thing under Suse 11?



Tom Duerbusch
THD Consulting

>>> David Stuart  2/4/2011 3:14 PM >>>
Mark, 

That will take an 'outage', as I'm running in an LPAR (yeah, I know!), and I 
may have to POR after adjusting memory for that LPAR.  I don't remember, right 
now. 


Dave 






Dave Stuart
Prin. Info. Systems Support Analyst
County of Ventura, CA
805-662-6731
david.stu...@ventura.org

>>> "Mark Post"  2/4/2011 1:11 PM >>>
>>> On 2/4/2011 at 04:01 PM, David Stuart  wrote: 
> It's running with 1GB. 

Just to get through the install I would try more, say 1.5 or 2G.


Mark Post



OT: Fedora 14 for IBM System z 64bit official release

2011-01-26 Thread Tom Duerbusch
This leads me to a question that I'm mildly interested in.

If it took so long for Fedora to have a 64-bit flavor, why would anyone use it?
Is there a different market for Fedora on the mainframe than for Redhat or Suse?
What does Fedora do that can't be done with Redhat or Suse which gets timely 
upgrades?

Back in the Suse 7/8 time frame, Redhat seemed to be skipping support for every 
other release of Oracle.  I don't recall if they skipped support for every 
release of other products as well.  That made my decision very easy.  I'm going 
with the distribution that offers consistent and timely support.

So, why would anyone use a distribution that is years behind in support?

Inquiring minds want to know...

Tom Duerbusch
THD Consulting

>>> Neale Ferguson  1/25/2011 8:27 PM >>>
You are correct. 1G allows the VNC method to be used and all the customization 
that comes with it. Tks


On 1/25/11 8:08 PM, "Karsten Hopp"  wrote:

This looks like you're doing a text installation, where you have only a limited 
set of configuration options. If you weren't offered the choice between text 
and VNC install methods, you'll need to assign more memory to this guest and 
try again; I was doing my test installs with 1G.
That's only required for the installation, the memory requirements of the 
installed system
can be much smaller, depending on what you intend to use it for.



Re: mt_st and 3590 compression

2011-01-21 Thread Tom Duerbusch
Thanks Mark

The documentation showed that the mt_st RPM needed to be installed.  But their 
examples continued to use the mt command.  Oh well.  That is solved and 
documented.

Thanks

Tom Duerbusch
THD Consulting

>>> Mark Post  1/21/2011 12:37 PM >>>
>>> On 1/21/2011 at 01:26 PM, Tom Duerbusch  wrote: 

> I'm doing something wrong...imagine that...but what?

# rpm -qlp mt_st-0.9b-97.1.50.s390x.rpm | grep bin/
/usr/bin/mtst
/usr/sbin/stinit


It seems the command is actually mtst, not mt.
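So with the renamed binary, the command from the original post would presumably become the following (device name as in the original post; the rename avoids a collision with the mt that ships in GNU cpio):

```shell
# With the SLES mt_st package installed, call mtst instead of mt:
#   mtst -f /dev/ntibm0 compression
# (mt-st's compression operation also accepts an explicit 0/1 argument
# to turn hardware compression off or on.)
command -v mtst || echo "mtst not installed on this box"
```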


Mark Post



mt_st and 3590 compression

2011-01-21 Thread Tom Duerbusch
I asked this back in 2009 and the response was to install the mt_st RPM.

The problem was, how to enable 3590 tape hardware compression from Linux. Now, 
I'm on SUSE 11 SP 1 for this run.

Well, I install the mt_st RPM.
When I try to do:

linux72:/VSEVTAPE/archive # mt -f /dev/ntibm0 compression 
mt: invalid argument `compression' for `tape operation' 

The current Device Drivers, Features, and Commands on SUSE Linux Enterprise 
Server 11 SP1 manual only says to install the mt_st RPM. It doesn't say I have 
to do anything else.

I tried setting the device offline and then online. mt still failed.

linux72:/VSEVTAPE/archive # mt -V 
mt (GNU cpio 2.9) 

This result is the same as before I installed the RPM. Shouldn't the version be 
something else? 
When I go back into Yast, it does show the mt_st package is installed.

I'm doing something wrong...imagine that...but what?

Thanks

Tom Duerbusch
THD Consulting



Re: Advice on setting up minidisks

2011-01-14 Thread Tom Duerbusch
At first, when Mark suggested only -9 and -27 disks, I disagreed with him.
After a day, I started agreeing with him.
Now I'm back to disagreeing with him.  
I guess it all depends

First, you have to go back to hardware.  You can only create 256 drives out of 
any RAID array.  On the DS6800/DS8100, if you are using 72 GB drives and you 
only define mod-3s, you can't use all the space available.  And this gets worse 
as you go to the 145 GB drives, the 300 GB drives, the 500 GB drives and the 1 
TB drives.  You just have to have some/many large 3390s in order to use all the 
space.

Mark is pretty much correct that 3 3390-3s vs 1 3390-9 are the same.  If they 
are on the same RAID array there isn't much performance improvement, since you 
can only do 1 I/O per device at a time unless you use PAV (and paid for that 
feature).

Consider that a mod-27 will be on a single RAID array.  Divide it into 3 
3390-9s and you can put each one on a different array.  Now you can truly do 
multiple I/Os at the same time.  

In my case, I do have some larger Linux guests, but I do have a bunch of small 
ones.  Really they could be on a 3390-1, but, eventually, some of them, due to 
software installs, tend to grow and I need more space.  Their data, however, I 
either have as LVM (yep, larger capacity drives work well here), or NFS mounted 
space.  So my root drive is usually a 3390-3.  HOME and other data directories 
are on LVM or NFS space.

For example, my Oracle servers use LVM for local Oracle tablespace.  However, 
it backs up to a NFS server.  That NFS server is an easy place for me to backup 
all the data from multiple machines to tape.  The Oracle server is physically 
backed up weekly with FlashCopy and that flashed image is then backed up to 
tape as a physical image.  This is used for disaster recovery purposes.

My point is that small drives have their purpose.  But I also wouldn't tie 
together a dozen mod-3 drives if I had the option to tie together a lesser 
number of mod-9 drives (or higher capacity).  

If you need I/O performance, make sure you spread your data across several 
arrays.  This may require you to do smaller drives.

I can sure tell the difference when I copy a GB file when from/to are on the 
same array vs from/to being on different arrays.

BTW, I'm not sure why someone would do this, but you can create mod-10, mod-11, 
or mod-xx drives in the IBM dasd subsystems.  Any size is really available.  But 
you may face issues of "standards".

Just another opinion

Tom Duerbusch
THD Consulting

>>> Ron Foster at Baldor-IS  1/13/2011 1:41 PM >>>
Hello list,

This may have been discussed before...

Way back in deep dark ancient history, we used the Redbook to get
started with Linux under z/VM.  As a result, we carved up our storage
subsystem in to a bunch of mod3 drives.  We put a few mod 9 drives in
the mix.

We added drives to a guest in standard chunks.  That is, when storage was
needed by a Linux system, we added a mod9 or mod3 to it.

When that Shark went off lease and we moved to a DS8000, we pretty well
kept the same philosophy.  Only we added a bunch more mod3 and mod 9 drives.

We are a SAP shop and any large databases reside in DB2 on z/OS.  There
are a few large file systems on 3 or 4 of our Linux systems, but for the
most part the drives attached to a Linux system go something like this.
A boot drive.  One to several mod3 drives for swapping (the appropriate
ones have vdisks).  One to several mod3 or mod9 drives for the SAP code
and local files.

We are moving our production drives.  We have finally gotten our
production Linux systems to where about half do very little swapping.

We do not have dirmaint, so we keep up with disk allocations with the
user directory and a spreadsheet.

Now time has come to migrate to another storage system.  I was wondering
what other folks do.

1. Do they have a whole bunch of mod9 and mod3 drives that they allocate
to their guests?

2. Do they take mod27 drives (someone at SHARE warned me about taking
mod54 drives) and use mdisk to carve them up into something smaller.

Any input would be appreciated.

Thanks,
Ron Foster



Re: How many virtual servers per IFL?

2010-12-06 Thread Tom Duerbusch
I will shotgun some of them...
 
1.  Disaster recovery is much easier on the mainframe.  In effect, no matter 
what hardware is replaced with what hardware, it is all the same.  With PC-type 
servers, the hardware, and hence the software drivers, are constantly changing.  
This may force you to reinstall the software instead of just restoring it.
 
2.  Our I/O subsystem.  Mainframes with ficon/FCP can drive (per IBM 
documentation) hundreds of thousands of I/Os per second.  If you only 
need a few hundred I/Os per second, well, that is within PC ranges.
 
3.  Licensing is a two-edged sword.  Putting 5 copies of Oracle on an IFL, you 
only pay for one copy.  However, if you have many one-copy products, you end 
up needing more engines on the IFL, which (if you get charged by the engine) 
causes those product charges to increase.
 
4.  Disk is disk.  It costs the same whether your DS8100/DS6800 is configured 
for CKD or SCSI disk.
 
5.  Mainframe memory is more expensive, but it is more effectively used.  When 
an application states that it needs 4 GB to run, I start around 500 MB and 
increase it when needed.
 
6.  When a server application needs more resources, many times you have to go 
out and buy a newer, bigger server.  When a mainframe server needs more 
resources, you may have options to rob other servers.  Also for larger shops, 
Capacity on Demand.
 
7.  Green.  There is an application on z10s and above, that will show you your 
footprint and the incremental footprint for additional loads.  I seem to recall 
something about you can plug in data from servers you are migrating from, to 
show the incremental decrease in the footprint of the datacenter.  There hasn't 
been much chatter about this on the listservs so I don't know how well this has 
been received.
 
8.  Internal network speed.  If a function requires the use of several servers 
and they are network attached, things are slowed up by the network.  No such 
problem with Hypersockets or Guest LANs/VSWITCH (under VM), which can use large 
packets also.
 
9.  We don't, but we should, have performance tools.  You buy one for the LPAR 
and you know what is going on, rather than buying one per server.  You still 
might need specialized performance tools on some servers.  Oracle OEM to 
measure internal Oracle performance, for example.
 
10.  The serious problem with PC servers is context switching.  They're a dog at 
it.  Mainframes are great at this, as CICS transactions really drive it.  If your 
load tends towards transactional instead of batch (data mining), PC-type 
servers were not designed for it.  I assume that RS6000 and Sun type servers 
are pretty good at context switching, but I have no direct knowledge of this.
 
 
Back when Linux started hitting mainframes and IFLs were announced, there was 
discussion of 100 images per engine.   A lot of the servers at that time were 
routers, DNS, Samba, NFS and some web.  Now I seem to hear 10-20 real workloads 
per engine.
 
BTW, there was/is an MES upgrade from one box to another.  In the case of the 
MES upgrade from a z/890 (our box) to a z10 (hopefully/maybe ours), the license 
for the IFLs transfers.  Which means that we would not have to pay for the 
Linux side again.  And the new IFLs are faster per engine than the older IFLs.  
That is no longer a cost on the mainframe that you still have on the other 
server platforms.
 
Now that I think of it, I may be thinking of the MES upgrade that pulled cards 
(and your license and CPUID) from one box and installed them on the newer box.  
I'm now thinking that the IFL engine transfer will happen with any upgrade to a 
new box.  I've been looking at the MES upgrade option for so long that I have 
MES on the mind .
 
Tom Duerbusch
THD Consulting
 


>>> John Cousins  12/6/2010 11:07 AM >>>
Here we go again!
Without success, we've been trying to get the IT department here to adopt 
z/Linux since 2003!

Our zVM licence has been recently cancelled, and I have just had a request from 
our Enterprise Architects for some costing for z/Linux as they need to compare 
server virtualisation costs with VMware!

One problem in trying to get a cost per virtual server was always estimating 
how many servers an IFL will support. We had 13 SuSE servers defined in a z800 
IFL, but as they were hardly used we couldn't measure a thing!

So are there any rules of thumb out there on how many production virtual 
servers would run on a Z10 IFL? Obviously it will depend on server utilisation, 
guess that will need to be estimated as well?

Another question is where the bulk of the savings come from. From my 
investigations over the years, other success stories suggest most savings come 
from software licensing, e.g. Oracle, Tivoli, etc., but also from networking 
infrastructure through the use of virtual switches. Are there any other areas 
that provide benefits? 

Any ideas or constructive suggestions would be gratefully

Re: Memory Allocation

2010-12-03 Thread Tom Duerbusch
It really depends on how easy or hard it is, in your shop, to change the size of 
the LPAR.
 
When it is easy, then I make the memory allocation fairly small, say 2 GB and 
grow it when it is proved that I need more memory.  However, I wouldn't ever 
try Websphere with anything under 4 GB.  This is under the theory that the 
first application takes all the memory it can and will leave nothing for the 
next application.
 
I have a 4 GB LPAR for z/VM 5.2 and Linux machines.
What I currently have running:
 
I have 18 Linux images running
 
1  production Oracle
1  test Oracle
1  development Oracle
1  DB2 Connect production
1  DB2 Connect test
4  Samba servers   (SLES 8)
2  NFS servers
2  FTP servers (that have the iso images of Linux and Oracle mounted for 
installation purposes)
3  VSE Virtual Tape servers
2  Samba Servers (SLES 11) that we will be migrating to once testing is 
completed (these will also take over the VSE LANRES disk hosting function we 
are currently using)
 
None of these are heavy hitters.
Production Oracle has 200 users defined and 40 currently logged on.
DB2 Connect production has 22 clients at this time.
 
The VM system only pages when an image is brought up.  
All the linux images use vdisk for swap space.
 
The production DB2 Connect image:
Defined as 250 MB virtual.
Defined on a single 3390-9.
Has been running for 162 days;
linux69:~ # swapon -s
Filename        Type        Size    Used    Priority
/dev/dasdb1 partition   162468  271250
/dev/dasdc1 partition   81228   45224   60
Most of the swap space used was from me going into Yast.
 
And as VMSTAT shows, we are not actively swapping:
 
linux69:~ # vmstat 10 10
procs ---memory-- ---swap-- -io -system-- -cpu--
 r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id wa st
 1  0  47936   7680  22460 12444000 21510  4  1 94  0  1
 0  0  47936   7560  22468 12443200 014 2631  165  4  1 70  0 25
 0  0  47936   7560  22476 12442400 0 3 3130  179  4  1 80  0 15
 0  0  47936   8092  22432 12446800 018  351  213  4  1 95  0  0
 0  0  47936   8032  22448 12445200 014  328  203  3  1 95  0  0
 0  0  47936   8032  22456 1200 012 5486  188  0  1 99  0  0
 0  0  47936   8032  22472 12442800 026 6104  189  4  1 89  0  6
 0  0  47936   8032  22496 12440400 014 6267  189  4  1 88  0  7
 0  0  47936   7972  22520 12438000 014 4529  194  4  1 86  0  9
 0  0  47936   7972  22536 12436400 012  996  202  4  1 95  0  1
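As a cross-check (my addition, not from the original post), the same "are we actively swapping?" question can be answered from /proc/vmstat without watching vmstat scroll. A minimal sketch:

```shell
#!/bin/sh
# Count pages swapped in/out over a short interval; a nonzero delta means
# the guest is actively swapping, not just holding stale swap blocks.
read_swap_pages() {
    awk '/^pswpin|^pswpout/ {s += $2} END {print s + 0}' /proc/vmstat
}
before=$(read_swap_pages)
sleep 2
after=$(read_swap_pages)
echo "swap pages moved in 2s: $((after - before))"
```

A steady zero here matches the vmstat picture above: swap space is allocated but not being actively exercised.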

So, your initial plans for z/VM plus a single DB2 Connect server could be run 
in an LPAR of 512 MB.
However, since SUSE (with OSA connections) seems to want about 750 MB for 
installation, that would cause a lot of paging when you do a Linux install.  On 
a MP3000 with 1 GB and 9 VSE systems, the 750 MB install requirement did cause 
a lot of paging, but everything still ran fairly well.  
 
My current processor is a z/890 with a ficon attached DS6800.  So, yes, I do 
trade the very cheap ficon I/Os for the more expensive memory.  If we were 
driving the DS6800 harder, then I would add more real memory to the mix to 
drive down the I/O rate (paging/swapping reduction along with more memory for 
caching).
 
So the question goes back to: what else are you planning (or playing) for in 
the future, and how often can you reconfigure the LPAR to add more memory?
 
Tom Duerbusch
THD Consulting
 

>>> "Dazzo, Matt"  12/2/2010 8:45 AM >>>
We are entering the world of zvm/linux with a z10bc-2098 n04 and 16gb total 
memory. I am trying to decide what to allocate to zvm/linux as a starting 
point. Our initial thoughts and a recommendation from our VAR was 6gb. The 
first application will be DB2 Conn Server and not sure what's after that. I'd 
like to find out how much memory other shops have allocated and what 
applications they support. Is our initial 6gb a decent starting point? I know 
the IBM standard answer 'it depends' but I am looking for some clarity and 
guidelines.

Thanks
Matt

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Reducing Linux virtual machine size

2010-07-23 Thread Tom Duerbusch
Hi Ray.
 
I tried your script on some of my images.  Works fine, except when Oracle is 
involved.
 
linux62:~ # ps -eo pmem | awk '{pmem += $1};END {print "pmem ="pmem"%"}';
pmem =1347.1%
I do have a lot of swap blocks allocated.  This is due to a batch type run, 
that is run off hours.  During the day, when users are on, we swap very little.
 
So if this does include swap pages, I don't think the script would give me what 
I need, during normal processing.  Do you agree?  Or am I off track here?
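For what it's worth (my aside, not part of the thread): ps reports %MEM per process, so pages shared between processes (an Oracle SGA, shared libraries) are counted once for every process that maps them, which is how the sum can exceed 100% even without counting swap. A rough cross-check against /proc/meminfo:

```shell
#!/bin/sh
# Sum of per-process %MEM (which double-counts shared pages) versus
# actual non-cache usage computed from /proc/meminfo.
pmem_sum=$(ps -eo pmem= | awk '{s += $1} END {print s}')
awk -v pmem="$pmem_sum" '
    /^MemTotal:/ {mt = $2}
    /^MemFree:/  {mf = $2}
    /^Buffers:/  {bu = $2}
    /^Cached:/   {ca = $2}
    END {printf "ps pmem sum: %s%%   in-use (minus cache): %d%%\n",
                pmem, (mt - mf - bu - ca) * 100 / mt}' /proc/meminfo
```

On an Oracle guest the first number will dwarf the second for exactly the shared-SGA reason above.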
 
Thanks
 
Tom Duerbusch
THD Consulting

>>> "Mrohs, Ray"  7/23/2010 1:45 PM >>>
Start up all your Linux procs and then run this little script.

#! /bin/sh
ps -eo pmem | awk '{pmem += $1}; END {print "pmem = "pmem"%"}';

It will give you a ballpark percentage of current memory utilization.
I tuned some Apache/ftp servers down to 100M with no ill effects.

Ray Mrohs
U.S. Department of Justice
202-307-6896



  

> -Original Message-
> From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On 
> Behalf Of Bruce Furber
> Sent: Friday, July 23, 2010 11:01 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Reducing Linux virtual machine size
> 
> Can someone recommend a how to procedure for  monitoring a 
> zLinux machine to determine how much to reduce a machines 
> virtual memory? 
> 
> Getting permission to schedule time to log a machines off is 
> very difficult so I have to be confident I have it right.  
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO 
> LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
> 
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: Strange problems adding network adapter no 2 (eth1) in SLES11 SP1

2010-07-15 Thread Tom Duerbusch
When the thoughts went to a firewall problem, I don't recall it being discussed 
that SuSEfirewall2 is enabled by default in SLES 11 SP1.  I think only network 
firewalls have been discussed.

That caught me yesterday.

Try
SuSEfirewall2 status
to see if it is running.

SuSEfirewall2 stop
to stop it.

Then test again.


Tom Duerbusch
THD Consulting

Sent via BlackBerry by AT&T

-Original Message-
From: Alan Altmark 
Sender: Linux on 390 Port 
Date: Thu, 15 Jul 2010 08:34:38 
To: 
Reply-To: Linux on 390 Port 
Subject: Re: Strange problems adding network adapter no 2 (eth1) in SLES11 SP1

On Thursday, 07/15/2010 at 04:18 EDT, Agblad Tore 
wrote:
> I checked 'routes', only one row, and now I also have moved the SLES11
SP1
> machine into the same
> subnet where the SLES10 SP2 is ( that works fine with three NICs, all
possible
> to login via)
> And no change.

On a VSWITCH there is no point in having more than one virtual NIC on the
same subnet (LAN segment).  All you're doing is creating more work for
Linux.  Get rid of eth1 and eth2.

I mean, it's not like you can have an isolated vNIC failure or
accidentally unplug it!

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SLES 11 SP 1 Bogus swap disk

2010-07-09 Thread Tom Duerbusch
I would agree that it wasn't a big deal to change, once I figured out what was 
happening.
 
I would like to suggest, one of those great "notes" in the Deployment guide.  
They are real eye catchers when you print out the manual.  For me, just another 
thing I skip over when I view online (I'm showing my age now).
 
Perhaps page 109 under 6.13.1 Partitioning (Overview).
Something to let z/Linux users know about this change and how to bypass it.
 
I assume that this also occurs with LPAR installs.  
 
With LPAR installs, we don't get to change the storage size, quickly.  In my 
experience, the LPAR is created with the resources that it would need in 
production.
 
Sooo
 
Would a 2 GB LPAR create a disk-based swap partition of 1 GB off of my (in this 
case) 3390-3 disk?  Not much space left to install Linux.
 
The novice installer that you are trying to help, then either needs to:
 
1.  Get a bigger disk.
2.  Get another disk and move some mount points to that disk.
3.  Trim the number of packages or patterns to reduce the dasd requirements.
 
That would sure give the novice installer some experience!  
 
FYI
Once I recovered all my disk and tried the default set of packages, I was still 
some 60 MB short.  I deleted several patterns, only keeping:
 

Base System
32 bit runtime
Help and Support
Minimal System
(yast did inform me that some packages were needed and would be installed 
either way).
 
I have room so I can install other packages when necessary.  i.e.  when I find 
out that something I wanted, wasn't installed.  (ooops, my bad)
 
It may be a bad habit of mine, but I try to do the initial install on a single 
pack, as other packs, defined later, will just be part of a large LVM.  
 
The following is a snip of my installation instructions concerning fixing the 
bogus swap disk:
 
select dasda1 (swap)
  delete
  Yes (really delete)
select dasda1 (ext3)
  delete
  Yes (really delete)
select dasda  (note: this is not dasda1)
  add
  New Partition Size
    Max
  next
  Format Partition
    File System: Ext3
  Mount partition
    Mount point: /
  Finish
select dasdb1
  edit
  Format Partition
    File System: Swap
  Mount partition
    Mount Point: swap
    Fstab Options:
      Device Path
      Swap Priority: 50
  OK
  Finish
select dasdc1
  edit
  Format Partition
    File System: Swap
  Mount partition
    Mount Point: swap
    Fstab Options:
      Device Path
      Swap Priority: 60
  OK
  Finish
Accept
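For reference, a sketch (my addition, using the device names and priorities from the dialog above, not copied from an actual system) of the /etc/fstab swap entries this produces:

```
/dev/dasdb1   swap   swap   pri=50   0 0
/dev/dasdc1   swap   swap   pri=60   0 0
```

With unequal priorities the kernel fills the higher-priority device (pri=60) first and falls back to the other; equal priorities would interleave across both.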
 
Tom Duerbusch
THD Consulting

 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


SLES 11 SP 1 Bogus swap disk

2010-07-09 Thread Tom Duerbusch
I'm installing SLES 11 SP 1 under z/VM 5.2 on an IBM z/890.
 
During the installation process, a swap disk is created on my dasda
drive automatically by the install process.  I have my own swap disks
(vdisks) allocated and I don't want/need this swap disk.
 
I can go into partitioning during the install and delete this swap disk
(created as dasda1 with my linux system on dasda2).  That does get rid
of the swap disk and the linux system would then be moved to dasda1. 
OK.
 
However, and here is the complaint.  I still lost the disk space on
dasda that was allocated to the bogus swap disk on dasda1.  In other
words the disk that I will install on, which is now dasda1, starts on
cylinder 307.
 
And the starting location isn't constant.  If you install under
different virtual storage sizes, the swap disk allocated is 1/2 of the
virtual storage size.  
 
Note:
 
with def stor 512m
Partitioning
 *  Create swap partition /dev/dasda1 (247.50 MB)
 *  Create root partition /dev/dasda2 (2.05 GB) with ext3
 *  Use /dev/dasdb1 as swap
 *  Use /dev/dasdc1 as swap
 
with def stor 768m
 
Partitioning
 *  Create swap partition /dev/dasda1 (372.66 MB)
 *  Create root partition /dev/dasda2 (1.93 GB) with ext3
 *  Use /dev/dasdb1 as swap
 *  Use /dev/dasdc1 as swap
 
To get my disk space back, I have to delete all the partitions for
dasda and add in the dasda1 partition.
 
So, I have a way around it.
But it looks like some PC type default snuck in to create this swap
disk.
 
I have my workaround, but it shouldn't be needed in the first place.
 
Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SUSE 11 SP 1...where?

2010-05-24 Thread Tom Duerbusch
Thanks

Time to stop banging my head into the wall

Tom Duerbusch

>>> Mike Friesenegger  5/24/2010 4:18 PM >>>
Hello Tom,

SLES11 SP1 will be publicly available on June 2nd - 
http://www.novell.com/news/press/novell-announces-suse-linux-enterprise-11-service-pack-1/
 

Mike 

>>> On 5/24/2010 at 03:03 PM, in message
<4bfaa33f028000004...@mail.stlouiscity.com>, Tom Duerbusch
 wrote: 
> Dumb question...
> 
> Where is SUSE 11 SP 1?
> 
> When I go to the Novell download site, I see SUSE 11.
> http://download.novell.com/index.jsp?product_id=&search=Search&families=2658&ve
>  
> rsion=8162&date_range=&date_start=24+May+2010&date_end=24+May+2010&keywords=&sort_
> by=&results_per_page=&x=35&y=5
> 
> When I go to the patch database, I see patches for SUSE 11 but not a set of 
> patches for SUSE 11 SP 1.
> http://download.novell.com/patch/psdb/ 
> 
> Perhaps, SP 1 is only available thru YaST automatic patch update?
> 
> I don't have SUSE 11 installed.  So I was planning on either installing SUSE 
> 11 SP 1, or SUSE 11 with the SP 1 iso file mounted.
> 
> Thanks
> 
> Tom Duerbusch
> THD Consulting
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or 
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



SUSE 11 SP 1...where?

2010-05-24 Thread Tom Duerbusch
Dumb question...

Where is SUSE 11 SP 1?

When I go to the Novell download site, I see SUSE 11.
http://download.novell.com/index.jsp?product_id=&search=Search&families=2658&version=8162&date_range=&date_start=24+May+2010&date_end=24+May+2010&keywords=&sort_by=&results_per_page=&x=35&y=5

When I go to the patch database, I see patches for SUSE 11 but not a set of 
patches for SUSE 11 SP 1.
http://download.novell.com/patch/psdb/

Perhaps, SP 1 is only available thru YaST automatic patch update?

I don't have SUSE 11 installed.  So I was planning on either installing SUSE 11 
SP 1, or SUSE 11 with the SP 1 iso file mounted.

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Announcing Red Hat Enterprise Linux 6 beta

2010-04-26 Thread Tom Duerbusch
While I did initially like IUCV and VCTCA (they seemed simpler), both methods 
are point-to-point on an IP network.  It violates all the standard networking 
laws.

Every time I showed our IP setup to the network people, they were just confused 
and I didn't know why.
But in z/VM 5.2, support was dropped for IUCV and VCTCA on an IP network.  When 
I migrated over to z/VM 5.2, I had lots of VSE and Linux images that had to be 
converted.  But first, I had to learn real IP networking (subnets, broadcast 
addresses, router addresses, etc.).

I also converted over to VSWITCH, which made life so much easier.

Tom Duerbusch
THD Consulting

>>> Mark Post  4/26/2010 3:54 PM >>>
>>> On 4/26/2010 at 04:46 PM, Xose Vazquez Perez  
>>> wrote: 
> On 04/26/2010 10:32 PM, Mark Post wrote:
> 
>>>>> On 4/26/2010 at 04:19 PM, Michael MacIsaac  wrote: 
>>> Ouch!  We use IUCV all the time.  Is there any reason for this?
>> 
>> For the installation process?
> 
> SuSE is not too far: 
> http://en.opensuse.org/Linuxrc#Special_parameters_for_S.2F390_and_zSeries 

Yes.  So?  Who is using this stuff for installation these days (except perhaps 
on Hercules)?  If someone is, on real System z hardware, they're being overly 
masochistic to say the least.  Don't do that.


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: SLES10 - Oracle/Memory Issues (oom-killer)

2009-12-30 Thread Tom Duerbusch
My first guess is that you are out of swap space.
Now why is a different story.

So, either increase the guest machine size for Oracle (do a "q v stor" first to 
see if what you think it should be is actually what it is), or add more 
swap space.

I think there are some conditions where insufficient virtual memory can also 
lead to OOM: something that needs lots of memory and can't tolerate it being 
swapped.  In that case, increase the guest machine size.

Sometimes I've caused that type of problem by editing a large log file.  
Editors like "joe" put everything in memory.  Got a 500 MB log file?  No 
problem...until you run out of swap space.
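A quick way to check both guesses at once (a sketch of mine, not from the thread) is to read swap usage and the commit charge straight from /proc/meminfo:

```shell
#!/bin/sh
# Report swap consumption and how close the commit charge is to the limit;
# Committed_AS running past CommitLimit is a classic precursor to oom-kills.
awk '
    /^SwapTotal:/    {st = $2}
    /^SwapFree:/     {sf = $2}
    /^CommitLimit:/  {cl = $2}
    /^Committed_AS:/ {ca = $2}
    END {
        if (st > 0) printf "swap used: %d%% of %d kB\n", (st - sf) * 100 / st, st
        printf "committed: %d%% of CommitLimit (%d kB)\n", ca * 100 / cl, cl
    }' /proc/meminfo
```

Run against the meminfo excerpt quoted below, this would show Committed_AS (5506584 kB) already well past CommitLimit (4921760 kB), which fits the repeated oom-kills.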

Tom Duerbusch
THD Consulting
Also running with Oracle 10g R2 (4 copies) 

>>> "Rodery, Floyd A Mr CIV US DISA CDB12"  
>>> 12/30/2009 4:48 PM >>>
We've noticed some repetitive error messages in regards to a memory
issue on one of our SLES 10 SP2.  We notice the error below every
morning around the same time, with multiple oom-kills of oracle, perl,
etc.  Anyone have any thoughts as to why this might be happening with
the amount of memory this guest has?  If you have any thoughts or need
anymore information, I would certainly appreciate it.

#MEMINFO (For a reference)

MemTotal:  5138052 kB
MemFree:144308 kB
Buffers:121768 kB
Cached:3717360 kB
SwapCached:  46352 kB
Active:2849880 kB
Inactive:  1590368 kB
HighTotal:   0 kB
HighFree:0 kB
LowTotal:  5138052 kB
LowFree:144308 kB
SwapTotal: 2352736 kB
SwapFree:  2116048 kB
Dirty: 376 kB
Writeback:   0 kB
AnonPages:  584892 kB
Mapped:2159592 kB
Slab:   172056 kB
CommitLimit:   4921760 kB
Committed_AS:  5506584 kB
PageTables: 334304 kB
VmallocTotal: 4289716224 kB
VmallocUsed:  5024 kB
VmallocChunk: 4289711032 kB
HugePages_Total: 0
HugePages_Free:  0
HugePages_Rsvd:  0
Hugepagesize: 2048 kB


#ERROR MESSAGE  (excerpt from /var/log/warn, this error is repeated over
and over, killing several different processes within a couple minutes)

Dec 27 04:37:33 *SERVER NAME* kernel: oracle invoked oom-killer:
gfp_mask=0x201d2, order=0, oomkilladj=0
Dec 27 04:37:34 *SERVER NAME*  kernel: ccd13918 c54c2cf1
8142d938 0061a350 
Dec 27 04:37:34 *SERVER NAME*  kernel:61104620
  00105b52 
Dec 27 04:37:34 *SERVER NAME*  kernel:
 006009c0 0418 
Dec 27 04:37:34 *SERVER NAME*  kernel:0001
0008 000e ccd139c0 
Dec 27 04:37:34 *SERVER NAME*  kernel:004ad158
00105b52 ccd13948 ccd13988 
Dec 27 04:37:34 *SERVER NAME*  kernel: Call Trace:
Dec 27 04:37:34 *SERVER NAME*  kernel: ([<00105b6e>]
dump_stack+0x2aa/0x364)
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<001c3994>]
out_of_memory+0x3b8/0x934
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<001c7216>]
__alloc_pages+0x29a/0x370
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<001ce582>]
do_page_cache_readahead+0x156/0x3a8
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<001bd55c>]
filemap_nopage+0x210/0xa4c
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<001e18be>]
__handle_mm_fault+0x282/0x114c
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<00102080>]
do_dat_exception+0x584/0x858
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<00115d16>]
sysc_return+0x0/0x10
Dec 27 04:37:34 *SERVER NAME*  kernel:  [<8061041c>] 0x8061041c
Dec 27 04:37:34 *SERVER NAME*  kernel: 
Dec 27 04:37:34 *SERVER NAME*  kernel: Mem-info:
Dec 27 04:37:34 *SERVER NAME*  kernel: DMA per-cpu:
Dec 27 04:37:34 *SERVER NAME*  kernel: CPU0: Hot: hi:  186, btch:
31 usd: 159   Cold: hi:   62, btch:  15 usd:  58
Dec 27 04:37:34 *SERVER NAME*  kernel: CPU1: Hot: hi:  186, btch:
31 usd: 156   Cold: hi:   62, btch:  15 usd:  27
Dec 27 04:37:34 *SERVER NAME*  kernel: Normal per-cpu:
Dec 27 04:37:34 *SERVER NAME*  kernel: CPU0: Hot: hi:  186, btch:
31 usd: 151   Cold: hi:   62, btch:  15 usd:  56
Dec 27 04:37:34 *SERVER NAME*  kernel: CPU1: Hot: hi:  186, btch:
31 usd: 165   Cold: hi:   62, btch:  15 usd:  14
Dec 27 04:37:34 *SERVER NAME*  kernel: Free pages:   21488kB (0kB
HighMem)
Dec 27 04:37:34 *SERVER NAME*  kernel: Active:832717 inactive:260329
dirty:0 writeback:0 unstable:0 free:5372 slab:14827 mapped:650
pagetables:14672
2
Dec 27 04:37:34 *SERVER NAME*  kernel: DMA free:16016kB min:3660kB
low:4572kB high:5488kB active:807232kB inactive:896400kB
present:2097152kB pages_
scanned:3937815 all_unreclaimable? yes
Dec 27 04:37:34 *SERVER NAME*  kernel: lowmem_reserve[]: 0 0 3072 3072
Dec 27 04:37:34 *SERVER NAME*  kernel: Normal free:5472kB min:5492kB
low:6864kB high:8236kB a

Re: Getting Started with zLinux

2009-11-30 Thread Tom Duerbusch
Since you are a newbie...

I don't think anyone mentioned this, but there are IBM Redbooks that walk you 
thru the installation process, in mainframe language.

I use the SUSE flavor and the manuals helped a lot.

I assume that there are Redhat Redbooks available.  Others hopefully will 
chime in on those.

The trail you are on has already been blazed.  There are maps (Redbooks), and 
it is becoming a well-traveled highway.  In other words, if you think you are 
blazing a trail, it is time to step back and think about it.  The Redbooks 
pretty much map out the common things.

That includes the major interface (something like PuTTY) into Linux.
When we need to use the VNC server...
But bringing up the KDE desktop...well, if you want to play solitaire...


Tom Duerbusch
THD Consulting


>>> John McKown  11/30/2009 5:36 AM >>>
That really helped me! And it also helped with z/OS UNIX as well.

On Sun, 2009-11-29 at 20:14 -0500, Mike Myers wrote:
> Scott:
>
> I suppose you're right. I am a Linux newbie and am trying to accelerate
> my learning experience. 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Has anyone looked into a "console server"

2009-09-30 Thread Tom Duerbusch
Yep, I use the PROP method also.

I have PROP log to an SFS directory, so I don't have to reaccess the minidisk 
when viewing the file.

Then I have an exec "HCF" which browses the console file (without locking it), 
also with a xedit profile to specify the columns I wish to view (on my MOD 5 
session).

And then various xedit macros to view all messages from a machine, or refresh 
the screen every 5 seconds, etc.

It used to be tied into PROP issuing commands back to the Linux guest, things 
such as shutting down an application and the OS and forcing it off.  But since 
SIGNAL seems to work fine, that code hasn't been maintained.

A program product for VM console management is much better, at a cost.  But I 
get by with a home grown ugly piece of code.

Tom Duerbusch
THD Consulting

>>> Michael MacIsaac  9/30/2009 9:05 AM >>>
Hello list,

Has anyone ever tried to log the 3270 Linux consoles to a central server?
I see on http://freshmeat.net/projects/conserver/ 
  "Conserver provides remote access to serial port consoles and logs all
data to a central host."

It would be nice to rarely need 3270 sessions, but also to be able to get
to any Linux's console logs and search for specific error messages.

I'm hoping someone has already blazed this trail. Thanks.

"Mike MacIsaac"(845) 433-7061

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Moving a Samba directory

2009-09-11 Thread Tom Duerbusch
Thanks Aria

SMBSTATUS is a good thing to know.
I'm not worried about syncing files using RSYNC, as the only time this Samba 
server is updated is during 1st shift...well, so far.

I will be doing the expansion during the 3rd shift.

Tom Duerbusch
THD Consulting

>>> Aria Bamdad  9/11/2009 8:12 AM >>>
When you cycle the samba server, all connections to the shares will be broken, 
but the clients reconnect when they see this happen.  At least for Windows 
clients, this should be transparent unless someone tries to access your 
server at the exact time you are restarting it.

Before you restart your server, you can issue a SMBSTATUS command to see if 
there are any open files or not.  You will see client connections to shares, 
but you want to watch out for open files.  If you recycle the server while 
files are open, those applications with open files will get upset.

You can minimize your samba outage to seconds by using the RSYNC command I
pointed out in an earlier post just before you shutdown your samba server
and immediately after you shutdown the server and before you start it again
in its new  home.  Depending on the size of your file system, this could be
just seconds.  I am not sure how much shorter you can make this outage.

Aria

> -Original Message-
> From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of
> Tom Duerbusch
> Sent: Wednesday, September 09, 2009 4:39 PM
> To: LINUX-390@VM.MARIST.EDU 
> Subject: Re: Moving a Samba directory
>
> Like I said, I knew the answer to this one (i.e. shutdown Samba), but I
> hoping for a quick and dirty way, of doing the conversion with Samba
> still up.
>
> Well, once this is done, I can then add disks to the LVM on the fly.
> Just hate to have to do the scheduling of downtime (users and servers)
> when I know no one is going to be active anyway, just their shares are
> in use.
>
> Right?  When I cycle the server, the users have to reaccess their
> shares?  It's the servers that are accessing the Samba shares, then
> also have to be cycledand those users have to be notified...
>
> Thanks
>
> Tom Duerbusch
> THD Consulting
>
> >>> Justin Payne  9/9/2009 2:41 PM >>>
> If you do not stop the samba service you run a risk of files being
> accessed during the move, and this could lead to corruption of the new
> files. As recommended before, you should be able to copy the bulk of
> the
> data without shutting down the service. With the bulk of the data
> mirrored on the new LVM, the second rsync will be quite quick at
> syncing
> only the changes so downtime will be minimal.
>
> It would be best to plan a brief outage of the samba service to
> complete
> the task you have outlined.
>
> Justin Payne
>
> On 09/09/2009 12:53 PM, Tom Duerbusch wrote:
> > I think I know the answer to this one, but then, I don't know how
> much I don't know.
> >
> > I have a Samba server that runs 24X7.  It is rarely used at night,
> but still has Windows shares active.
> > The /home directory is located off the / directory (dasda1).  It
> needs more space.
> > I've created a LVM with a directory name of /home2.
> > I plan on copying the /home directory to /home2, rename /home to
> /homeold, and rename /home2 to /home.
> > Simple.
> >
> > What is Samba going to think about this?
> > Do I need to cycle Samba, and have all the currently connected users,
> reconnect?
> > Or as long as Samba isn't trying to access a file during this period
> of time, would it care?
> >
> > Part of this is trying to decide how much notification I have to give
> the end users, and there are a couple "servers" that also have Samba
> shares.  I don't know how to reconnect them, other than cycling those
> servers, which, then requires additional notification.
> >
> > On my test system, I moved the Samba /home directory to a LVM setup.
> No problem.  But I didn't have any currently accessed shares at that
> time (poor test plan).
> >
> > Thanks for any input and direction.
> >
> > Tom Duerbusch
> > THD Consulting
> >
> > -
> -
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390
> or visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390 
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
> or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>

Re: Moving a Samba directory

2009-09-11 Thread Tom Duerbusch
Thanks Mark

I can slip this thru and cycle the Samba server, during a known time period 
where no one is actively accessing Samba files.  

Thanks

Tom Duerbusch
THD Consulting

>>> Mark Post  9/9/2009 6:14 PM >>>
>>> On 9/9/2009 at  4:38 PM, Tom Duerbusch  wrote: 
-snip-
> Right?  When I cycle the server, the users have to reaccess their shares?  


I doubt it.  The Windows clients won't know that Samba has been cycled, so they 
will just try to re-establish the connection the next time the user tries to 
use it.  It will likely be fairly transparent to them.


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Moving a Samba directory

2009-09-09 Thread Tom Duerbusch
Like I said, I knew the answer to this one (i.e. shut down Samba), but I was 
hoping for a quick and dirty way of doing the conversion with Samba still up.

Well, once this is done, I can then add disks to the LVM on the fly.
I just hate to have to do the scheduling of downtime (users and servers) when I 
know no one is going to be active anyway, just their shares are in use.

Right?  When I cycle the server, the users have to reaccess their shares?  The 
servers that are accessing the Samba shares then also have to be 
cycled...and those users have to be notified...

Thanks

Tom Duerbusch
THD Consulting

>>> Justin Payne  9/9/2009 2:41 PM >>>
If you do not stop the samba service you run a risk of files being
accessed during the move, and this could lead to corruption of the new
files. As recommended before, you should be able to copy the bulk of the
data without shutting down the service. With the bulk of the data
mirrored on the new LVM, the second rsync will be quite quick at syncing
only the changes so downtime will be minimal.

It would be best to plan a brief outage of the samba service to complete
the task you have outlined.

Justin Payne

On 09/09/2009 12:53 PM, Tom Duerbusch wrote:
> I think I know the answer to this one, but then, I don't know how much I 
> don't know.
>
> I have a Samba server that runs 24X7.  It is rarely used at night, but still 
> has Windows shares active.
> The /home directory is located off the / directory (dasda1).  It needs more 
> space.
> I've created a LVM with a directory name of /home2.
> I plan on copying the /home directory to /home2, rename /home to /homeold, 
> and rename /home2 to /home.
> Simple.
>
> What is Samba going to think about this?
> Do I need to cycle Samba, and have all the currently connected users, 
> reconnect?
> Or as long as Samba isn't trying to access a file during this period of time, 
> would it care?
>
> Part of this is trying to decide how much notification I have to give the end 
> users, and there are a couple "servers" that also have Samba shares.  I don't 
> know how to reconnect them, other than cycling those servers, which, then 
> requires additional notification.
>
> On my test system, I moved the Samba /home directory to a LVM setup.  No 
> problem.  But I didn't have any currently accessed shares at that time (poor 
> test plan).
>
> Thanks for any input and direction.
>
> Tom Duerbusch
> THD Consulting
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Moving a Samba directory

2009-09-09 Thread Tom Duerbusch
I think I know the answer to this one, but then, I don't know how much I don't 
know.

I have a Samba server that runs 24X7.  It is rarely used at night, but still 
has Windows shares active.
The /home directory is located off the / directory (dasda1).  It needs more 
space.
I've created an LVM logical volume mounted as /home2.
I plan on copying /home to /home2, renaming /home to /homeold, and renaming
/home2 to /home.
Simple.
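That copy-and-swap sequence, rehearsed here against a scratch directory (a sketch only; on the real server the copy would run as root against /, and since /home2 is a mounted volume the final rename is really an umount plus a remount at /home, or an /etc/fstab change, rather than a mv):

```shell
# Rehearsal of the /home swap; paths mirror the plan above, but everything
# happens under a throwaway directory so it is safe to run anywhere.
set -e
base=$(mktemp -d)                    # stands in for /
mkdir -p "$base/home" "$base/home2"  # home2 = the new (LVM) area
echo data > "$base/home/file.txt"    # pretend user data

cp -a "$base/home/." "$base/home2/"  # copy contents, keep owners/perms/times
mv "$base/home"  "$base/homeold"     # keep the old copy around
mv "$base/home2" "$base/home"        # new area now answers as /home
```

Before a real cutover, `smbstatus -L` is a quick way to check whether any files on the old /home are still locked by clients.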

What is Samba going to think about this?
Do I need to cycle Samba and have all the currently connected users reconnect?
Or as long as Samba isn't trying to access a file during this period of time, 
would it care?

Part of this is trying to decide how much notification I have to give the end 
users, and there are a couple "servers" that also have Samba shares.  I don't 
know how to reconnect them, other than cycling those servers, which, then 
requires additional notification.  

On my test system, I moved the Samba /home directory to a LVM setup.  No 
problem.  But I didn't have any currently accessed shares at that time (poor 
test plan).

Thanks for any input and direction.

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Unable to logon to Linux under VM

2009-06-23 Thread Tom Duerbusch
You have Oracle running.

Look in the recent archives for "Module is unknown when signing on".

Tom Duerbusch
THD Consulting

>>> Vincent Getgood  6/23/2009 9:03 AM >>>
Hi all,

I'm having problems logging on to my SLES 10 system.

 

I originally had this system running as a LPAR some months ago, before
installing VM.

I have now defined a VM Guest (LINUX00), attached the DASD I had for the
LPAR to the guest, and successfully IPL'd SLES 10.

 

On the Linux guest console, I see: -

 

Welcome to SUSE Linux Enterprise Server 10 SP2 (s390x) - Kernel
2.6.16.60-0.21-default (ttyS0). 

SUZE-Linux-on-Z login:

 

When I try to logon, (as any defined user) I get the following: -

 

Last login: date time on ttyS0

 

Module is unknown


 

Then get thrown back to the logon prompt.

 

Any idea what is happening here?

Vince Getgood

 


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: shutdown/reboot question

2009-06-17 Thread Tom Duerbusch
If you don't have some sort of automation package or console package,
create a CMS user with a PROFILE EXEC that does:

signal shutdown guestname
sleep 10 minutes
force guestname immed
sleep 10 secs
xautolog guestname

And give him the autolog rights to that machine.

He doesn't need to know the password for this machine (which has CP 
authorization to do lots of bad things...)

If you are not on VM, there may be times when the zLinux system crashes, that 
you have to have Operations boot the LPAR.

Tom Duerbusch
THD Consulting

>>> Sue Sivets  6/17/2009 2:31 PM >>>
Is there any way to give a user the ability to shutdown and reboot a
Suse z/linux system without giving him the root password?  In simple
terms, one of my users is asking for the root password so he can reboot
the machine when he needs to because the project he's working on has
crashed and generally caused a lot of problems. I found a couple of
notes from 2007 that were about a shutdown userid on a Redhat system.
Will something like that work on a Suse10 system, or is there a better
way of accomplishing what I need? If it will work, can someone tell me
what I need to do in order to make it work?

I was thinking about using SU, but as far as I know, he would still need
the root password, and then I'm back where I started. Is there a way to
give him some kind of alternate root password  that doesn't open up the
whole  ball of wax?
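For the record, the stock Linux answer to "reboot rights without the root password" is sudo; a sketch only, with "appuser" standing in for the real login:

```shell
# Hypothetical /etc/sudoers entry -- add it with visudo, never by
# editing the file directly:
appuser ALL=(root) NOPASSWD: /sbin/shutdown -r now, /sbin/shutdown -h now
```

The user then runs `sudo /sbin/shutdown -r now` with no password prompt at all (drop the NOPASSWD tag to require his own password); only those two exact command lines are permitted, and root's password is never shared.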

If anyone has any other ideas, I would really like to hear them. I
really don't want to give out the root password if I can avoid it.

Thank you

Sue

--
 Suzanne Sivets
 Systems Programmer
 Innovation Data Processing
 275 Paterson Ave
 Little Falls, NJ 07424-1658
 973-890-7300
 Fax 973-890-7147
 ssiv...@fdrinnovation.com 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: LCS problem Installing SLES 11 in "partition" ( more)

2009-06-17 Thread Tom Duerbusch
How big is the guest machine size?

On SLES10, I need 768 MB to load the ram disk with an OSA connection.  OSA,
from what I remember, needs 32 MB just for buffers.  I don't recall if that
was for each connection or for each of the triplet addresses.  But it was a
lot more than I had when I moved from VCTCA to VSWITCH.

Tom Duerbusch
THD Consulting

>>> Mike At HammockTree  6/17/2009 1:17 PM >>>
Since I didn't get much response from my initial query yesterday, I thought
I'd include a little more information this time.  I'll paste in the log
that I captured when trying to install SLES 11 into a partition.  I tried
both OSA2/LCS and CTCI.  Note that the install program found both devices
correctly at 0400 and 0600, but when I actually tried to use them, I got
errors.  I tried the OSA2/LCS first, then the CTCI.
Any suggestions?
Thanks,
Mike

==  log begins >
Start Installation

1) Start Installation or Update
2) Boot Installed System
3) Start Rescue System

>
1

Choose the source medium.

1) DVD / CD-ROM
2) Network
3) Hard Disk

>
2

Choose the network protocol.

1) FTP
2) HTTP
3) NFS
4) SMB / CIFS (Windows Share)
5) TFTP

>
3
Detecting and loading network drivers
ccwgroup: disagrees about version of symbol struct_module

Choose the network device.

1) IBM parallel CTC Adapter (0.0.0600)
2) IBM OSA2 Adapter (0.0.0400)
>
2   Try OSA2 / LCS  

Please choose the physical medium.

1) Ethernet
2) Token Ring

>
1
ccwgroup: disagrees about version of symbol struct_module

*** failed to load lcs module

*** No network device found.
Load a network module first.

*** No repository found.

Choose the network protocol.

1) FTP
2) HTTP
3) NFS
4) SMB / CIFS (Windows Share)
5) TFTP

3

Choose the network device.

1) IBM parallel CTC Adapter (0.0.0600)
2) IBM OSA2 Adapter (0.0.0400)

>
2   *  Try CTCI **

ccwgroup: disagrees about version of symbol struct_module

*** failed to load ctcm module

*** No network device found.
Load a network module first.

*** No repository found.

Choose the network protocol.

1) FTP
2) HTTP
3) NFS
4) SMB / CIFS (Windows Share)
5) TFTP

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Oracle Enterprise Manager

2009-06-17 Thread Tom Duerbusch
OEM does a lot of stuff, including recording information in tables, for 
historical purposes.  I've seen these "idle" systems use lots of CPU.  
Especially, when they do their cleanup and automatic tuning.

If you want to see the difference, log in via PuTTY with userid oracle (or any
user authorized for Oracle) and run:

emctl stop dbconsole
(this shuts down OEM)

Monitor the guest with your favorite performance product.

emctl start dbconsole
(this starts OEM up again)

On lightly used systems, OEM is the major user of system resources.  If this is
a real concern in your shop, perhaps shutting it down on test systems, or when
testing isn't being done, may be right for you.  A lot of OEM code is Java.

Tom Duerbusch
THD Consulting
(running with 4 Oracle 10g R2 images)

>>> "Kern, Thomas"  6/17/2009 12:25 PM >>>
Our DBA has just installed the Oracle Enterprise Manager 10.2.0.5 on a
test server (SLES9 SP3) on our z/VM 5.3 machine (z890 IFL). This morning
I noticed that from midnight to 11:00 that test server used about 6
times the CPU seconds as an idle production server. PerfTK shows that
SVM using about 3.5% over an hour interval while idle Oracle SVMs use
less than 1% CPU. OEM reports that the agent in the server uses at most
1.5% CPU. It does cause the database to perform other performance
analysis tasks. 
 
Is it reasonable for a Management agent to use 1.5% (or more) of the
system CPU in each server?
 
Has anyone in the VM/Linux community validated/invalidated the CPU
utilization reports from this product?

--
Thomas Kern
301-903-2211 (Office)
301-905-6427 (Mobile) 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: DB2 Connect and Linux on Z

2009-06-10 Thread Tom Duerbusch
I don't think UDB uses zIIP engines.  They use IFL engines.  Performance seems 
to be good, especially over hipersockets.

Tom Duerbusch
--Original Message--
From: David Boyes
Sender: Linux on 390 Port
To: LINUX-390@VM.MARIST.EDU
ReplyTo: Linux on 390 Port
Subject: Re: DB2 Connect and Linux on Z
Sent: Jun 10, 2009 12:25 PM

> We are new to the Linux world and have been developing Linux servers on
> our Z9. We are thinking of setting up a DB2 Connect Linux on a Z server
> with Hypersocket connection to out DB2 database which is on one of our
> zOS lpars.  Does anyone know if the performance is going to be
> compatible to the DB2 Connect running on a solaris server where the
> connection is Ziip eligible.  I have been told that if the DB2 connect
> runs through Hypersockets it is no longer Ziip elegible.  Forgive me if
> I seen to not be making sense.

I'm not sure I understand the question. ZIIPs serve only workload actually 
performed on z/OS, AFAIK (although I have some ideas about how to change 
that...). The only part of a transaction involving a non-z/OS client that would 
ever be ZIIP-eligible is the actual lookup/marshalling of the data done inside 
DB/2 on z/OS. 

If I understand what you're asking, then the two systems (Solaris and Linux on 
Z) attempting to execute the same query via DB2 Connect against a zOS DB2 
server would receive exactly the same treatment; there's nothing special about 
Solaris vs Linux clients in this scenario. That would apply to any 
network-based client, whether using hipersockets or any other transport. 
Workload running directly on z/OS accessing z/OS DB2 would benefit more from 
the ZIIP because the DB2 and z/OS code knows it's there and can exploit it more 
effectively. 

Am I totally misunderstanding you? 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Sent via BlackBerry by AT&T

Re: DB2 on z/VM & SuSE 10.2 "box" sizing - starter values ?

2009-05-19 Thread Tom Duerbusch
We are going into production, tomorrow, with DB2 Connect Server V9.5 fixpack 2. 
 A Cashier system, with about 8 users.

I'm on 160 MB virtual, 1 3390-9 drive, two vdisk swap files (about 10,000 4K 
swap pages used over 3 weeks).
This is on a z/890, with the IFL about 20% busy.  And no VM paging.

Only problem we have had, and can't seem to solve yet, is that we are using
stored procedures on DB2/VSE.  About 1 in 10 transactions has a 20-30 second
startup time.  Not the first, not the last.  Just blindsided.

No box in the transaction path had a CPU problem, an I/O problem, or a paging
problem when we had the poor response time.

Did you have that, and did you solve it?

On the DB2 side, my guess is you have too much virtual storage defined.  You 
should have automatic memory management enabled (by default).  So scaling back 
will also scale back the DB2 cache.  Check the hit ratio.  But I wouldn't try 
to keep the hit ratio at the same level as the other platforms.  If you are on
FICON, you have one hell of a kick-ass I/O subsystem.  Trade some memory for
I/O, within reason.  That gives you more memory for more servers, until you hit
CPU limitations.

BTW, my test UDB servers are 512 MB each.  Only 2 3390-9 packs for data.  Yep, 
the first time thru, the application ends up loading DB2 cache, and then the 
queries fly.  Testing doesn't need a lot of cache.  I don't have Websphere, so 
I can't consider it's memory loads.

Tom Duerbusch
THD Consulting

>>>  5/19/2009 11:13 AM >>>
I'm finally in the process of firing up some "boxes" under VM to replace
some aging AIX critters and I could use some advise as to real memory and
swap allocations for these new guys.

1st victim was a DB2 connect enterprise unlimited server - cut it to
production 2 weeks ago and everything is good. 246900k of memory and 3G of
swap, so far, so good except for a small issue with java versioning v1.5 on
new box, v1.3 on oldest client app - but that is a java issue that would
show up on any platform with the new version.

2nd victim is a DB2 ESE and WebSphere MQ server - can't even fire up the
tools database because the bufferpools don't have enough memory. This is
where the advise is needed since the IBM docs don't have a recommended
starting point for the z/VM environment ( DB2 9.5 fixpack 3b) and I really
don't want to scarf all of the vm resources for this machine when the grand
plan ( insert evil laugh here ) is to put as many AIX and intel linux onto
this platform (Z) as we can and still maintain decent performance.
Current AIX box is a 2-way power3 box with 16G of memory - low utilization
in a development environment.

3rd victim is a WebSphere AppServer box - 6-way power4 box with 16G memory.
Low to moderate utilization in a development environment with a second
instance of WAS for background production use ( the moderate utilization
part - when it is active ).

Current VM lpar has 2 IFL, 9G cstor, 3G estor, z/VM 5.3 and SuSE 10.2
guests


Thanks,
Bruce

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



z10 and SLES 8

2009-05-01 Thread Tom Duerbusch
In one of the Linux session documents (PDF), it says that for the z10,
"System z10 Toleration" for SLES 8 is "NO".

Does that mean that you can't run SLES8 on a z10 box?  At all?  
Or that the z10 runs SLES10, for example, better?

I have a few SLES8 images, more historic in nature.  I wouldn't want to kill 
them off, but I'm also not sure that it would be worth the effort to migrate 
them.  Not that a z10 is in the near future, but you still have to plan....

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: vswitch for everyone

2009-04-21 Thread Tom Duerbusch
I'm not sure about this, but if you don't care who gets authorization for 
VSWITCH, then just automate the process for everyone.

For example, whenever you update the directory and DIRECTXA it, have a little
REXX program also kick off to scan all "USER " cards, take the second word on
that statement, and do a GRANT for that user.  If you use DIRMAINT, do a "DIRM
GET USER DIRECTORY" after updating and execute the same REXX routine.

Also, execute the routine as part of AUTOLOG1 to reissue the grants at IPL time.
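The scan-and-grant step is a one-liner in most languages; here is a shell/awk equivalent of the REXX idea (the sample file contents and the vswitch name VSW1 are made up for the example):

```shell
# Build a tiny sample CP directory, then emit one GRANT per USER card.
cat > user_direct.sample <<'EOF'
USER LINUX60 SECRET 256M 1G G
 MDISK 0191 3390 1 50 VOL001 MR
USER LINUX61 SECRET 256M 1G G
EOF

awk '/^USER /{print "CP SET VSWITCH VSW1 GRANT " $2}' user_direct.sample
# prints:
#   CP SET VSWITCH VSW1 GRANT LINUX60
#   CP SET VSWITCH VSW1 GRANT LINUX61
```

On z/VM the emitted lines would then be issued as CP commands, and reissued from AUTOLOG1 at IPL.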

My point is that it is an easy enough workaround.  And would be much quicker 
than getting IBM to channel resources into a request that is needed by few (or 
1) licenses.

In my case, I do grants for vswitch for 20 machines at a time.  Right now, I'm 
done for LINUX6X and LINUX7X machines.  (I'm only up to LINUX69 which is a test 
DB2 Connect Server Edition machine).

But then,  I also do the Linux installs, and I'm prompted for vswitch 
authorization by my documentation.  Just like the CREATE DIRECTORY in SFS, the 
GRANT AUTH in SFS for the install software, updating the user direct, etc.  All 
just a part of the initial machine configuration.  I agree that one less step 
is one less step to forget/mess up... but that's what we get paid for.

Tom Duerbusch
THD Consulting

>>> RPN01  4/21/2009 12:58 PM >>>
The problem is that not everyone wants to purchase an external security
manager simply to get this feature. We have no need for an ESM, as, if one
of our four users get out of line, we can just walk over to their cube and
whack them with a board. I'm not buying an ESM to un-secure a single entity
in an already closed box. That makes no sense at all.

No humans use the box directly, and we grant the vSwitch to just short of
every virtual machine that uses the box. To have to go through the grant
process, no matter if it is in the CP directory, in System Config, or in
Autolog1, for every new machine that gets created, and to open the door for
human error by forgetting to grant this resource, which needs to be
available for everyone on the system, seems at best to be an oversight on
IBM's part.

ESMs are not the solution to this problem. Sorry.

--
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
"In theory, theory and practice are the same, but
 in practice, theory and practice are different."




On 4/17/09 12:20 AM, "Alan Altmark"  wrote:

> On Thursday, 04/16/2009 at 04:15 EDT, Marcy Cortes
>  wrote:
>
>> Apparently its *someone*'s requirement, but I agree, and optional open
>> Vswitch would be handy indeed.
>
> I should probably get the requirement answered "Available".  As Rob
> mentioned, your ESM is capable of doing this. With RACF, RALTER VMLAN
> SYSTEM.VSWITCH1 UACC(READ).
>
> I have no plan (or much willingness) to spend money to duplicate in CP
> what can be done with an ESM.
>
> Alan Altmark
> z/VM Development
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Oracle Special Interest Group

2009-03-31 Thread Tom Duerbusch
I don't see much of a difference, that is, other than the wording.

From the Oracle SIG:

Oracle on Linux on System z Workshop:
In conjunction with the Oracle zSeries SIG 2009 Conference, IBM is
hosting a special 'no-charge' workshop for conference attendees who are
considering consolidating and moving Oracle DB workloads to a Linux
environment using z/VM software and System z hardware technology. This
2.5 day workshop is designed for system administrators, DBA's, and
planners considering a move of Oracle to Linux on System z, and offers
practical, hands-on experience through a combination of lectures and
labs to install and configure SUSE Linux. A hands-on customization of
z/VM will be performed to establish an environment for a Linux system.
The workshop includes the installation and customization of Oracle 10g,
and will include a discussion of performance tooling for Linux and z/VM
that can be used to monitor and tune the system environment. 

From the IBM Announcement:

This 2.5 day class will be of interest to attendees who are considering a
move of Oracle to Linux on System z machines. Topics will be presented to
familiarize the attendee with System z hardware technologies. Major
software components will be showcased through lecture and hands-on labs.
Attendees will have the opportunity to perform customization activities on
key z/VM system files. Linux will be covered through lectures and labs that
include the install and customization of a SLES 10 64-bit system. The
install and customization of Oracle 10g and the use of DBCA will be
highlighted. Performance is always a topic of keen interest. The class will
cover performance tools available at the z/VM, Linux and Oracle levels.
Recommendations will be presented on maximizing Oracle performance on Linux
on System z. Other topics will cover various options available to gather
and process performance data. The class will wrap up with a discussion on
tools and services available for sizing workloads being considered for a
move to Linux on System z. Part of this discussion will also encompass
server consolidation to System z machines in addition to a discussion on
ROI and TCO factors.

Both specify:

1.  From IBM.
2.  No charge.
3.  2.5 days
4.  Moving Oracle to Linux on System z machines.
5.  Install and configure SUSE Linux.
6.  Install and customization of Oracle 10g.
7.  Performance tools.

Sorry, I don't see the difference.

What am I over looking?

Tom Duerbusch
THD Consulting


>>> Barton Robinson  3/31/2009 12:09 PM >>>
I think if you look at the agenda of both, there is quite a
difference.

Tom Duerbusch wrote:
> This SIG seems to be the same education available, free, at other
> locations also.
> 
> See the announcement at the bottom of this email:
> 
> Tom Duerbusch
> THD Consulting
> 
>>>> Barton Robinson  3/30/2009 7:20 PM >>>
> Any one that is interested in running Oracle on "Z", the conference will
> be held in 3 weeks: "http://www.zseriesoraclesig.org/";.
> 
> As part of the very low cost, there are added workshops included in the
> price. If interested in Linux and z/VM Performance as well as Oracle,
> this would be a perfect opportunity.
> 
>
--
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO
LINUX-390
> or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
> 
> ==========================================================
> 
> Announcing
> 
> Customizing Linux and the Mainframe for Oracle DB Applications
> Available in the U.S. during Second Quarter 2009
> 
> The Washington Systems Center will offer a new "no-charge" hands-on
> workshop in 2Q 2009.
> 
> Customizing Linux and the Mainframe for Oracle DB Applications (LXOR6)
> 
>    WORKSHOP   | REQUESTED LOCATION |   SCHEDULED DATE
>    LXOR6      | Chicago

Re: Oracle Special Interest Group

2009-03-31 Thread Tom Duerbusch
This SIG seems to be the same education available, free, at other
locations also.

See the announcement at the bottom of this email:

Tom Duerbusch
THD Consulting

>>> Barton Robinson  3/30/2009 7:20 PM >>>
Any one that is interested in running Oracle on "Z", the conference will
be held in 3 weeks: "http://www.zseriesoraclesig.org/";.

As part of the very low cost, there are added workshops included in the
price. If interested in Linux and z/VM Performance as well as Oracle,
this would be a perfect opportunity.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

==

   
  
Announcing

Customizing Linux and the Mainframe for Oracle DB Applications
Available in the U.S. during Second Quarter 2009






The Washington Systems Center will offer a new "no-charge" hands-on
workshop in 2Q 2009.


Customizing Linux and the Mainframe for  Oracle DB Applications
(LXOR6)

   WORKSHOP | REQUESTED LOCATION | SCHEDULED DATE
   LXOR6    | Chicago            | May 5 - 7
            | New York City      | May 19 - 21
            | Gaithersburg       | June 9 - 11







(The course description is located at the end of this announcement and can
also be found at the following IBM Field Education website:
http://www.ibm.com/servers/eserver/zseries/education/topgun/enrollment)





This workshop is provided on a "NO FEE" basis. Travel, food and
lodging
expenses are the responsibility of individuals attending the class.

To enroll in workshops follow the steps listed below:
1.  Enrollment is By Invitation Only - Only IBM representatives are
    allowed to enroll their invited customers in this workshop.
2.  This is a STANDBY course and as such all students are initially
    placed in a PENDED status until all steps below have been completed.
3.  IBMers are encouraged to attend with their customers and must
    register using the same procedures.

    Go to the WEBSITE:
    http://www.ibm.com/servers/eserver/zseries/education/topgun/enrollment

    ● Click the radio button for "Wildfire Workshops" ... you
      automatically move to the next panel.
    ● Click the radio button for the workshop in which you wish to
      enroll your customer ... you automatically move to the next panel.
    ● At the top of the page the sessions will be displayed.  Click on
      the location and date for the workshop of your choice.
    ● Complete all enrollee information.
    ● Provide a valid Siebel Opportunity Number.  Note: without this
      information, the student will not be confirmed for attendance.
    ● Click on the "Submit" button when all information has been provided.

4.  Because space is limited, enroll up to two customers per account.
5.  Qualified customers will be removed from PENDED status and receive a
    follow up note that confirms their enrollment.


If you have questions, please contact  Judith A Ramage
(rama...@us.ibm.com)
or call 301.240.3966 / TL 372.





Workshop Description & Details

Course Description

This 2.5 day class will be of interest to attendees who are considering a
move of Oracle to Linux on System z machines. Topics will be presented to
familiarize the attendee with System z hardware technologies. Major
software components will be showcased through lecture and hands-on labs.
Attendees will have the opportunity to perform customization activities on
key z/VM system files. Linux will be covered through lectures and labs that
include the install and customization of a SLES 10 64-bit system. The
install and customization of Oracle 10g and the use of DBCA will be
highlighted. Performance is always a topic of keen interest. The class
will cover perform

Re: LVM Striping and RAID for performance?

2009-03-27 Thread Tom Duerbusch
If you really want performance, the LVM should be striped across RAID arrays.
And make sure you use multiple controllers.

(i.e. in the DS6800 there are two controllers, each has half the cache.  If you 
do LVM across both controllers, you have the potential of using all the cache, 
not just half.).
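As a concrete sketch of that layout (device and volume names are invented, and this needs root plus real DASD, so treat it as illustration rather than something to paste):

```shell
# One PV behind each controller, then a 2-way striped logical volume:
pvcreate /dev/dasdb1 /dev/dasdc1
vgcreate datavg /dev/dasdb1 /dev/dasdc1
lvcreate -n datalv -L 10G -i 2 -I 64 datavg   # -i stripe count, -I stripe size (KiB)
mkfs.ext3 /dev/datavg/datalv
```

With one physical volume behind each controller, the 2-way stripe alternates extents across both controllers' caches.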

Tom Duerbusch
THD Consulting

>>> Fred Schmidt  3/27/2009 12:17 AM >>>
Would somebody please clarify - is LVM striping still of benefit to
performance if your disk is RAID? 

If so, why? 

Regards, 
Fred Schmidt
NT Government, Australia


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Old IBM Mainframe - Still Useful?

2009-03-23 Thread Tom Duerbusch
The electrical bill isn't worth the box.

You can buy a used MP3000 with integrated dasd for a few thousand.  And it uses 
way less power.
Of course, the licensing issues still remain.

Multiprise 3000--Dollars
--
7060-H30$3,800
7060-H50$4,300 
7060-H70$7,000

Tom Duerbusch
THD Consulting

>>> Andrew Wiley  3/22/2009 4:14 PM >>>
I'm trying to research the usefulness of an older IBM mainframe in a
computer science class. The mainframe in question is an IBM 9672 RB6, which,
as I understand it, was first sold in 1998. So it's reasonably old.
Would this machine be able to run a few VM's of linux, or is it too old, or
would it depend on the specs? I haven't been able to find much information,
although I've been googling it for a few days. Sorry if this is a broad
question; any help you can offer me would be greatly appreciated.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Why does one need to mkinitrd/zipl ? (WAS : Broken logical volume group)

2009-02-18 Thread Tom Duerbusch
I don't believe that there is a performance problem with thousands of
volumes that are there.  The performance problem is with the thousands
of volumes that are not there.

In my case, when we brought in an IBM DS6800, I defined to the IOCP,
that there were 150 volumes on each of the 8 controllers.  However, I
only define volumes on the DS6800 when I need them.  Some are mod-3s,
some are mod-9s, even some 3380s.

I'm running VM, so my guests only see the volumes I give the guest
machine, but in the native VSE world, when dynamic device sensing came
into play, and VSE was in a LPAR, VSE IPLs took forever.  Seemed that
each, non-existant device, waited for the missing interrupt handler to
get tripped, before it would sense the next volume.  I seem to recall
that this was also a problem in VM.  

My guess is that Linux would have the same problem.  So you wouldn't
want Linux to go out sensing all devices when you are in an LPAR world.

Tom Duerbusch
THD Consulting

>>> Ivan Warren  2/18/2009 2:03 PM >>>
Mark Post wrote:
> Basically, some historical, performance, and data integrity reasons.
>
>   
Ok,

I'm starting to get a better picture now (as to the how & why). As I 
understand it, the bases are :

- An LPAR may have thousands of volumes allocated to it, not all of them
being for Linux use.
- Volumes not intended for linux use may be accidentally stepped on..
- IPL time ensuing from having many thousand devices (Lordy.. Issuing
Sense-ID, RDC and read of cyl 0 track 1 - on 1 devices shouldn't
take but a couple seconds anyway.. What's wrong here.. all modern OSes
do this in a routine manner.. why wouldn't Linux be any different ?
(ok.. maybe I should check with the folks in Böblingen) - is taking an
inordinate amount of time.

Right..

However (you knew that was coming, right?).. And besides the
'historical' portion, which is.. well.. historical..

In an LPAR environment where you (may) have thousands of volumes, with
maybe a few percent for Linux use (which is probably a bad idea to start
with - but I digress).. why doesn't mkinitrd *ONLY* take care of the IPL
volume (or volumes, if you're on LVM) - as the initrd was designed - and
then, depending on what config is on the root fs, enable this or that
volume once control has been passed to the root-fs-hosted init (and the
pivotroot() has been done)?  The list of configured volumes (those that
are designated for Linux use) is bound to be available on the root fs
anyway - so why not do it *then* (and not while the init on the initrd
is in control)?  IIRC, /proc has enough control over dasd_eckd (which is
really the one at issue here, I think) to ask it to vary this or that
volume online *even* after the initialization phase.

Then of course, you have the VM environment.. which is going (by design)
to be especially tailored to your environment.. Adding a volume should
only be a matter of adding a minidisk to the user's directory (or maybe
a link if the fs is designed to be RO.. through the directory or maybe a
CMS profile..) and modifying the fstab.  Having to alter the initrd
seems to me like an unnecessary and superfluous step.

What I am saying is that, eventually, you're going to wind up with
people running zipl/mkinitrd no matter what.. but there is *some*
(minor) danger to this!  (Actually, of course, it's going to be
mkinitrd/zipl - in that order.)  If one step succeeds and the next
fails, the system can't be booted *AT ALL* (basically - try running
mkinitrd without running zipl!).  Not to mention that, if you consider
this a guardrail, it fails to accomplish its goal once you have
everyone routinely doing it (the safety-door 'symptom': the door being
blocked open with a sanding block because people are tired of swiping
their badge to open a door they go through 100 times a day - also look
at the infamous Vista UAC :P).
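
Ivan's ordering point above can be captured in a small guard: run mkinitrd, and only run zipl if it succeeds, so the two are treated as one step. This is an editor's sketch, not anything from the thread; the two functions are echo stand-ins for the real commands (remove them on a real system), and the guard does not help if zipl itself fails mid-write.

```shell
#!/bin/sh
# Sketch: treat mkinitrd + zipl as one step. A failed mkinitrd stops the
# chain, so zipl is never run against a half-built initrd. The functions
# below are stand-ins for the real commands - delete them on a real system.
mkinitrd() { echo "rebuilding initrd"; }       # stand-in
zipl()     { echo "rewriting IPL record"; }    # stand-in

if mkinitrd && zipl; then
    echo "boot record updated"
else
    echo "update failed - check initrd and IPL record before rebooting" >&2
fi
```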

Again, this is *not*.. (I repeat.. *NOT*) a rant.. just throwing in
ideas of how I think things could (or maybe.. should?) be done - on a
mainframe.. with a mainframe population used to having complete control
of their environment (be it the whole thing, an LPAR or a virtual
machine) - in a Linux environment.

And again.. Mark.. thanks again for being here - and, in this
particular case, taking the time to answer my inquiries!  And of course,
I'm just waiting to be proven wrong and change my mind about the whole
darn thing!

--Ivan


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Module is unknown when signing on

2009-01-30 Thread Tom Duerbusch
You're right... It is only on the Oracle machines.

And /var/log/messages shows:


n[2636]: PAM unable to dlopen(/lib/security/pam_limits.so)
n[2636]: PAM [error: /lib/security/pam_limits.so: wrong ELF class: ELFCLASS32]
n[2636]: PAM adding faulty module: /lib/security/pam_limits.so
n[2636]: Module is unknown


And then looking at /etc/pam.d/login:

linux63:/etc/pam.d # cat login
#%PAM-1.0
auth     required   pam_securetty.so
auth     include    common-auth
auth     required   pam_nologin.so
account  include    common-account
password include    common-password
session  include    common-session
session  required   pam_lastlog.so nowtmp
session  required   pam_resmgr.so
session  optional   pam_mail.so standard
session  required   /lib/security/pam_limits.so
session  required   pam_limits.so
linux63:/etc/pam.d #
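
The "wrong ELF class: ELFCLASS32" error above means a 31/32-bit module was handed to a 64-bit login process. A quick way to check a file's ELF class is to read byte 5 (EI_CLASS) of its header; this sketch is an editor's addition, and /bin/sh is only a stand-in for the module actually at issue, /lib/security/pam_limits.so.

```shell
#!/bin/sh
# Sketch: report the ELF class of a binary or shared object by reading
# byte 5 (EI_CLASS) of the ELF header: 01 = 32-bit, 02 = 64-bit.
# On the affected system you would check /lib/security/pam_limits.so;
# /bin/sh is used here only as a file that exists everywhere.
elf_class() {
    c=$(od -An -tx1 -j4 -N1 "$1" | tr -d ' ')
    case "$c" in
        01) echo ELF32 ;;
        02) echo ELF64 ;;
        *)  echo "not an ELF file" ;;
    esac
}
elf_class /bin/sh
```

If the class it prints doesn't match the kernel's word size (s390x wants ELF64), PAM will refuse to dlopen the module exactly as in the log above.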

Anyway, that is what was in the Oracle 10g documentation.

Tom Duerbusch
THD Consulting


>>> Mauro Souza  1/30/2009 2:53 PM >>>
I had this issue after tuning a system to install Oracle. I believe it's
related to PAM modules. If you're able to log on via SSH, you can get rid of
the changes in limits.conf.
In /var/log/messages maybe there's some messages telling you what kind of
module is missing...

Mauro
http://mauro.limeiratem.com - registered Linux User: 294521
Scripture is both history, and a love letter from God.


On Fri, Jan 30, 2009 at 6:45 PM, Tom Duerbusch
wrote:

> On some of my images (SLES 10 SP 2), when I try to logon from the console,
> I get:
>
> Last login: Wed Jan 21 16:20:38 CST 2009 from nss-lt-0001.stlouiscity.com on
> pts/0
> You have new mail.
>
> Module is unknown
>
>
> Welcome to SUSE Linux Enterprise Server 10 SP2 (s390x) - Kernel
> 2.6.16.60-0.21-default (ttyS0).
>
>
> linux62 login:
>
>
> The "Module is unknown" seems to be a problem.  I immediately get signed
> off and I go back to the logon prompt.
>
> I haven't applied any maintenance other than SP2.  And it doesn't
> happen on all images.  What could cause that?
>
> Thanks
>
> Tom Duerbusch
> THD Consulting
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Duplicate IP question

2009-01-30 Thread Tom Duerbusch
Under SLES 10 SP 2, if you bring up an image that has the same IP address as a 
currently running system, is Linux smart enough not to flood the network with 
packets from the duplicate IP address?

It does figure something out, saying that it cannot register the IP address, 
but does it keep trying and flood the network and/or mess up communications 
with the box that was running initially?

In the old days, when a Windows box came online with a duplicate IP address, 
the entire network would suffer.  

What happens in the Linux world?
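
One way to check before bringing an image up is to probe whether the address already answers. This is an editor's sketch: a plain ping is a weak stand-in, and on a real host `arping -D` (duplicate address detection) against the target interface is the proper probe; 127.0.0.1 is a placeholder for the candidate address.

```shell
#!/bin/sh
# Sketch: probe whether an address already answers before configuring it
# on a new image. ping is a portable stand-in; arping -D on the target
# interface is the real duplicate-address-detection tool.
probe_addr() {
    if ping -c 1 -W 1 "$1" >/dev/null 2>&1; then
        echo "address $1 already in use"
    else
        echo "address $1 appears free"
    fi
}
probe_addr 127.0.0.1    # placeholder address
```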

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Module is unknown when signing on

2009-01-30 Thread Tom Duerbusch
On some of my images (SLES 10 SP 2), when I try to logon from the console, I 
get:

Last login: Wed Jan 21 16:20:38 CST 2009 from nss-lt-0001.stlouiscity.com on pts
/0  
You have new mail.  

Module is unknown   


Welcome to SUSE Linux Enterprise Server 10 SP2 (s390x) - Kernel
2.6.16.60-0.21-default (ttyS0).


linux62 login:  


The "Module is unknown" seems to be a problem.  I immediately get signed off 
and I go back to the logon prompt.  

I haven't applied any maintenance other than SP2.  And it doesn't happen on 
all images.  What could cause that?

Thanks

Tom Duerbusch
THD Consulting

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Minidisks and DASD model 3/9

2009-01-30 Thread Tom Duerbusch
Among all the other thoughts that were brought up... it depends on what
performance you want.

For example, I have some test Linux systems that live on a single 3390-9.
All software and data are on it.  Just something easy to play with.
However, it is limited, as the volume can only be on a single RAID array,
and every RAID array can end up bottlenecking on I/O, if there is
sufficient I/O.  Your other RAID arrays (I have 8 arrays) may be idle.

From a performance side, Oracle seems to like to manage its own disks,
with each disk, of course, on a separate RAID array.  However, from a
management side, putting the disks in an LVM and giving it to Oracle
saves people costs.  So, how busy is the Oracle?  If you are not
pounding it, then use LVM and save people costs.

That type of consideration also applies to any application that has
large pools of disk space: Oracle, DB2, Samba, NFS, FTP servers, etc.

RAID arrays really eliminated most of the DASD performance tuning we
used to do at the volume level.  But you still may need to do it at the
RAID array level.

You may want to try this experiment:

Have Linux format 3 volumes: two volumes on the same RAID array, and
the third on a different RAID array.  When Linux is formatting all three
volumes at the same time, it shows a "completion graph".  The single
volume is formatted much faster than the other two.  I know, twice as
many I/Os should take longer, plus you are doing head seeks to boot.
But it shows there is a difference in performance when you are on an
array that is highly used vs. lightly used.
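
The experiment above can be sketched as a dry run. This is an editor's addition with hypothetical device numbers: 0.0.0151 and 0.0.0152 share one RAID array, 0.0.0161 sits on another. The sketch only prints the format commands; on a real system you would run them concurrently (with `&` and `wait`) and compare elapsed times. dasdfmt is destructive, so double-check device numbers first.

```shell
#!/bin/sh
# Dry-run sketch of the three-volume format experiment. Device numbers
# are hypothetical. gen_cmds only prints the commands it would run.
gen_cmds() {
    for dev in 0.0.0151 0.0.0152 0.0.0161; do
        echo "dasdfmt -b 4096 -y -f /dev/disk/by-path/ccw-$dev"
    done
}
gen_cmds
```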

Tom Duerbusch
THD Consulting

>>> עופר ברוך  1/29/2009 6:18 AM >>>
Hi all,

 

We just bought some dasd storage specifically for z/VM and for the
z/Linux
underneath.

This is the time to decide what model to use: 3/9/27… Originally I thought
"the bigger the better".

We are using DIRMAINT to manage dasd space and now I am not sure what is the
best approach.

Here are my thoughts about large models:

1.   I am concerned about fragmentation. Using model-3 I could just play
around with full disks, not worrying about fragmentation (meaning – adding
storage to a Linux guest will add a full model-3).

2.   I am concerned about IOSQ time. I know z/VM supports static PAV but
that is just not comfortable… We don't have HyperPAV yet…

3.   Using big minidisks will make cloning difficult (I must have the
same big gaps available for cloning).

Here are my thoughts about model-3:

1.   Many Many device addresses…

Am I missing something? The more I think about it, the more I believe that
model-3 is the correct answer…

 

Can you please help me out here?

 

Thanks,

Offer Baruch.


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Good editor for under the 3270 console interface

2009-01-28 Thread Tom Duerbusch
OK, I got around the problem of "ed" not being on the rescue system.

Use "ed" from the installed system (duh):

/mnt/bin/ed filename

So far in my testing using the rescue system, "ed" is the right tool.

Thanks

Tom Duerbusch
THD Consulting

>>> Dave Jones  1/28/2009 1:39 PM >>>
Tom, for most 'simple' file editing that needs to be done from the 3270 console 
(i.e., before the network is available), I've found that the 'ed' editor works well. 
Its command set is small and easy for me to remember.

Jack Woehr wrote:
> Tom Duerbusch wrote:
>> What is a good line mode editor?
>>
> ex is the traditional Unix line mode editor, written for just such
> environments.
> It's the dark side of vi :)
>
> man ex
>
> --
> Jack J. Woehr# I run for public office from time to time.
> It's like
> http://www.well.com/~jax # working out at the gym, you sweat a lot,
> don't get
> http://www.softwoehr.com # anywhere, and you fall asleep easily afterwards.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
DJ

V/Soft
   z/VM and mainframe Linux expertise, training,
   consulting, and software development
www.vsoft-software.com 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Good editor for under the 3270 console interface

2009-01-28 Thread Tom Duerbusch
However, it is not on the rescue system.

Tom Duerbusch
THD Consulting

>>> Dave Jones  1/28/2009 1:39 PM >>>
Tom, for most 'simple' file editing that needs to be done from the 3270 console 
(i.e., before the network is available), I've found that the 'ed' editor works well. 
Its command set is small and easy for me to remember.

Jack Woehr wrote:
> Tom Duerbusch wrote:
>> What is a good line mode editor?
>>
> ex is the traditional Unix line mode editor, written for just such
> environments.
> It's the dark side of vi :)
>
> man ex
>
> --
> Jack J. Woehr# I run for public office from time to time.
> It's like
> http://www.well.com/~jax # working out at the gym, you sweat a lot,
> don't get
> http://www.softwoehr.com # anywhere, and you fall asleep easily afterwards.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
DJ

V/Soft
   z/VM and mainframe Linux expertise, training,
   consulting, and software development
www.vsoft-software.com 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Good editor for under the 3270 console interface

2009-01-28 Thread Tom Duerbusch
man ex
gives me the man page for vi,
ex being a short way to start up vi in "ex mode".

And when I try "ex", yep, that's vi.

Tom Duerbusch
THD Consulting

>>> Jack Woehr  1/28/2009 12:58 PM >>>
Tom Duerbusch wrote:
> What is a good line mode editor?
>
ex is the traditional Unix line mode editor, written for just such
environments.
It's the dark side of vi :)

man ex

--
Jack J. Woehr# I run for public office from time to time. It's like
http://www.well.com/~jax # working out at the gym, you sweat a lot, don't get
http://www.softwoehr.com # anywhere, and you fall asleep easily afterwards.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Clone SUSE 10

2009-01-28 Thread Tom Duerbusch
There are two places where the disk needs to be changed:

/etc/fstab
/etc/zipl.conf

The zipl.conf is used for booting and on the way up, the fstab is used.

linux63:/etc # cat /etc/fstab
/dev/disk/by-path/ccw-0.0.0150-part1  /  ext3  acl,user_xattr 1 1   <
/dev/dasdd1  swap  swap  pri=60 0 0
/dev/dasde1  swap  swap  pri=50 0 0
proc  /proc  proc  defaults 0 0
sysfs  /sys  sysfs  noauto 0 0
debugfs  /sys/kernel/debug  debugfs  noauto 0 0
devpts  /dev/pts  devpts  mode=0620,gid=5 0 0
/dev/system/system  /home  ext3  acl,user_xattr 1 2
linux59.stlouiscity.com:/home/nfs/oracle102  /oracle/images  nfs  defaults 0 0
linux59.stlouiscity.com:/home/nfs/oracletmp  /oracletmp  nfs  defaults 0 0
linux59.stlouiscity.com:/home/nfs/oraclebkup  /oraclebkup  nfs  hard,bg,suid,rsize=32768,wsize=32768 0 0
linux63:/etc # cat /etc/zipl.conf
# Modified by YaST2. Last modification on Tue Aug 12 21:25:38 UTC 2008
[defaultboot]
defaultmenu = menu


:menu
default = 1
prompt = 1
target = /boot/zipl
timeout = 10
1 = ipl
2 = Failsafe

###Don't change this comment - YaST2 identifier: Original name: ipl###
[ipl]
image = /boot/image
target = /boot/zipl
ramdisk = /boot/initrd,0x100
parameters = "root=/dev/disk/by-path/ccw-0.0.0150-part1   TERM=dumb"   <===

###Don't change this comment - YaST2 identifier: Original name: failsafe###
[Failsafe]
image = /boot/image-2.6.16.60-0.21-default
target = /boot/zipl
ramdisk = /boot/initrd-2.6.16.60-0.21-default,0x100
parameters = "root=/dev/disk/by-path/ccw-0.0.50-part1   TERM=dumb 3"   <
linux63:/etc #

And, as I now see, the second set under [Failsafe] is wrong.  It should be 
0.0.150, not 0.0.50.

I use Yast (partitioning) to change the fstab file.
I manually must change the zipl.conf file.
mkinitrd
zipl

boot and pray



Tom Duerbusch
THD Consulting


>>> "Jones, Russell"  1/28/2009 12:15 PM >>>
My zipl.conf now looks like this:
*
ANPLNX02:/etc# cat zipl.conf
# Modified by YaST2. Last modification on Tue Nov 25 11:10:32 CST 2008
[defaultboot]
defaultmenu = menu

###Don't change this comment - YaST2 identifier: Original name: linux###
[SLES_10_SP1]
image = /boot/image-2.6.16.54-0.2.10-default
target = /boot/zipl
ramdisk = /boot/initrd-2.6.16.54-0.2.10-default,0x100
parameters = "root=/dev/disk/by-path/ccw-0.0.322b-part1
dasd=322b,322f TERM=dumb"


:menu
default = 1
prompt = 1
target = /boot/zipl
timeout = 10
1 = SLES_10_SP1
2 = ipl

###Don't change this comment - YaST2 identifier: Original name: ipl###
[ipl]
image = /boot/image
target = /boot/zipl
ramdisk = /boot/initrd,0x100
parameters = "root=/dev/disk/by-path/ccw-0.0.322b-part1
dasd=322b,322f TERM=dumb"
**
And my mkinitrd output looks like this:
**
ANPLNX02:/etc# mkinitrd
Root device:/dev/disk/by-path/ccw-0.0.322b-part1 (mounted on / as
ext3)
Module list:jbd ext3 dasd_eckd_mod (xennet xenblk)

Kernel image:   /boot/image-2.6.16.54-0.2.10-default
Initrd image:   /boot/initrd-2.6.16.54-0.2.10-default
Shared libs:lib64/ld-2.4.so lib64/libacl.so.1.1.0
lib64/libattr.so.1.1.0 lib64/libblkid.so.1.0 li
b64/libc-2.4.so lib64/libcom_err.so.2.1 lib64/libdl-2.4.so
lib64/libext2fs.so.2.4 lib64/libhistory.so
.5.1 lib64/libncurses.so.5.5 lib64/libpthread-2.4.so
lib64/libreadline.so.5.1 lib64/librt-2.4.so lib6
4/libuuid.so.1.2 lib64/libnss_files-2.4.so lib64/libnss_files.so.2
lib64/libgcc_s.so.1 
Driver modules: dasd_mod dasd_eckd_mod 
DASDs:   0.0.322a(ECKD) 0.0.322e(ECKD) 0.0.322b(ECKD)
Filesystem modules: jbd ext3 
Including:  initramfs fsck.ext3
16371 blocks

initrd updated, zipl needs to update the IPL record before IPL!


I changed my dasd fstab setting to use device path, ipl'ed my base
system, shut it down, copied the volume using mvs dss copy, ipl'ed the
base system and mounted the cloned file system on the base system. I
then did a chroot to the new file system. 

The mkinitrd command looks good except for the "DASDs" section. It is
getting those addresses from what I have mounted on the base system and
I want it to use 322b and 322f.

Russell Jones 
ANPAC
System Programmer
rjo...@anpac.com 


-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of
Mark Post
Sent: Wednesday, January 28, 2009 10:12 AM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: Clone SUSE 10

>>> On 1/28/2009 at  9:58 AM, "Jones, Russell" 
wrote: 
> Should the parameters in /etc/zipl.conf look like:
> 
>  root=/dev/dasda1 ro noinitrd

No.  If you do, your system won't boot.  Just add 'dasd=addr1,addr2" to
what is there now.


Mark Post

Re: Clone SUSE 10

2009-01-28 Thread Tom Duerbusch
Yes, if you are using "device name" in the fstab.

There are other parms available also, and I don't know what they will generate 
in the zipl.conf and fstab files.

I was getting burnt by the defaults changing to "device id", which really 
messes up cloning, disaster recovery, moving to a different raid array, etc.

I like the "device path" option, mostly due to being a VM bigot (since the mid 
'70s).  My boot device is 150.  Data drives are 151 and up.  LVM is the 160 
range.  etc.

If I recall correctly, when you use the "device name", the device name dasda1 
is assigned to the lowest disk drive, dasdb1 to the next lowest, etc.  Add a 
new low drive, say vdisk swap space, and you won't be able to IPL.  A novice 
mistake for sure.  But who hasn't hit it?

Many times, it is just what you get used to.

Tom Duerbusch
THD Consulting

>>> "Jones, Russell"  1/28/2009 8:58 AM >>>
Should the parameters in /etc/zipl.conf look like:

 root=/dev/dasda1 ro noinitrd

Russell Jones 
ANPAC
System Programmer
rjo...@anpac.com 

-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of
Tom Duerbusch
Sent: Tuesday, January 27, 2009 5:08 PM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: Clone SUSE 10

When you installed your SLES 10 system, you took the defaults for FSTAB.
Change your base system:

yast
System
Partitioner, yes
edit each disk
fstab options   
change Device ID to Device Path, ok
ok
apply
quit

Then:

In one session:
cat /etc/fstab

In another session
joe /etc/zipl.conf
modify the parameters = "root=~
may be in multiple places
save
mkinitrd
zipl

Boot to put in effect

Now flashcopy will produce a bootable copy.

Now reclone your system.  
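
The zipl.conf edit in the steps above can be scripted. This sketch is an editor's addition, run against a throwaway copy so it is safe to try; the by-id string and device number 0.0.0150 are illustrative values taken from examples elsewhere in this thread. On a real system you would point sed at /etc/zipl.conf, then run mkinitrd and zipl.

```shell
#!/bin/sh
# Sketch: rewrite the root= parameter in zipl.conf from a by-id device
# name to a by-path device path. Operates on a temp copy of one line.
conf=$(mktemp)
cat > "$conf" <<'EOF'
parameters = "root=/dev/disk/by-id/ccw-STK.028176.0022.2a-part1 TERM=dumb"
EOF
# Replace the by-id root with a by-path root (device 0.0.0150 is illustrative).
sed -i 's|root=/dev/disk/by-id/[^ "]*|root=/dev/disk/by-path/ccw-0.0.0150-part1|' "$conf"
cat "$conf"
```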

Tom Duerbusch
THD Consulting

>>> "Jones, Russell"  1/27/2009 3:54 PM >>>
I do mean root file system. Here is my zipl.conf. I don't see any
device
address in this file. 

***
# Modified by YaST2. Last modification on Tue Nov 25 11:10:32 CST 2008
[defaultboot]
defaultmenu = menu

###Don't change this comment - YaST2 identifier: Original name:
linux###
[SLES_10_SP1]
image = /boot/image-2.6.16.54-0.2.10-default
target = /boot/zipl
ramdisk = /boot/initrd-2.6.16.54-0.2.10-default,0x100
parameters =
"root=/dev/disk/by-id/ccw-STK.028176.0022.2a-part1 TERM=dumb"


:menu
default = 1
prompt = 1
target = /boot/zipl
timeout = 10
1 = SLES_10_SP1
2 = ipl

###Don't change this comment - YaST2 identifier: Original name: ipl###
[ipl]
image = /boot/image
target = /boot/zipl
ramdisk = /boot/initrd,0x100
parameters =
"root=/dev/disk/by-id/ccw-STK.028176.0022.2a-part1  
TERM=dumb"

***



Russell Jones 
ANPAC
System Programmer
rjo...@anpac.com 

-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of
Mark Post
Sent: Tuesday, January 27, 2009 3:43 PM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: Clone SUSE 10

>>> On 1/27/2009 at  4:38 PM, "Jones, Russell"

wrote: 
-snip-
> I want 322b to be my ipl volume and also bring 322f online at
startup.

By IPL volume, do you mean root file system?  (The two are completely
different things.)  What do you have in /etc/zipl.conf?  


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Clone SUSE 10

2009-01-27 Thread Tom Duerbusch
When you installed your SLES 10 system, you took the defaults for FSTAB.
Change your base system:

yast
System
Partitioner, yes
edit each disk
fstab options   
change Device ID to Device Path, ok
ok
apply
quit

Then:

In one session:
cat /etc/fstab

In another session
joe /etc/zipl.conf
modify the parameters = "root=~
may be in multiple places
save
mkinitrd
zipl

Boot to put in effect

Now flashcopy will produce a bootable copy.

Now reclone your system.  

Tom Duerbusch
THD Consulting

>>> "Jones, Russell"  1/27/2009 3:54 PM >>>
I do mean root file system. Here is my zipl.conf. I don't see any
device
address in this file. 

***
# Modified by YaST2. Last modification on Tue Nov 25 11:10:32 CST 2008
[defaultboot]
defaultmenu = menu

###Don't change this comment - YaST2 identifier: Original name:
linux###
[SLES_10_SP1]
image = /boot/image-2.6.16.54-0.2.10-default
target = /boot/zipl
ramdisk = /boot/initrd-2.6.16.54-0.2.10-default,0x100
parameters =
"root=/dev/disk/by-id/ccw-STK.028176.0022.2a-part1 TERM=dumb"


:menu
default = 1
prompt = 1
target = /boot/zipl
timeout = 10
1 = SLES_10_SP1
2 = ipl

###Don't change this comment - YaST2 identifier: Original name: ipl###
[ipl]
image = /boot/image
target = /boot/zipl
ramdisk = /boot/initrd,0x100
parameters =
"root=/dev/disk/by-id/ccw-STK.028176.0022.2a-part1  
TERM=dumb"

***



Russell Jones 
ANPAC
System Programmer
rjo...@anpac.com 

-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of
Mark Post
Sent: Tuesday, January 27, 2009 3:43 PM
To: LINUX-390@VM.MARIST.EDU 
Subject: Re: Clone SUSE 10

>>> On 1/27/2009 at  4:38 PM, "Jones, Russell"

wrote: 
-snip-
> I want 322b to be my ipl volume and also bring 322f online at
startup.

By IPL volume, do you mean root file system?  (The two are completely
different things.)  What do you have in /etc/zipl.conf?  


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Z/Linux CKD DASD migration from one DASD Subsystem to another

2009-01-23 Thread Tom Duerbusch
And if I remember correctly, when Mark joined Novell, we had this little 2 GB 
memory problem in VM.  That would force many large zLinux shops to run 
production in LPARs.  They could still leave test and smaller zLinux 
images under z/VM.  

But the 2 GB line is no longer much of a problem.

Tom Duerbusch
THD Consulting

>>> David Boyes  1/22/2009 3:07 PM >>>
On 1/21/09 5:09 PM, "Mark Post"  wrote:

>>>> On 1/21/2009 at  2:31 PM, David Boyes  wrote:
> -snip-
>> If I haven't said it before, I don't think there's much reason to ever
>> consider LPAR deployment of Linux, but others do disagree with that view.
>> I'm sure there are workloads where it would matter, but I still think the
>> manageability loss dramatically overwhelms any cost advantage from omitting
>> VM.
>
> When I first joined Novell, I was surprised to learn that the world's largest
> implementation was done all in LPARs.  From what I was told, that wasn't
> because of the dollar cost of z/VM, but the overhead.  Given the number of
> processors running, I could (somewhat) understand that, but to me that says
> that "people time is free" is the attitude, and that leads to another whole
> set of problems.

True enough. There's always exceptions, but they are just that: exceptions.
Right tool, right job, and LPAR is only rarely the right tool to make Linux
on Z interesting.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Z/Linux CKD DASD migration from one DASD Subsystem to another

2009-01-21 Thread Tom Duerbusch
Hi David,

You said that if you used dedicated disks, you will now pay for it.
I'm interested in why you would pay for it.

Your procedure works the same with dedicated DASD, except instead of linking 
the disks, you attach the disks.

Now, if you were thinking LPAR, well then, yes, I agree it is much more 
complicated.
Thanks

Tom Duerbusch
THD Consulting

>>> David Boyes  1/21/2009 12:42 PM >>>
On 1/21/09 12:02 PM, "tony...@bellsouth.net"  wrote:

> Hi all,
> I would like to know what ways you have used to migrate z/Linux CKD DASD
> volumes from one DASD subsystem to another?  Thanks.

If you used minidisks (the right way, IMHO) then you:

1) Allocate new minidisks on the new array using a knowable pattern, eg if
you have a 150 on the existing guest, allocate a new minidisk at F150 on the
userid. Do this for all the minidisks on that userid.

2) Shut the guest down. You need to do this to get a good copy.

3) From an appropriately privileged ID (MAINT, etc):

LINK guest 150 150 RR  (you don't need/want write access to this volume)
LINK guest F150 F150 MR(you're going to overwrite this one, so write)

4) DDR the contents of one to the other:

DDR
SYSPRINT CONS
INPUT 150 3390 SCRTCH
OUTPUT F150 3390 SCRTCH
COPY ALL


5) DETACH 150
   DETACH F150

6) Repeat #3 and #4 for all the other minidisks for that userid.

7) Update the CP directory and swap the MDISK definitions for the 150 and
F150 MDISKs (make the old one F150, and the new one 150). Repeat for all
minidisks on that userid. Write the CP directory either by hand or using
your directory manager. If you want, you can just comment the old disks out
in the directory entry in case you need to switch back for some reason.

8) IPL the guest as normal. That id is now running on the new disks.

9) Deallocate the Fxxx disks. If you commented them out in step 7, they are
now free disk space until you overwrite them or reallocate the space.

10) Repeat for all guests.

If you used dedicated volumes, now you pay for it. There is a procedure on
linuxvm.org to do this -- you get to do it the hard way.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: [BULK] Z/Linux CKD DASD migration from one DASD Subsystem to another

2009-01-21 Thread Tom Duerbusch
One thing to consider if you are on SUSE 10: the fstab defaults changed.  And 
that stops you from copying/moving from one disk to another, even on the same 
DASD subsystem.

Go into:

yast
System
Partitioner
edit each disk
FSTAB options
change Device ID to Device Path
ok
ok
apply

Go into /etc/zipl.conf
   modify the parameters "root=~"
   save
mkinitrd
zipl



Or Rich has a RPM, that when loaded and executed will do the same thing.

Tom Duerbusch
THD Consulting

>>>  1/21/2009 11:02 AM >>>
Hi all,
I would like to know what ways you have used to migrate z/Linux CKD DASD 
volumes from one DASD subsystem to another?  Thanks.

--


Respectfully, 

Anthony Mungal 
email: tony...@bellsouth.net 
phone: (561) 504-9212 
fax: (801)607-6787

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390



Re: Encryption on a 7060

2009-01-13 Thread Tom Duerbusch
There are not many holes, but things to consider.  We had a MP3000 H30 also.

1.  It doesn't perform Linux stuff as well as other mainframes.  There is a CPU 
instruction added in newer systems, that made Linux performance much better.  
So, don't take poor performance on the MP3000 as an indication of performance 
on new boxes.  But if you lave the CPU time available, it works.

2.  You have to run SLES 7 or SLES8 (in 31 bit mode).  As these are older 
distros, they may run out of support.  That may affect how auditors view the 
setup.

3.  As 31 bit code disappears, you may not be able to keep up with the Jones 
with respect on where you are sending the files.  I don't know how backleveled 
encryption software goes.  You might be limited to 128 bit encryption instead 
of 2k encryption keys.

4.  I have a GPG server which encripts files for our VSE systems.  It runs in 
96 MBs, with vdisk for swapping.  You shouldn't have problems getting that much 
real memory carved out.
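
The kind of encrypt-for-transmission step discussed here can be sketched with openssl's symmetric mode. This is an editor's sketch with placeholder cipher, passphrase, and filenames; a real setup would more likely use gpg with the receiving host's public key, as the posts above describe.

```shell
#!/bin/sh
# Round-trip sketch: encrypt a file for transmission, then decrypt it back
# to verify. Cipher choice, passphrase handling, and filenames are
# placeholders, not a recommended production configuration.
set -e
f=$(mktemp)
echo "sample transmission payload" > "$f"
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo-only -in "$f" -out "$f.enc"
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-only -in "$f.enc"
rm -f "$f" "$f.enc"
```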

Tom Duerbusch
THD Consulting

>>> "David L. Craig"  1/13/2009 12:00 PM >>>
I curate a museum which includes a uni-CP Multiprise 3000
(7060-H30) with 2 GB in basic mode running VM/ESA 2.2 and
hosting VSE/ESA 2.2 in V=R.  We may be required by auditors
to encrypt files for transmission to other hosts.  I'm
saying it's feasible to install a Linux distribution into
a V=V virtual machine and perform the encryption there,
either with gpg or by using openssl.  We're currently
averaging about 20% CPU utilization.  Can anyone see any
holes in this?  Do current distros still support this
platform or will I need something older, and if so, will
current encryption software work on the older distro?

--

May the LORD God bless you exceedingly abundantly!

Dave Craig

-  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
"'So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.'"

--from _Nightfall_  by Asimov/Silverberg

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


