Re: Ang: Re: DB2: SSD vs more RAM

2010-11-22 Thread Henrik Ahlgren

With a limited budget and spinning disks you often have to choose security only, at the expense of performance. But it's somewhat an illusion, since the disks are so slow that your system doesn't really work: your expiration cycles take days to complete (at least with a fragmented DB - maybe TSM 6 is much better), and doing many simultaneous restores (tons of small files) during disaster recovery takes forever when your DB disk setup is the bottleneck.

Short-stroking Intel MLC SSDs makes them last longer, specifically under random 
writes. Of course they still wear out faster than SLC, so a heavy database 
workload would probably be too much (though they are used for many server 
workloads). With SLC it's almost a non-issue.

Data loss is never acceptable, but for some organizations outages just might 
be. After all, many run TSM on a single server without any H/A clustering.

Anyway, I have to agree that if your DB is 200 GB and the budget allows six 
performance disks, go with disks. If the DB were 50 GB on the same budget 
(let's say $1500), I would not even think of using disks instead of SSD.
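
The short-stroking trade-off above can be put in rough numbers. A back-of-the-envelope endurance estimate (every figure below is an illustrative assumption about typical MLC behavior, not an Intel specification) shows why extra spare area stretches drive life:

```shell
# Hypothetical endurance estimate: usable capacity * P/E cycles divided by
# (daily writes * write amplification) approximates the lifetime in days.
awk 'BEGIN {
  usable_gb = 128    # X25-M short-stroked down from 160 GB (assumed)
  pe_cycles = 5000   # typical MLC program/erase cycles (assumed)
  daily_gb  = 200    # random DB writes per day (assumed)
  wa_full   = 3.0    # write amplification with little spare area (assumed)
  wa_op     = 1.0    # write amplification with heavy overprovisioning (assumed)
  printf "little spare area: %.0f days\n", usable_gb * pe_cycles / (daily_gb * wa_full)
  printf "overprovisioned:   %.0f days\n", usable_gb * pe_cycles / (daily_gb * wa_op)
}'
```

Only the ratio matters here: under this model, driving write amplification down with spare area makes the same NAND last roughly three times as long.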

--- Original message ---

From: Daniel Sparrman 
To: ADSM-L@VM.MARIST.EDU
Sent: 2010-11-23,  1:01

Skipping RAID/mirroring is probably the worst thing he could do. When the 
database hits that size, you can expect your organisation to want the TSM 
server up at all times. A disk outage would not really meet that requirement.

SSDs only have a longer lifespan under workloads with lots of reads and few 
writes. In a TSM environment that wouldn't be true, so it wouldn't give the 
SSDs a longer lifespan.

a) Secure your TSM server, it's your lifeline whenever everything else goes 
wrong

b) Go for performance

Never turn those 2 points the other way around.

Regards

Daniel Sparrman

-"ADSM: Dist Stor Manager"  wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Henrik Ahlgren 
Sent by: "ADSM: Dist Stor Manager" 
Date: 11/22/2010 23:27
Subject: Re: DB2: SSD vs more RAM

Yep, doing a 200 GB database with high-end, redundant SLC NAND is definitely 
not cheap (let alone the 2 TB Bill Colwell described in his post). Not that a 
bunch of 15K disks and the power to run them are free, either. Cost per IOPS 
for disk is actually terrible.

Just a thought (don't take it too seriously!): what if you were willing to take 
the risk and forget RAID/mirroring? After all, solid state is pretty reliable 
these days. Of course - in addition to the perfect DB backup strategy you need 
anyhow - put your transaction logs on a different disk: spinning disk is great 
for that (it's more or less sequential I/O). And maybe even use cheaper MLC 
NAND - if you "short stroke" (google for Intel SSD overprovisioning) a 160 GB 
X25-M down to 128 GB, you get three times the endurance, so it should last 
quite a long time even with a DB workload. Of course, like tapes, you have to 
treat NAND media as consumables and keep an eye on the S.M.A.R.T. media 
wearout indicators.

Yeah, too radical and risky for most. We'll just have to wait a couple more 
years to finally get rid of rotating rust for random I/O once and for all.

On Nov 22, 2010, at 7:13 PM, Pretorius, Louw  wrote:


Well, as I was specing a new TSM server I thought, why not try for the best 
performance possible? And although the SSD drives drive the server cost up by 
50%, it wasn't out of the ballpark, so I wanted to hear from the community 
what their ideas were.

As it stands I have a 100 GB DB, currently ~50% used, but according to IBM, TSM 
6.2 will require double the DB size, hence 200 GB. And since we are expecting 
40% data growth next year and will be implementing dedupe, I thought why not 
see how the price/performance goes at other sites.

As DB2 is a fully featured DB, I thought that the alternative would be to give 
it more RAM, as it's so much cheaper than SSDs - and also to find out how I 
could configure DB2 to use the extra RAM that I will be throwing at it in any case.

With the current feedback I will be sticking to 6 x SAS 15K and 24 GB RAM...

Please, if there are any other opinions, let's hear them - the more opinions, 
the more wisdom...

Regards
Louw Pretorius

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Henrik 
Ahlgren
Sent: 22 November 2010 11:25
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DB2: SSD vs more RAM

Or maybe he has a huge number of DB entries? If his options are either six SAS 
15K or eight SSDs (50 GB each), it means his DB is probably in the multi-hundred-
gigabyte range. If he just needs the IOPS for a smaller DB, then he would not 
need 8 SSDs to beat 6 platters; even one or two could be enough. (Just one 
Intel X25-E does 35K IOPS of random 4K reads.) I'm not sure how much doubling 
the RAM would help with operations such as expiration, DB backup etc. compared 
to a nice SSD setup.

I'm wondering why there is so little discussion here about using solid state 
devices for TSM databases. Some of you must be doing it, right?

On Nov 17, 2010, at 7:50 PM, Remco Post wrote:


SSD to me seems overkill if you already have 24 GB of RAM, unless you need 
superfast performance and are going to run a very busy TSM server with a huge 
amount of concurrent sessions.

LanFree very low performance

2010-11-22 Thread peppix
Hi all,
I'm trying to back up an Oracle DB (5 TB) over LAN-free, but the transfer speed 
is only 70 MB/s (over 5 drives)!! The DB is offline, so I use incremental backup.

The client configuration (v.5.5.2.2):

SErvername  X
   COMMMethodTCPip
   TCPPort   1500
   TCPServeraddress  XXX
   DISKBUFFSIZE   1023
   TCPWindowsize  1024
   TCPBUffsize   512
   passwordaccess generate
   nodename XXX
   RESOURCEUTILIZATION   10
   TCPNOdelay  YES
   TXNByteLimit 2097152
   LargeCommBuffer   NO
   CommRestartDuration 5
   CommRestartInterval  15
   schedmode   prompted
   managedservices   schedule webclient
   ERRORlogname   /usr/tivoli/tsm/client/ba/bin/dsmerror.log
   ERRORLOGRetention  7 D
   SCHEDlogname  /usr/tivoli/tsm/client/ba/bin/dsmsched.log
   SCHEDLOGRetention  7 D
   inclexcl/usr/tivoli/tsm/client/ba/bin/inclexcl.lst
   enablelanfreeyes
   LANFREECommmethod TCPIP
   LANFREETCPPort1500


Server configuration (v.5.5.4.0):
   COMMmethod TCPIP
   TCPPort    1500
   TCPWindowsize  1024
   TCPBufsize    32
   TCPNODELAY YES
   COMMmethod HTTP
   HTTPPort  1580
   COMMmethod SHaredmem
   SHMPort   1510
   IDLETimeout  2880
   LANGuage en_US
   DATEformat   2
   TIMEformat   1
   NUMberformat    5
   EXPInterval   0
   MIRRORREAD LOG   NORMAL
   MIRRORREAD DB NORMAL
   MIRRORWRITE LOG PARALLEL
   MIRRORWRITE DB   SEQUENTIAL
   VOLUMEHistory  /tsmdblog/config/volhistory.txt
   VOLUMEHistory  /usr/tivoli/tsm/server/bin/volhistory.txt
   DEVCONFig   /tsmdblog/config/devconfig.txt
   DEVCONFig   /usr/tivoli/tsm/server/bin/devconfig.txt
   BUFPoolsize  32768
   LOGPoolsize  4096
   TXNGroupmax   256
   MOVEBatchsize 1000
   MOVESizethresh1024
   tcpnodelay  YES
   DNSLOOKUP NO
   COMMTIMEOUT14400
   MAXSESSIONS 80
   SANDISCOVERY    ON

Considerations:
- 12 drives installed in the library
- the SAN and TSM work well (with the same TSM server on the same SAN I back up 
a lot of SAP via TDP at 300 MB/s)
- all servers backed up with incremental have the same problem (same 
configuration but different SAN switch and different HBA).

Can you help me please?
thanks,
best regards

+--
|This was sent by barberi.giuse...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


Ang: Re: DB2: SSD vs more RAM

2010-11-22 Thread Daniel Sparrman
Skipping RAID/mirroring is probably the worst thing he could do. When the 
database hits that size, you can expect your organisation to want the TSM 
server up at all times. A disk outage would not really meet that requirement.

SSDs only have a longer lifespan under workloads with lots of reads and few 
writes. In a TSM environment that wouldn't be true, so it wouldn't give the 
SSDs a longer lifespan.

a) Secure your TSM server, it's your lifeline whenever everything else goes 
wrong

b) Go for performance

Never turn those 2 points the other way around.

Regards

Daniel Sparrman

-"ADSM: Dist Stor Manager"  wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Henrik Ahlgren 
Sent by: "ADSM: Dist Stor Manager" 
Date: 11/22/2010 23:27
Subject: Re: DB2: SSD vs more RAM

Yep, doing a 200 GB database with high-end, redundant SLC NAND is definitely 
not cheap (let alone the 2 TB Bill Colwell described in his post). Not that a 
bunch of 15K disks and the power to run them are free, either. Cost per IOPS 
for disk is actually terrible.

Just a thought (don't take it too seriously!): what if you were willing to take 
the risk and forget RAID/mirroring? After all, solid state is pretty reliable 
these days. Of course - in addition to the perfect DB backup strategy you need 
anyhow - put your transaction logs on a different disk: spinning disk is great 
for that (it's more or less sequential I/O). And maybe even use cheaper MLC 
NAND - if you "short stroke" (google for Intel SSD overprovisioning) a 160 GB 
X25-M down to 128 GB, you get three times the endurance, so it should last 
quite a long time even with a DB workload. Of course, like tapes, you have to 
treat NAND media as consumables and keep an eye on the S.M.A.R.T. media 
wearout indicators.

Yeah, too radical and risky for most. We'll just have to wait a couple more 
years to finally get rid of rotating rust for random I/O once and for all.

On Nov 22, 2010, at 7:13 PM, Pretorius, Louw  wrote:

> Well as I was specing a new TSM server i thought, why not try for the best 
> performance possible and although the SSD drives drives the server costs up 
> by 50% it wasn't out of the ballpark, therefore I wanted to hear from the 
> community what their ideas were.
> 
> As it stands I have a 100GB DB currently ~50% used but according to IBM TSM 
> 6.2 will require double DB size hence 200GB and since we are expecting a 40% 
> data-growth next year and will be implimenting Dedupe I thought why not see 
> how the price/performance goes on other sites. 
> 
> As db2 is a fully featured DB I thought that the alternative would be to give 
> it more RAM, as it's so much cheaper than SSD's.  And also how I could 
> configure my DB2 to use the extra RAM that I will be throwing at it in any 
> case.
> 
> With the current feedback I will be sticking to 6 x SAS 15K and 24GB RAM...
> 
> Please if there's any other opinions let's hear it, the more opinions the 
> more wisdom...
> 
> Regards
> Louw Pretorius
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
> Henrik Ahlgren
> Sent: 22 November 2010 11:25
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] DB2: SSD vs more RAM
> 
> Or maybe he has a huge amount of DB entries?  If his options are either six 
> SAS 15K or eight SSDs (50GB each), it means his DB is propably in the 
> multi-hundred gigabyte range. If he just needs the IOPS for smaller DB, then 
> he would not need 8 SSDs to beat 6 platters, even one or two could be enough. 
> (Just one Intel X25E does 35K IOPS random 4K read.) I'm not sure how much 
> doubling the RAM would help with operations such as expiration, DB backup 
> etc. compared to nice SSD setup.
> 
> I'm wondering why so little discussion here on using solid state devices for 
> TSM databases? Some of you must be doing it, right?
> 
> On Nov 17, 2010, at 7:50 PM, Remco Post wrote:
> 
>> SSD to me seems overkill if you already have 24 GB of RAM, unless you need 
>> superfast performance and are going to run a very busy TSM server with a 
>> huge amount of concurrent sessions.
>> 
>> --
>> 
>> Gr., Remco
>> 
>> On 17 nov. 2010, at 12:16, "Pretorius, Louw " 
>>  wrote:
>> 
>>> Hi all,
>>> 
>>> I am currently in the process of setting up specifications for our new 
>>> TSM6.2 server.  
>>> 
>>> I started by adding 8 x SSD 50GB disks to hold OS and DB, but because of 
>>> the high costs was wondering if it's possible to rather buy more RAM and 
>>> increase the DB2 cache to speed up the database.
>>> 
>>> Currently I have RAM set at 24GB but its way cheaper doubling the RAM 
>>> than to buy 8 x SSD's Currently I have 8 x SSD vs 6 x SAS 15K
> 
> 
> --
> Henrik Ahlgren
> Seestieto
> +358-50-3866200


-- 
Henrik Ahlgren
Seestieto
+358-50-3866200

Re: Seemingly stupid dsmadmc behavior

2010-11-22 Thread Remco Post
You are right, there are some backward restrictions in TSM... Most notably the 
users stanza in dsm.sys on Unix, and then every user being able to set DSM_DIR 
and create his own dsm.sys the way he likes it... That is, by the way, the 
second way to avoid having anything to do with the log files as defined in the 
global dsm.sys.
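
That second route can be sketched like this (the server name, address, and paths are hypothetical; the option names are the standard dsm.sys ones):

```shell
# Hypothetical per-user client setup: point DSM_DIR at a private directory
# holding the user's own dsm.sys, so the log files land somewhere writable.
mkdir -p "$HOME/tsm"
cat > "$HOME/tsm/dsm.sys" <<'EOF'
SErvername  myserver
   COMMMethod         TCPip
   TCPServeraddress   tsm.example.com
   ERRORLOGName       ~/tsm/dsmerror.log
   SCHEDLOGName       ~/tsm/dsmsched.log
EOF
export DSM_DIR="$HOME/tsm"   # the client now reads this dsm.sys instead
# dsmadmc started from this shell picks up the private options file
```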


On 22 nov 2010, at 22:22, Steve Harris wrote:

> Hi Remco
> 
> Thanks for your suggestion, I hadn't thought of that and it does work
> nicely
> 
> However, the fact that it does work merely underlines my point that the
> original restriction on overriding the log location with the DSM_LOG
> environment variable is essentially pointless and just makes things
> unnecessarily complex for no good reason.
> 
> Regards
> 
> Steve.
> 
> 
> On Wed, 17 Nov 2010 23:24:23 +0100, Remco Post  wrote:
>> How about: alias dsmadmc=dsmadmc -errorlogname=~/dsmerror.log ?
>> 
>> Wouldn't that do the trick?
>> 
>> --
>> 
>> Gr., Remco
>> 
>> On 17 nov. 2010, at 22:46, Steve Harris  wrote:
>> 
>>> Time for a bit of a rant.
>>> 
>>> 
>>> I have a new 5.5 server on Solaris.  For reasons that I understand even
>>> if
>>> it does make my life more difficult, I am not permitted to have root
>>> access
>>> on this box, and the Solaris guys have determined that they want the TSM
>>> client log files on /var/adm/log/tsm
>>> 
>>> So dsm.sys has
>>> 
>>>  errorlogname   /var/adm/log/tsm/dsmerror.log
>>>  errorlogretention  14,d
>>> 
>>> This is fine for the root user, but when I log in to use dsmamdc  I get
>>> 
>>> ANS2036W  Pruning functions cannot open one of the Tivoli Storage Manager
>>> prune files: /var/adm/log/tsm/dsmprune.log. errno = 13, Permission denied
>>> 
>>> No big deal, I know this is not a problem and happily ignore it.
>>> 
>>> However, I have some scripts that my operations people are going to use,
>>> and these invoke dsmadmc multiple times per script to do whatever is
>>> needed.  Each time the ANS2036W message appears.  yes I'm aware that
>>> there
>>> is a work around for this change errorlogretention to S, run dsmadmc to
>>> create the dsmerror.pru file and then change permissions.  Someone with
>>> root access needs to do that.
>>> 
>>> Much simpler would be to just allow the DSM_LOG environment variable to
>>> override the dsm.sys specification, but it cannot.
>>> 
>>> There is NO reason not to allow this.
>>> 
>>> Given that I can set DSM_DIR and DSM_CFG to point to any arbitrary file
>>> and also symlink to the message catalog file  I can override the system
>>> options files however I please.  Its just damned annoying to have to.
>>> 
>>> 
>>> TSM being complex, difficult and obscure keeps me in work, but its not
>>> doing anything for this wonderful product in the marketplace.
>>> 
>>> Steven Harris
>>> TSM Admin,
>>> Paraparaumu, NZ
>>> 
>>> 
>>> 
>>> 

-- 
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: DB2: SSD vs more RAM

2010-11-22 Thread Henrik Ahlgren
Yep, doing a 200 GB database with high-end, redundant SLC NAND is definitely 
not cheap (let alone the 2 TB Bill Colwell described in his post). Not that a 
bunch of 15K disks and the power to run them are free, either. Cost per IOPS 
for disk is actually terrible.

Just a thought (don't take it too seriously!): what if you were willing to take 
the risk and forget RAID/mirroring? After all, solid state is pretty reliable 
these days. Of course - in addition to the perfect DB backup strategy you need 
anyhow - put your transaction logs on a different disk: spinning disk is great 
for that (it's more or less sequential I/O). And maybe even use cheaper MLC 
NAND - if you "short stroke" (google for Intel SSD overprovisioning) a 160 GB 
X25-M down to 128 GB, you get three times the endurance, so it should last 
quite a long time even with a DB workload. Of course, like tapes, you have to 
treat NAND media as consumables and keep an eye on the S.M.A.R.T. media 
wearout indicators.

Yeah, too radical and risky for most. We'll just have to wait a couple more 
years to finally get rid of rotating rust for random I/O once and for all.

On Nov 22, 2010, at 7:13 PM, Pretorius, Louw  wrote:

> Well as I was specing a new TSM server i thought, why not try for the best 
> performance possible and although the SSD drives drives the server costs up 
> by 50% it wasn't out of the ballpark, therefore I wanted to hear from the 
> community what their ideas were.
> 
> As it stands I have a 100GB DB currently ~50% used but according to IBM TSM 
> 6.2 will require double DB size hence 200GB and since we are expecting a 40% 
> data-growth next year and will be implimenting Dedupe I thought why not see 
> how the price/performance goes on other sites. 
> 
> As db2 is a fully featured DB I thought that the alternative would be to give 
> it more RAM, as it's so much cheaper than SSD's.  And also how I could 
> configure my DB2 to use the extra RAM that I will be throwing at it in any 
> case.
> 
> With the current feedback I will be sticking to 6 x SAS 15K and 24GB RAM...
> 
> Please if there's any other opinions let's hear it, the more opinions the 
> more wisdom...
> 
> Regards
> Louw Pretorius
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
> Henrik Ahlgren
> Sent: 22 November 2010 11:25
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] DB2: SSD vs more RAM
> 
> Or maybe he has a huge amount of DB entries?  If his options are either six 
> SAS 15K or eight SSDs (50GB each), it means his DB is propably in the 
> multi-hundred gigabyte range. If he just needs the IOPS for smaller DB, then 
> he would not need 8 SSDs to beat 6 platters, even one or two could be enough. 
> (Just one Intel X25E does 35K IOPS random 4K read.) I'm not sure how much 
> doubling the RAM would help with operations such as expiration, DB backup 
> etc. compared to nice SSD setup.
> 
> I'm wondering why so little discussion here on using solid state devices for 
> TSM databases? Some of you must be doing it, right?
> 
> On Nov 17, 2010, at 7:50 PM, Remco Post wrote:
> 
>> SSD to me seems overkill if you already have 24 GB of RAM, unless you need 
>> superfast performance and are going to run a very busy TSM server with a 
>> huge amount of concurrent sessions.
>> 
>> --
>> 
>> Gr., Remco
>> 
>> On 17 nov. 2010, at 12:16, "Pretorius, Louw " 
>>  wrote:
>> 
>>> Hi all,
>>> 
>>> I am currently in the process of setting up specifications for our new 
>>> TSM6.2 server.  
>>> 
>>> I started by adding 8 x SSD 50GB disks to hold OS and DB, but because of 
>>> the high costs was wondering if it's possible to rather buy more RAM and 
>>> increase the DB2 cache to speed up the database.
>>> 
>>> Currently I have RAM set at 24GB but its way cheaper doubling the RAM 
>>> than to buy 8 x SSD's Currently I have 8 x SSD vs 6 x SAS 15K
> 
> 
> --
> Henrik Ahlgren
> Seestieto
> +358-50-3866200


-- 
Henrik Ahlgren
Seestieto
+358-50-3866200


Ang: Re: Seemingly stupid dsmadmc behavior

2010-11-22 Thread Daniel Sparrman
Or he could just ask one of the root admins to do a chown user:group 
/var/adm/log/tsm/dsmprune.log (or actually, on all the files under 
/var/adm/log/tsm/), where user is his current user and group is the primary 
group of that user. It doesn't look like the error log is the issue.



-"ADSM: Dist Stor Manager"  wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Steve Harris 
Sent by: "ADSM: Dist Stor Manager" 
Date: 11/22/2010 22:22
Subject: Re: Seemingly stupid dsmadmc behavior

Hi Remco

Thanks for your suggestion, I hadn't thought of that and it does work
nicely

However, the fact that it does work merely underlines my point that the
original restriction on overriding the log location with the DSM_LOG
environment variable is essentially pointless and just makes things
unnecessarily complex for no good reason.

Regards

Steve.
 

On Wed, 17 Nov 2010 23:24:23 +0100, Remco Post  wrote:
> How about: alias dsmadmc=dsmadmc -errorlogname=~/dsmerror.log ?
> 
> Wouldn't that do the trick?
> 
> --
> 
> Gr., Remco
> 
> On 17 nov. 2010, at 22:46, Steve Harris  wrote:
> 
>> Time for a bit of a rant.
>>
>> 
>> I have a new 5.5 server on Solaris.  For reasons that I understand even
>> if
>> it does make my life more difficult, I am not permitted to have root
>> access
>> on this box, and the Solaris guys have determined that they want the TSM
>> client log files on /var/adm/log/tsm
>>
>> So dsm.sys has
>>
>>   errorlogname   /var/adm/log/tsm/dsmerror.log
>>   errorlogretention  14,d
>>
>> This is fine for the root user, but when I log in to use dsmamdc  I get
>>
>> ANS2036W  Pruning functions cannot open one of the Tivoli Storage Manager
>> prune files: /var/adm/log/tsm/dsmprune.log. errno = 13, Permission denied
>>
>> No big deal, I know this is not a problem and happily ignore it.
>>
>> However, I have some scripts that my operations people are going to use,
>> and these invoke dsmadmc multiple times per script to do whatever is
>> needed.  Each time the ANS2036W message appears.  yes I'm aware that
>> there
>> is a work around for this change errorlogretention to S, run dsmadmc to
>> create the dsmerror.pru file and then change permissions.  Someone with
>> root access needs to do that.
>>
>> Much simpler would be to just allow the DSM_LOG environment variable to
>> override the dsm.sys specification, but it cannot.
>>
>> There is NO reason not to allow this.
>>
>> Given that I can set DSM_DIR and DSM_CFG to point to any arbitrary file
>> and also symlink to the message catalog file  I can override the system
>> options files however I please.  Its just damned annoying to have to.
>> 
>>
>> TSM being complex, difficult and obscure keeps me in work, but its not
>> doing anything for this wonderful product in the marketplace.
>>
>> Steven Harris
>> TSM Admin,
>> Paraparaumu, NZ
>>
>>
>>
>>

Re: Seemingly stupid dsmadmc behavior

2010-11-22 Thread Steve Harris
Hi Remco

Thanks for your suggestion, I hadn't thought of that, and it does work nicely.

However, the fact that it does work merely underlines my point that the
original restriction on overriding the log location with the DSM_LOG
environment variable is essentially pointless and just makes things
unnecessarily complex for no good reason.

Regards

Steve.
 

On Wed, 17 Nov 2010 23:24:23 +0100, Remco Post  wrote:
> How about: alias dsmadmc=dsmadmc -errorlogname=~/dsmerror.log ?
> 
> Wouldn't that do the trick?
> 
> --
> 
> Gr., Remco
> 
> On 17 nov. 2010, at 22:46, Steve Harris  wrote:
> 
>> Time for a bit of a rant.
>>
>> 
>> I have a new 5.5 server on Solaris.  For reasons that I understand even
>> if
>> it does make my life more difficult, I am not permitted to have root
>> access
>> on this box, and the Solaris guys have determined that they want the TSM
>> client log files on /var/adm/log/tsm
>>
>> So dsm.sys has
>>
>>   errorlogname   /var/adm/log/tsm/dsmerror.log
>>   errorlogretention  14,d
>>
>> This is fine for the root user, but when I log in to use dsmamdc  I get
>>
>> ANS2036W  Pruning functions cannot open one of the Tivoli Storage Manager
>> prune files: /var/adm/log/tsm/dsmprune.log. errno = 13, Permission denied
>>
>> No big deal, I know this is not a problem and happily ignore it.
>>
>> However, I have some scripts that my operations people are going to use,
>> and these invoke dsmadmc multiple times per script to do whatever is
>> needed.  Each time the ANS2036W message appears.  yes I'm aware that
>> there
>> is a work around for this change errorlogretention to S, run dsmadmc to
>> create the dsmerror.pru file and then change permissions.  Someone with
>> root access needs to do that.
>>
>> Much simpler would be to just allow the DSM_LOG environment variable to
>> override the dsm.sys specification, but it cannot.
>>
>> There is NO reason not to allow this.
>>
>> Given that I can set DSM_DIR and DSM_CFG to point to any arbitrary file
>> and also symlink to the message catalog file  I can override the system
>> options files however I please.  Its just damned annoying to have to.
>> 
>>
>> TSM being complex, difficult and obscure keeps me in work, but its not
>> doing anything for this wonderful product in the marketplace.
>>
>> Steven Harris
>> TSM Admin,
>> Paraparaumu, NZ
>>
>>
>>
>>


Re: DB2: SSD vs more RAM

2010-11-22 Thread Colwell, William F.
Hi Henrik,

I have 2 TSM (6.1.4.2) instances on one server. One instance's DB size (the 
size of the full DB backup) is 558 GB; the other's is 1,448 GB.

The server (an IBM x3850 M2 running RHEL 5.5) started with 16 GB of RAM; I 
bumped it to 40 GB and then maxed it out at 128 GB. I can't say I did a 
thorough performance analysis, because it was such a cheap thing to do. 
When there are 2 or more instances on a server, you need to use the 
DBMEMPERCENT parameter in dsmserv.opt to keep the instances from fighting 
over the memory and to leave some for the OS. I have each set to 45%.
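
For reference, the corresponding dsmserv.opt fragment is just one line per instance (45% as described above; the asterisk line is a comment):

```
* dsmserv.opt - limit this instance's DB2 memory to 45% of system RAM
DBMEMPERCENT 45
```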

I started out with both databases on a NetApp, sharing one aggregate. The 
aggregate was 27 x 300 GB 15K SAS disks. I wasn't satisfied with the 
performance, and usage was up to 70%, so I bought a Nexsan "SASbeast" unit 
with 2 RAID 10 arrays: 12 x 600 GB 15K disks for the smaller DB and 16 disks 
for the larger DB. I just finished moving the databases onto the arrays. The 
speed of the DB backups increased dramatically.
Here is an SQL query showing the last 6 DB backups -

Activity       Start Time   End Time     Elapsed (hh:mm:ss)  Gigs
-------------  -----------  -----------  ------------------  -------
FULL_DBBACKUP  10-24-13.00  10-24-23.10  10:10:15            1390.90
FULL_DBBACKUP  10-31-15.52  11-01-00.46  08:54:39            1399.24
FULL_DBBACKUP  11-07-13.00  11-07-23.12  10:11:46            1432.42
FULL_DBBACKUP  11-14-13.00  11-14-21.22  08:21:49            1436.85
FULL_DBBACKUP  11-20-07.04  11-20-13.46  06:42:09            1442.77
FULL_DBBACKUP  11-21-15.00  11-21-17.35  02:35:54            1448.55

Line 4 is the last 'normal' backup from the NetApp (other things were going on 
during the backup).
Line 5 is the 'special' backup just before the restore (nothing else going on).
Line 6 is the first 'normal' backup from the RAID 10 array. Much faster.
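
The query itself wasn't included; a select of roughly this shape against the server's SUMMARY table (run through dsmadmc; the column names are from the TSM 6 summary table, so verify them on your own server) would produce such a listing:

```sql
select activity, start_time, end_time, bytes/1073741824 as gigs
  from summary
 where activity='FULL_DBBACKUP'
 order by start_time
```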

Since the topic is SSD vs. RAM: I can say I never considered SSD. I expected 
it would be too expensive for DBs this size. If you are planning on doing 
dedup, expect the DB to grow very big very fast.

Thanks,

Bill Colwell
Draper Lab

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Henrik Ahlgren
Sent: Monday, November 22, 2010 4:25 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: DB2: SSD vs more RAM

Or maybe he has a huge number of DB entries? If his options are either six 
SAS 15K or eight SSDs (50 GB each), it means his DB is probably in the 
multi-hundred-gigabyte range. If he just needs the IOPS for a smaller DB, 
then he would not need 8 SSDs to beat 6 platters; even one or two could be 
enough. (Just one Intel X25-E does 35K IOPS of random 4K reads.) I'm not 
sure how much doubling the RAM would help with operations such as expiration, 
DB backup etc. compared to a nice SSD setup.

I'm wondering why there is so little discussion here about using solid state 
devices for TSM databases. Some of you must be doing it, right?

On Nov 17, 2010, at 7:50 PM, Remco Post wrote:

> SSD to me seems overkill if you already have 24 GB of RAM, unless you
> need superfast performance and are going to run a very busy TSM server
> with a huge amount of concurrent sessions.
> 
> -- 
> 
> Gr., Remco
> 
> On 17 nov. 2010, at 12:16, "Pretorius, Louw "  wrote:
> 
>> Hi all,
>> 
>> I am currently in the process of setting up specifications for our
>> new TSM6.2 server.
>> 
>> I started by adding 8 x SSD 50GB disks to hold OS and DB, but because
>> of the high costs was wondering if it's possible to rather buy more RAM
>> and increase the DB2 cache to speed up the database.
>> 
>> Currently I have RAM set at 24GB but its way cheaper doubling the RAM
>> than to buy 8 x SSD's
>> Currently I have 8 x SSD vs 6 x SAS 15K


-- 
Henrik Ahlgren
Seestieto
+358-50-3866200


TSM Client support for VxFS on SLES 11.

2010-11-22 Thread robert_clark

The text describing client support for Linux x86_64 appears to have dropped the 
proviso that VxFS is not supported on SLES 11.

Has anyone heard anything about whether this implies support of VxFS on SLES 11?

http://www-01.ibm.com/support/docview.wss?rs=663&context=SSGSG7&uid=swg21052223&loc=en_US&cs=utf-8&lang=en


Re: Extractdb cannot open en_US or AMENG

2010-11-22 Thread Keith Arbogast
Correction: the INSERTDB step took 10 hours, 8 minutes.  The EXTRACTDB step 
took 2 hours 8 minutes.

Liberal egg on face,

Keith


Re: DB2: SSD vs more RAM

2010-11-22 Thread Pretorius, Louw
Hi Henrik,

Well, as I was specing a new TSM server I thought, why not try for the best 
performance possible? And although the SSD drives drive the server cost up by 
50%, it wasn't out of the ballpark, so I wanted to hear from the community 
what their ideas were.

As it stands I have a 100 GB DB, currently ~50% used, but according to IBM, TSM 
6.2 will require double the DB size, hence 200 GB. And since we are expecting 
40% data growth next year and will be implementing dedupe, I thought why not 
see how the price/performance goes at other sites.

As DB2 is a fully featured DB, I thought that the alternative would be to give 
it more RAM, as it's so much cheaper than SSDs - and also to find out how I 
could configure DB2 to use the extra RAM that I will be throwing at it in any case.

With the current feedback I will be sticking to 6 x SAS 15K and 24 GB RAM...

Please, if there are any other opinions, let's hear them - the more opinions, 
the more wisdom...

Regards
Louw Pretorius

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Henrik 
Ahlgren
Sent: 22 November 2010 11:25
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DB2: SSD vs more RAM

Or maybe he has a huge number of DB entries? If his options are either six SAS 
15K or eight SSDs (50 GB each), it means his DB is probably in the multi-hundred-
gigabyte range. If he just needs the IOPS for a smaller DB, then he would not 
need 8 SSDs to beat 6 platters; even one or two could be enough. (Just one 
Intel X25-E does 35K IOPS of random 4K reads.) I'm not sure how much doubling 
the RAM would help with operations such as expiration, DB backup etc. compared 
to a nice SSD setup.

I'm wondering why there is so little discussion here about using solid state 
devices for TSM databases. Some of you must be doing it, right?

On Nov 17, 2010, at 7:50 PM, Remco Post wrote:

> SSD to me seems overkill if you already have 24 GB of RAM, unless you need 
> superfast performance and are going to run a very busy TSM server with a huge 
> amount of concurrent sessions.
> 
> --
> 
> Gr., Remco
> 
> On 17 nov. 2010, at 12:16, "Pretorius, Louw " 
>  wrote:
> 
>> Hi all,
>> 
>> I am currently in the process of setting up specifications for our new 
>> TSM6.2 server.  
>> 
>> I started by adding 8 x SSD 50GB disks to hold OS and DB, but because of the 
>> high costs was wondering if it's possible to rather buy more RAM and 
>> increase the DB2 cache to speed up the database.
>> 
>> Currently I have RAM set at 24GB but its way cheaper doubling the RAM 
>> than to buy 8 x SSD's Currently I have 8 x SSD vs 6 x SAS 15K


--
Henrik Ahlgren
Seestieto
+358-50-3866200


TSM for DB

2010-11-22 Thread David E Ehresman
Is there a TSM for DB v6 client? The latest I can find is TSM for DB v5.5.

David


Re: DB2: SSD vs more RAM

2010-11-22 Thread Henrik Ahlgren
Or maybe he has a huge number of DB entries? If his options are either six SAS 
15K or eight SSDs (50 GB each), it means his DB is probably in the multi-hundred-
gigabyte range. If he just needs the IOPS for a smaller DB, then he would not 
need 8 SSDs to beat 6 platters; even one or two could be enough. (Just one 
Intel X25-E does 35K IOPS of random 4K reads.) I'm not sure how much doubling 
the RAM would help with operations such as expiration, DB backup etc. compared 
to a nice SSD setup.

I'm wondering why there is so little discussion here about using solid state 
devices for TSM databases. Some of you must be doing it, right?

On Nov 17, 2010, at 7:50 PM, Remco Post wrote:

> SSD to me seems overkill if you already have 24 GB of RAM, unless you need 
> superfast performance and are going to run a very busy TSM server with a huge 
> amount of concurrent sessions.
> 
> -- 
> 
> Gr., Remco
> 
> On 17 nov. 2010, at 12:16, "Pretorius, Louw " 
>  wrote:
> 
>> Hi all,
>> 
>> I am currently in the process of setting up specifications for our new 
>> TSM6.2 server.  
>> 
>> I started by adding 8 x SSD 50GB disks to hold OS and DB, but because of the 
>> high costs was wondering if it's possible to rather buy more RAM and 
>> increase the DB2 cache to speed up the database.
>> 
>> Currently I have RAM set at 24GB but its way cheaper doubling the RAM than 
>> to buy 8 x SSD's
>> Currently I have 8 x SSD vs 6 x SAS 15K 


-- 
Henrik Ahlgren
Seestieto
+358-50-3866200