SV: Exchange 2010 and Error code ACN5073E and ACN6068I

2013-01-16 Thread Christian Svensson
Hi all,
I have a strange situation I am trying to understand.
We have two Microsoft Exchange Server 2010 DAG clusters, and we see the same 
issue on both of them.
We are running TSM Client 6.4.0.0 and TSM for Exchange 6.4.0.0 with TSM 
Server 6.3.3.0, because we want to take advantage of the new DAG Node functionality.

When we try to back up our DAG databases (active or passive), we hit a strange 
issue.
In the new TDP MMC GUI (FCM), TSM reports error code ACN5073E None of the 
storage groups entered are in a state to be backed up.
But tdpexc.log reports error code ACN6068I DAGDB is being backed up by a 
different server - skipping.

When I check in TSM, nobody is backing up any databases at the moment, so TSM 
is completely quiet.
I have gone into each DAG server and verified that neither TSM/FCM nor any 
other product is active, and I have also run DISKSHADOW to verify that VSS is 
not holding any snapshots.

I have also tried dismounting the databases and remounting them, with no success.

I have also verified that all the writers reported by VSSADMIN LIST WRITERS are 
stable, as explained in the following MS link.
http://msdn.microsoft.com/en-us/library/bb204080%28EXCHG.140%29.aspx
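For reference, those two checks can be run from an elevated command prompt, roughly like this (DISKSHADOW's `list shadows all` is used here only to list any existing shadow copies):

```
REM Check that all VSS writers are present and in a stable state:
vssadmin list writers

REM Inside the DISKSHADOW tool, list any shadow copies VSS is still holding:
diskshadow
list shadows all
exit
```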

When I run a trace on the TDPEXCC command, the output matches the following IBM 
link, except that our server name is in upper case instead of lower case.
http://www-01.ibm.com/support/docview.wss?uid=swg1IC86345

But when I look in the trace file, it also complains about a registry key:
registry.cpp( 193): error opening HKEY_LOCAL_MACHINE or 
HKEY_CLASSES_ROOT with a pathkey of 
SYSTEM\CurrentControlSet\Services\MSExchangeKMS.
pssrvuts.cpp( 473): Enter CIfcService::isServiceRunning()
pssrvuts.cpp( 431): Enter CIfcService::initializeData()
pssrvuts.cpp( 457): Exit CIfcService::initializeData()
pssrvuts.cpp( 218): Enter CIfcService::openService()
pssrvuts.cpp( 150): Enter CIfcService::openScm()
pssrvuts.cpp( 173): Exit CIfcService::openScm()
pssrvuts.cpp( 244): Exit CIfcService::openService()
..\..\..\..\common\nls\amsgrtrv.cpp(3332): Searching for message number: 5304
..\..\..\..\common\nls\amsgrtrv.cpp(1651): ReadIndex: indexOffset = 10670
..\..\..\..\common\nls\amsgrtrv.cpp(1677): ReadIndex: msgIndex = 1390
..\..\..\..\common\nls\amsgrtrv.cpp(4931): ReadMsg: recOffset = 88896 (15B40)
..\..\..\..\common\nls\amsgrtrv.cpp(4973): ReadMsg: Msg hdr = 3930 : 3039
..\..\..\..\common\nls\amsgrtrv.cpp(4989): ReadMsg: Severity = 6, Length = 57, 
resplen= 0
..\..\..\..\common\nls\amsgrtrv.cpp(5038): ReadMsg: prefixLen = 9
..\..\..\..\common\nls\amsgrtrv.cpp(3390): Deleting message: 153 from the cache.
..\..\..\..\common\nls\amsgrtrv.cpp(3398): Adding message: 5304 to the cache.
..\..\..\..\common\nls\amsgrtrv.cpp(2718): return from nlOrderInsert (char), 
msgLen 64:
agtmem.cpp  ( 398): AGENT DSMEM(-) Addr 1DE93170 File 
..\..\..\common\winnt\pssrvuts.cpp Line 496
pssrvuts.cpp( 187): Enter CIfcService::closeService()
pssrvuts.cpp( 201): Exit CIfcService::closeService()
pssrvuts.cpp( 538): Exit CIfcService::isServiceRunning()
psexcut2.cpp(1538): scm.isServiceRunning() returned message: ACN5304E 
Unable to open service to determine if running or not.

registry.cpp( 155): Enter getSzValue()
registry.cpp( 193): error opening HKEY_LOCAL_MACHINE or 
HKEY_CLASSES_ROOT with a pathkey of 
SYSTEM\CurrentControlSet\Services\MSExchangeSRS.
pssrvuts.cpp( 473): Enter CIfcService::isServiceRunning()
pssrvuts.cpp( 431): Enter CIfcService::initializeData()
pssrvuts.cpp( 457): Exit CIfcService::initializeData()
pssrvuts.cpp( 218): Enter CIfcService::openService()
pssrvuts.cpp( 150): Enter CIfcService::openScm()
pssrvuts.cpp( 173): Exit CIfcService::openScm()
pssrvuts.cpp( 244): Exit CIfcService::openService()
..\..\..\..\common\nls\amsgrtrv.cpp(3332): Searching for message number: 5304
..\..\..\..\common\nls\amsgrtrv.cpp(3347): Found message: 5304 in cache.
..\..\..\..\common\nls\amsgrtrv.cpp(2718): return from nlOrderInsert (char), 
msgLen 64:
agtmem.cpp  ( 398): AGENT DSMEM(-) Addr 1DE93170 File 
..\..\..\common\winnt\pssrvuts.cpp Line 496
pssrvuts.cpp( 187): Enter CIfcService::closeService()
pssrvuts.cpp( 201): Exit CIfcService::closeService()
pssrvuts.cpp( 538): Exit CIfcService::isServiceRunning()
psexcut2.cpp(1574): scm.isServiceRunning() returned message: ACN5304E 
Unable to open service to determine if running or not

Any suggestions how to continue or ideas why I can't backup my Exchange DAG 
databases anymore?

Thanks, and see you at Pulse 2013.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se
Säkra återläsningar...


Re: SV: Exchange 2010 and Error code ACN5073E and ACN6068I

2013-01-16 Thread Del Hoobler
Hi Christian,

One thing to check is the Exchange database BackupInProgress flag.
Data Protection for Exchange utilizes this flag to know whether
a backup is already in progress. I know you said there wasn't,
but if the Exchange Server thinks there is, it won't allow
another to start.

To check this, run this cmdlet:

Get-MailboxDatabase -Status

check the BackupInProgress flag.
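In the Exchange Management Shell, the output can be narrowed to just that flag, for example (a sketch):

```powershell
Get-MailboxDatabase -Status | Select-Object Name,BackupInProgress
```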

BTW... the trace you included is normal and not helpful
for this problem. You would need to include the part of the trace
that runs the Get-MailboxDatabase cmdlet.


Thanks,

Del



ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 01/16/2013
03:06:33 AM:

 From: Christian Svensson christian.svens...@cristie.se
 To: ADSM-L@vm.marist.edu,
 Date: 01/16/2013 03:15 AM
 Subject: SV: Exchange 2010 and Error code ACN5073E and ACN6068I
 Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu


Re: Odd server activity attached to no session or process, and deleting filespaces from very large nodes...

2013-01-16 Thread Allen S. Rout
On 01/15/2013 05:56 PM, Alex Paschal wrote:
 Are those 40mil spread across several filespaces?  I've seen situations
 on TSM-classic (i.e. pre-6) where doing smaller workloads, like deleting
 one filespace at a time, worked better than deleting the whole node.
 Would that be practical in this case?

Thanks, I experimented after your suggestion: the number of objects deleted wasn't
particularly different doing the whole node vs. one FS at a time.

I've addressed the prompt issue; no more thrashing on a quiesced
instance.  It'll be interesting to see if it happens again.  At least
I'll be sensitized; I don't really have a clear idea how long that has
been getting in the way of normal operations.

- Allen S. Rout


Re: TSM 1st full backup of remote low-bandwidth nodes

2013-01-16 Thread Bent Christensen
Andy,

I do not totally agree with you here.

The main issue for us is to get all 107 remote sites converted to TSM 
reasonably fast, to save maintenance and service fees on the existing backup 
solutions. With the laptop server solution we predict the turn-around time for 
each laptop to be around 2 weeks, which includes sending the laptop to the 
remote site, backing up all data, sending the laptop back to the backup center, 
and exporting the node. With, say, 10 laptops this will take at least 6 months. 
We could buy more laptops, but we cannot charge the expenses to the remote 
sites, and we are stuck with the laptops afterwards ...

Disaster restores are a very different ball game. Costs will not be a big issue, 
and we have approved plans for recovering any remote site within 48 hours, 
which for a few sites includes chartering an aircraft to transport hardware and 
a technician.

 - Bent



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Huebner, Andy
Sent: Tuesday, January 15, 2013 5:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM 1st full backup of remote low-bandwidth nodes

You should use the same method to seed the first backup as you plan to use to 
restore the data.
When you look at it that way a laptop and big external drive is not that 
expensive.


Andy Huebner

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bent 
Christensen
Sent: Tuesday, January 15, 2013 9:37 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] TSM 1st full backup of remote low-bandwidth nodes

Hi,

We are starting up a backup consolidation project where we are going to 
implement TSM 6.3 clients in all our 100+ remote sites and having them back up 
over the WAN to a few well-placed TSM backup datacenters.

We have been through similar projects with selected sites a few times before, 
but this time the sites are larger and the bandwidth/latency worse, so there is 
little room for configuration mishaps ;-)

One question always pops up early in the process: How are we going to do the 
first full TSM backup of the remote site nodes? 
So far we have tried:

 - copy the data from the new node (including all attributes and permissions) to 
USB disks, mount those on a TSM server (as drive X:) and do a 'dsmc incr 
\\newnode\z$ -snapshotroot=X:\newnode_zdrive -asnodename=newnode'. This works 
OK and only requires a bunch of cheap high-capacity USB disks, but our 
experience is that when we afterwards do the first incremental backup of the 
new node, 20-40% of the files get backed up again - and we can't figure 
out why.

- build a temp TSM laptop server, send it to the remote site, direct first full 
backup to this server, send it back to the backup datacenter and export the 
node(s). Nice and easy, but requires a lot of expensive laptops (and USB disks, 
the remote sites typically contain 2 to 10 TB of file data) to finish the 
project in a reasonable time frame.

So how are you guys doing the first full backup of a remote node when using the 
WAN is not an option?

 - Bent
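The USB-disk seeding Bent describes could be scripted roughly as follows; the robocopy step and its flags are illustrative assumptions for preserving attributes and permissions, not the poster's exact procedure:

```
REM At the remote site: copy the share to the USB disk, preserving ACLs and attributes.
robocopy \\newnode\z$ X:\newnode_zdrive /MIR /COPYALL /R:1 /W:1

REM At the TSM datacenter: back the copy up under the remote node's name.
dsmc incr \\newnode\z$ -snapshotroot=X:\newnode_zdrive -asnodename=newnode
```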


Re: TDP Exchange 2010 scheduled backups fail rc=1819

2013-01-16 Thread Bill Boyer
Just on a whim today I had one of the other admins log on to the server,
remove the TSM services, re-add them, and change the log-on-as account for the
TSM Scheduler service. And now it's working.

The first time, I logged on as the user that the backup was going to run as
and added the services. I don't know what difference Windows sees between adding
the services as another user and adding them as the one configured for backups.
But now it's working.

That was the only thing I could think of that was different from my other
TDP Exchange setups on our servers. I did the TSM install/configuration as
my own userid (domain admin) and configured the service with the special
userid that has the appropriate Exchange rights.

Bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Del
Hoobler
Sent: Tuesday, January 15, 2013 7:17 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TDP Exchange 2010 scheduled backups fail rc=1819

Hi Bill,

If you aren't making any progress, you should open a PMR.
If the support team gets a trace, they can tell you what is failing.
If you know how and you want to do a little experimenting yourself, you can
turn on tracing for the DP/Exchange client and the DSMAGENT.

Thanks,

Del




ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 01/15/2013
04:12:26 PM:

 From: Bill Boyer bjdbo...@comcast.net
 To: ADSM-L@vm.marist.edu,
 Date: 01/15/2013 06:53 PM
 Subject: Re: TDP Exchange 2010 scheduled backups fail rc=1819 Sent by:
 ADSM: Dist Stor Manager ADSM-L@vm.marist.edu

 The TSM Scheduler service is running under the domain userid in the
 Organization Management group. If I log on as that userid, I can run
 the backups. But running them as a scheduled task fails. That's what's
 strange... same userid... works logged on... fails as a service.


Re: TSM 1st full backup of remote low-bandwidth nodes

2013-01-16 Thread Huebner, Andy
The only realistic way to complete what you are trying to do over the wire 
is a client-side de-duplication solution. In that case you can seed the backup 
server with local data.
I had not understood the quantity of sites you have to deal with.

Andy Huebner



Re: TSM 1st full backup of remote low-bandwidth nodes

2013-01-16 Thread Chavdar Cholev
If you have TSM servers at the remote sites, you can export the data to tape and
import it at the datacenter.
If you do not have TSM servers there, you can use a trial TSM server to migrate
the data ...
You would also benefit from the currently installed TSM clients and would not
need to install a different backup client ...

Regards
Chavdar
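For the export/import route, the server-side TSM commands would look roughly like this; the node name, device class, and volume names are illustrative assumptions:

```
/* On the remote (or trial) TSM server: write the node's data to tape. */
export node newnode filedata=all devclass=ltoclass scratch=yes

/* After shipping the tapes, on the datacenter TSM server: */
import node newnode filedata=all devclass=ltoclass volumenames=VOL001
```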


On Wed, Jan 16, 2013 at 7:14 PM, Huebner, Andy andy.hueb...@alcon.com wrote:


SV: SV: Exchange 2010 and Error code ACN5073E and ACN6068I

2013-01-16 Thread Christian Svensson
Hi Del,
I'm talking to IBM Level 2 support at the moment, but I think they have gone 
home for the day now in the UK.
The last test we did was to run a couple of PowerShell commands, and this is 
the output L2 support saw.

2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'DAG1DB1' : Status = Mounted.
2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'DAG1DB2' : Status = Healthy.
2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'DAG1DB3' : Status = Mounted.
2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'DAG1DB4' : Status = Healthy.
2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'DAG1DB5' : Status = Healthy.
2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'dag1_test' : Status = Mounted.
2013-01-16 15:18:38.788 [011060] [9320] : TdpExcApi.cpp   (11099): Database 
'pf01' : Status = .

But when I run the following commands:
Get-MailboxDatabaseCopyStatus -Identity DAG1DB5
Get-MailboxDatabase -Identity DAG1DB5

Name  Status  CopyQueue 
ReplayQueue LastInspectedLogTime   ContentIndex
  LengthLength  
   State   
  --  - 
---    
DAG1DB5\MAIL1Healthy 0 1   
2013-01-16 17:22:34Healthy 
DAG1DB5\MAIL2Mounted 00 
 Healthy 

Name   Server  RecoveryReplicationType  
   
   --  ---  
   
DAG1DB5  MAIL2False   Remote
  

But when I run with Format-List, I can see that no backup is in progress.
We are now rebooting one server at a time and also dismounting/remounting each 
database to see if that helps. But we don't think it will.

Another thing I saw in my trace log that differs from a working test 
system is the following:
2013-01-16 10:16:43.768 [010132] [10524] : psex07ut.cpp(7229): Exit 
psExchangeGetDAGInfo, rc = 0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  (1037): Enter 
CLinkedList::getCount() - 0xeeaa0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  (1045): Exit 
CLinkedList::getCount(), count = 0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  (1037): Enter 
CLinkedList::getCount() - 0xeeaa0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  (1045): Exit 
CLinkedList::getCount(), count = 0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  (1037): Enter 
CLinkedList::getCount() - 0xeeaa0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  (1045): Exit 
CLinkedList::getCount(), count = 0
2013-01-16 10:16:43.768 [010132] [10524] : tdpexcc.cpp (6099): main(): 
ffrc=0, qrVSSQueryOutputList.getCount()=0
2013-01-16 10:16:43.768 [010132] [10524] : tdpexcc.cpp (6103): main(): 
No selectable Exchange VSS components are available.
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  ( 256): Enter 
CLinkedList::~CLinkedList() - 0xeeea0
2013-01-16 10:16:43.768 [010132] [10524] : linklist.h  ( 698): Enter 
CLinkedList::freeAll() - 0xeeea0
  m_pHead: 0

And the line where the trace says No selectable Exchange VSS components are 
available. is where production differs from the test system.

Best Regards
Christian Svensson

Cell: +46-70-325 1577
E-mail: christian.svens...@cristie.se

Säkra återläsningar.



-Original Message-
From: Del Hoobler [mailto:hoob...@us.ibm.com] 
Sent: 16 January 2013 15:50
To: ADSM-L@VM.MARIST.EDU
Subject: Re: SV: Exchange 2010 and Error code ACN5073E and ACN6068I


Re: TSM 1st full backup of remote low-bandwidth nodes

2013-01-16 Thread Rick Adamson
Bent,
I have been down the road that you are about to travel, only on a smaller scale 
(26 sites).
It was, however, before TSM offered de-duplication, but I am sure that's of 
little help regarding your initial backup. Some of our sites had 256k lines and 
a few had single T1s. All of them had some degree of latency issues.

After proposing several solutions to our management team, none of which was 
attractive for one reason or another, the decision was made that we would kick 
the initial backups off across the WAN. They could only run after close of 
business each day, and we were held responsible for canceling them during 
business hours. Fortunately, they could run all weekend long, or they might 
still be running. Eventually they completed, but that was only the beginning of 
the nightmare. Daily backups would often run all night and bleed into business 
hours, at which time management would get a call from either frustrated end 
users or our network team complaining about the processes bringing the WAN to 
its knees. I won't even get into the whole nightmare of performing a simple 
restore!

After wrestling with the poor performance, customers at the sites screaming 
about the backups hogging the WAN, and the TSM admins reminding everyone how 
ugly recovery would be, the decision was made to budget for increasing the 
bandwidth. In the end I think management regretted not procuring the funds 
needed to address WAN speed up front, as they had to endure an enormous amount 
of criticism and bad PR due to the aforementioned issues.

While I understand that none of this may be what you want to hear, or the 
solution to your immediate need, the bottom line is that the key to a competent 
centralized backup solution is procuring the tools that will make it operate 
efficiently. Sure, increasing bandwidth across 100+ sites is a costly venture, 
but I will go out on a limb and guess that it's much cheaper and more efficient 
than buying 100+ backup servers plus the associated licensing and storage for 
each location. Not to mention the man-hours to manage them.

On a closing note regarding our implementation: once management committed to, 
and budgeted for, the higher-performing WAN links and saw what it meant to the 
solution operationally, their perspective changed. People began bragging about 
the solution rather than being constantly beat up about it.

~Rick





Re: SV: SV: Exchange 2010 and Error code ACN5073E and ACN6068I

2013-01-16 Thread Del Hoobler
Hi Christian,

It appears you did not run the cmdlet with the flags I requested.

More specifically, you *must* add the -Status flag; otherwise
you won't get the actual BackupInProgress setting.

Try this:


   Get-MailboxDatabase -Identity DAG1DB5 -Status | select Name,BackupInProgress


Thanks,

Del



ADSM: Dist Stor Manager ADSM-L@vm.marist.edu wrote on 01/16/2013
01:37:10 PM:

 From: Christian Svensson christian.svens...@cristie.se
 To: ADSM-L@vm.marist.edu,
 Date: 01/16/2013 03:03 PM
 Subject: SV: SV: Exchange 2010 and Error code ACN5073E and ACN6068I
 Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu
