I've had a bit of a sneak peek into what is coming from IBM, and I must say
it's the first time in a while that I've been excited about what's coming
from IBM in this domain. It's a major change and I think it was needed, but
the architecture means that it's very capable, at least in the datacenter, from
Hi Eric, I am by no means a programmer, but I have done some work with
PowerShell. I would loop through the servers in a while loop and use
something like Start-Transcript (the PowerShell cmdlet) to capture and log
the output of the dsmc command to a temp file on disk after each dsmc
attempt, and then use Select-String (
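For illustration, the same pattern in bash (hostnames, paths and the ssh transport are assumptions; Start-Transcript and Select-String fill these roles on the PowerShell side):

```shell
#!/usr/bin/env bash
# Sketch of the capture-and-scan idea: log each dsmc run's output to a file,
# then scan that file for client error codes (ANS....E). Hostnames and the
# log path are placeholders.
scan_log() {
  # dsmc error messages carry codes of the form ANS####E
  grep -E 'ANS[0-9]{4}E' "$1"
}

# Guarded so the sketch can be run without contacting any servers.
if [ "${RUN_FOR_REAL:-0}" = 1 ]; then
  for host in server1 server2; do      # placeholder hostnames
    log="/tmp/dsmc-$host.log"
    ssh "$host" dsmc incremental > "$log" 2>&1
    scan_log "$log" && echo "$host: errors found"
  done
fi
```

The `scan_log` grep is the Select-String equivalent; anything it prints is a failed run worth a closer look.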
Hi all,
Quick question.
Has anybody ever witnessed client schedules going into a missed state
because the replication target is down?
I saw this happen last weekend on a very recent SP 8.1 version running on
Linux; no other changes were made, and missed schedules are very rare in this
ine to limit
> the # of containers processed in one day
>
>
> Del
>
>
>
>
> "ADSM: Dist Stor Manager" wrote on 08/01/2020
> 08:32:13 AM:
>
> > From: Stefan Folkerts
> > To: ADSM-L@VM.MARIST.EDU
> >
This automatic defragmentation needs an additional parameter to limit the
number of containers it will defrag in a day, or a maximum percentage of
actual space used that, once reached, stops the defrag so it can't fill up
the disk to 100% anymore.
I see too many cases of Spectrum Protect filling the
The number of TBs is important, of course, because the system must be able
to provide that amount of storage, but it's also about performance.
We use IBM V5030E systems, and under maximum load with 48 nearline drives
these systems are at about 30% load, so they are overkill for 48-drive
systems, the
I've created my own in the past using a Linux system, a bash script and the
IBM tape tools.
I would create a loop to mount each slot's cartridge in a drive and write
until full, then eject it back to its slot and move on to the next element
number in the library and do the same.
It would take a long time to do this and
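The loop I used looked roughly like this; everything here is from memory, and the itdt changer invocation is an assumption, so verify your itdt build's actual move syntax and your library's real element addresses before trying anything like it:

```shell
#!/usr/bin/env bash
# Rough shape of the exercise loop. The element numbers are invented, and the
# itdt "move" invocation is an assumption -- check your itdt version first.
CHANGER=/dev/smc0      # placeholder changer device
DRIVE_ELEMENT=500      # placeholder drive element address
FIRST_SLOT=1025        # placeholder first storage slot
LAST_SLOT=1072         # placeholder last storage slot

exercise_slot() {
  slot=$1
  itdt -f "$CHANGER" move "$slot" "$DRIVE_ELEMENT"   # mount the cartridge
  # ... write until the cartridge is full with the IBM tape tools here ...
  itdt -f "$CHANGER" move "$DRIVE_ELEMENT" "$slot"   # eject back to its slot
}

for slot in $(seq "$FIRST_SLOT" "$LAST_SLOT"); do
  echo "would exercise slot $slot"
  # exercise_slot "$slot"   # left commented out: destructive and very slow
done
```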
You can't convert; you have to retrieve to disk and re-archive into
Spectrum Protect.
I've done this for other systems, and I've also moved archives from Spectrum
Protect to other systems.
Scripting is what I used: inventory all the archives via a script and
generate a script that retrieves them to a logical
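The inventory step can come straight out of the ARCHIVES table; a sketch (admin credentials, node name and the /staging target are placeholders):

```shell
#!/usr/bin/env bash
# Sketch of the inventory-then-retrieve approach. Credentials, node name and
# the /staging target are placeholders.
build_retrieve() {
  # turn one "filespace<TAB>hl<TAB>ll" inventory line into a dsmc retrieve
  IFS=$'\t' read -r fs hl ll <<< "$1"
  printf 'dsmc retrieve "{%s}%s%s" /staging/ -subdir=no\n' "$fs" "$hl" "$ll"
}

# Real inventory (placeholder credentials), one tab-separated line per object:
#   dsmadmc -id=admin -pa=secret -dataonly=yes -tab \
#     "select filespace_name, hl_name, ll_name from archives \
#      where node_name='MYNODE'" > archives.tsv
# Demo line in the same shape:
line=$'/home\t/data/\treport.dat'
build_retrieve "$line"
# prints: dsmc retrieve "{/home}/data/report.dat" /staging/ -subdir=no
```

Feeding every line of `archives.tsv` through `build_retrieve` gives you the retrieve script; re-archiving from the staging area is then a second, similar loop.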
I asked IBM at the SP Symposium in Germany two years ago what I should
expect performance-wise when running my SP server as a VM.
They gave me an indication of -20% relative to the blueprints, based on
blueprint hardware plus vSphere 6.5 or higher.
I've been running a few SP servers in our cloud
erver becomes slow. We brought it to the attention of the
> developers, but no response yet...
> Thanks for your help!
>
> Kind regards,
> Eric van Loon
> Air France/KLM Storage & Backup
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
Eric,
What happens when you benchmark the DB volumes using the tool provided with
the blueprints on an idle system, does the system also slow down with
commands such as q stgpool or does it stay fast when the benchmark is
running on all volumes?
Also, what kind of a result does the benchmark give
I think one protect only works for B/A clients and VMs, not for TDPs, but
I could be wrong or that information might be outdated (I think this was
the case for 8.1.7).
So I'm thinking this might require dumps to disk and archives, with removal
via the B/A client, for the monthlies and yearlies.
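If it does come down to dumps plus archives, a minimal sketch of the monthly job (the management class name and path are made up; -archmc, -subdir and -deletefiles are standard dsmc archive options):

```shell
# Hypothetical monthly job: a TDP dump has already landed in DUMPDIR; archive
# it under a long-retention management class and remove the on-disk copy.
# MONTHLY7Y and the path are invented names.
DUMPDIR=/backup/dumps/monthly
CMD="dsmc archive $DUMPDIR/ -subdir=yes -archmc=MONTHLY7Y -deletefiles"
echo "$CMD"   # printed here as a sketch; run it where dsmc is installed
```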
> -Rick Adamson
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager On Behalf Of Stefan
> Folkerts
> Sent: Monday, July 8, 2019 3:23 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Spectrum Protect and The mystery of data reduction
> differences on th
barely anything to be dedupped.
>
> Regards,
>
> Karel
>
> On Mon, 8 Jul 2019 at 08:16, Stefan Folkerts
> wrote:
>
> > Hi all,
> >
> > I'm seeing something very strange at a customer site that has two
> Spectrum
> > Protect servers.
> > On
Hi all,
I'm seeing something very strange at a customer site that has two Spectrum
Protect servers.
One receives backups and is the replication source.
The other is the replication target for that single source and doesn't
receive any backup data.
All data goes into a single containerpool on each
> for an
> SQL backup and shares some background to this type of error, yet you might
> have luck in this case with a fullbackup.
>
> --
> Michael Prix
>
> On Wed, 2019-06-19 at 11:25 +0200, Stefan Folkerts wrote:
> > Thanks Michael, we are opening a pmr @IBM to see what t
Michael Prix
>
> On Tue, 2019-06-18 at 08:51 +0200, Stefan Folkerts wrote:
> > Hi all,
> >
> > I've got a new one.
> > A customer of ours has set the date of the SP server to December 2019.
> > As you may know, it's currently not December. :-)
> > They accept
Hi all,
I've got a new one.
A customer of ours has set the date of the SP server to December 2019.
As you may know, it's currently not December. :-)
They accepted the date change in SP with an accept date.
Now the issue is when they move the date back to what it actually is they
get this error:
Hi all,
I'm wondering if this is a weird database error or if my syntax isn't
correct and the error on my syntax isn't handled correctly.
I have a customer that is seeing absurd amounts of these warnings in the
activity log and, of course, the daily reporting:
06/03/2019 13:20:47 ANR3692W A
This is fantastic, do you plan on maintaining these when new versions
are released?
On Fri, May 24, 2019 at 8:47 AM Leif Torstensen wrote:
>
> Hi
>
> We've done a lot of work making a WindowsPE boot ISO for Hyper-V and VMware
> baremetal recovery (probably also usable on physical servers) and
Did you use something like iperf with a long and heavy load? A bad NIC or
driver might cause this, so it might still be the network.
On Mon, May 13, 2019 at 4:15 PM Bjørn Nachtwey
wrote:
> Hi all,
>
> we planned to switch from COPYPOOL to Replication for having a second
> copy of the data,
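The long, heavy run suggested above might look like this with iperf3 (the target host is a placeholder; -t, -P and -R are standard iperf3 flags):

```shell
# Placeholder address; run "iperf3 -s" on the backup server first.
TARGET=192.0.2.10
# Ten minutes with 8 parallel streams; short default runs often miss a NIC
# or driver that only misbehaves under sustained load. -R tests the reverse
# direction as well.
IPERF_CMD="iperf3 -c $TARGET -t 600 -P 8"
echo "$IPERF_CMD"        # printed as a sketch; run it against the real server
echo "$IPERF_CMD -R"
```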
I think it depends on the size of/load on the server.
For smaller environments we even use the midrange read-intensive SSDs in
RAID 1 (two of them), or sometimes even RAID 5 with a spare, and they run for
years; this works just fine for S and M blueprints.
For very intensive environments we tend to use
"strict-mode" on the 8.1.7 instance, but not on the 7.1.3 instance.
> This will cause the error message "down level".
>
> Regards, Uwe
>
> -Original Message-----
> From: ADSM: Dist Stor Manager On Behalf Of Stefan
> Folkerts
> Sent: Monday, 8 April 2019 19:3
; On Mon, Apr 8, 2019 at 12:30 PM Stefan Folkerts >
> wrote:
>
> > I don’t think so, the q server output says that it is transitional.
> >
> > On Mon, 8 Apr 2019 at 16:35, Zoltan Forray wrote:
> >
> > > Probably due to the enforced-by-default TLS/SSL level of
view.wss?uid=swg22004844
>
> On Mon, Apr 8, 2019 at 3:06 AM Stefan Folkerts
> wrote:
>
> > Hi all,
> >
> > This page of the 8.1.7 knowledge center:
> >
> >
> >
> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.7/srv.admin/r_adm_repl_compat.html
Hi all,
This page of the 8.1.7 knowledge center:
https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.7/srv.admin/r_adm_repl_compat.html
States:
Before you set up replication operations with IBM Spectrum Protect™, you
must ensure that the source and target replication servers are compatible
Steven,
I can help you out if you are looking for a tool to validate Spectrum
Protect for Virtual Environments vSphere backups with a comprehensive
validation that includes a test against an SLA and a quick test to see if
the VM actually boots after the restore and many other features.
I'm not
delete filespaces on the primary that have not been deleted on the replica?
I have seen a lot of that going on; you get stale filespaces on the replica
with the metadata, but due to deduplication the impact is largest on the
database.
If this is not it, I would run the IBM database reorg Perl script; you
ocview.wss?uid=swg22013355
>
> Follow me on: Twitter, developerWorks, LinkedIn
>
>
>
> -Original Message-
> From: Stefan Folkerts
> Sent: Tuesday, October 23, 2018 07:24 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] I keep getting lan-free backup errors, but we have
> mig
Hi,
Quick question; I can't seem to solve this little problem.
I've migrated a couple of nodes from making LAN-free backups to making
LAN-based backups over 10Gb/s to another server (exported node definitions
only).
Everything works fine except that the actlog is filled with these
Any clarity on this yet Hans?
On Tue, Oct 16, 2018 at 5:26 PM Hans Christian Riksheim
wrote:
> Stefan.
>
> Not a known issue. The PMR was submitted last Thursday and it was
> transferred to L2 today.
>
>
> Hans Chr.
>
> On Tue, Oct 16, 2018 at 3:01 PM Stefan Folkert
IMHO Spectrum Protect cloud pools are more for deep archive/low-tier
options, not for secondary-server DR options.
It's simply too slow for 90%+ of the DR cases, and I believe that's why it's
not a traditional copypool option.
You tier data to the cloud because you don't want to keep buying frame
Hi Hans,
Did IBM support report that this is a known issue?
Regards,
Stefan
On Mon, Oct 15, 2018 at 4:06 PM Hans Christian Riksheim
wrote:
> Just putting it out there.
>
> We upgraded our SP servers to 8.1.6.0 and now all our file pools with
> deduplication are corrupt. Several systems all
And with " Okay, having used traditional disks with the containerpool I
will say that that is not a good combination" I mean for the Spectrum
Protect database and activelog, not for the storage of actual data. :-)
On Fri, Sep 28, 2018 at 10:06 AM Stefan Folkerts
wrote:
> Okay
> The lack of SSD's for DB is another reason we can't afford to use
> containers on production servers. Only 1-production server has 1.9TB SSD's
> and with replication and dedup it has quickly consumed 86% of it and we had
> to stop dedup.
>
>
>
> On Thu, Sep 27, 2018 at 1:56 PM St
> node and its backups was unnecessarily complicated but we finally deleted
> it and the directory/container.
>
> On Thu, Sep 27, 2018 at 8:43 AM Stefan Folkerts >
> wrote:
>
> > Zoltan,
> >
> > That is very strange, I've used the containerpool as a replication tar
LE storagepool.
>
> Perhaps next year when we replace one of our local ISP servers with a much
> bigger/beefier (72-threads and 120TB internal disk storage).
>
> On Wed, Sep 26, 2018 at 6:17 AM Stefan Folkerts >
> wrote:
>
> > Zoltan,
> >
> > I'm not sure
Zoltan,
I'm not sure I understand your issues; we use directory containerpools for
all but a few of our Spectrum Protect customers, and they are miles ahead of
what the fileclass-based storagepools bring in terms of performance and
Spectrum Protect database impact (size-wise). Yes, it isn't capable of
nsistent snapshot using MariaDB and
> LVM. Then you can backup the snapshot and in case of a disaster restore
> that. Now, I’ve never attempted this, and I don’t know how to do it, but it
> seems to be the only viable acceptable solution.
> >
> > > On 4 Sep 2018, at 09:
Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Tuesday, 4 September 2018 11:29
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: MariaDB backups using modern MariaDB methods and high
> performance restores
>
> Yes we did
at 11:20 AM Uwe Schreiber
wrote:
> Hi Stefan,
>
> did you have a look at Repostor DATA Protector?
>
> Regards Uwe
>
> > On 04.09.2018 at 09:49, Stefan Folkerts <
> stefan.folke...@gmail.com>:
> >
> > Hi all,
> >
> > I'm
Hi all,
I'm currently looking for the best backup option for a large and extremely
transaction-heavy MariaDB database environment. I'm talking about up to
100,000,000 transactions a year (payment industry).
It needs to connect to Spectrum Protect to store its database data; it is
acceptable if
We have just built a setup at a large university that replicates (on
average) 6.9TB per hour, but that data is deduplicated on the source server.
If you have 10Gb/s+ bandwidth and no limitation on performance at the
source, you should be able to handle 7TB per day (mixed workload, not only
tiny
I'm afraid Remco is right, the server can't (or simply won't) uncompress
the older client-side compression so it will convert it compressed to the
containerpool and you will most likely get very poor deduplication on that
data.
What kind of data is it and for how long do you need to keep it?
On
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, April 12, 2018 2:10 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: disabling compression and/or deduplication for a client
> backing up aga
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
>
hole working together, so far
> without real success : performance is horrible :-(
>
> Cheers.
>
> Arnaud
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, April 05, 2018 5:48 P
Hi,
With the directory containerpool you cannot, as far as I know, disable the
attempt to deduplicate the data; if the data can be deduplicated, it will
be.
You can, however, disable compression at the storagepool level. If you
disable it on the containerpool client-side
TSM
> Include Snapshot Retry
> TSM500*/opt/tivoli/tsm/client/ba/bin/dsm.sys
> No DFS include/exclude statements defined.
>
> Any directory .TsmCacheDir is excluded by software itself
>
> Martin J.
>
> "ADSM: Dist Stor Manager" <
As far as I know nothing is excluded by the code itself; I think this is a
policy thing.
Since you can install the software in different locations and you can use
different names for different files (e.g. log files), IBM can never be 100%
sure that the file it would hardcode an exclude for is the
I know 8.1.4.0 has a defrag option that moves the content to existing
containers that have space, instead of moving the container as a whole; that
might be a solution.
I do think it's new in 8.1.4.0, and I don't think there is a way to get rid
of them before that release.
On Wed, Feb 21, 2018 at
2 processors) and 384GB of
> ram each.
>
> On Sun, Feb 18, 2018 at 10:37 AM, Stefan Folkerts <
> stefan.folke...@gmail.com
> > wrote:
>
> > Hi Tom, are the Exchange servers virtualized on vSphere?
> >
> > On Sat, Feb 17, 2018 at 12:
Hi Tom, are the Exchange servers virtualized on vSphere?
On Sat, Feb 17, 2018 at 12:55 AM, Tom Alverson
wrote:
> >
> >
> > We are trying to speed up our Exchange backups that are currently only
> using about 15% of the network bandwidth. Our servers are running Windows
I understand Remco's point, but I think TSM can handle 7 years of retention
just fine; the format is the bigger challenge here... full VM.
I'm sure there are rules that require you to do this, but 7 years is crazy
long for full-VM backups if it's about a lot of source data.
So far I've always
>I don't you can use cloud storage as a copypool
That was supposed to be "I don't think you can use cloud storage as a
copypool..."
On Fri, Feb 2, 2018 at 12:40 PM, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:
>
> I don't you can use cloud storage as a copypo
I don't you can use cloud storage as a copypool so that would mean you are
placing your primarypool in the cloud and limiting (in an extreme way) your
restore performance.
It is fast enough for archival purposes (if you set it up correctly and
have enough of a local buffering pool and bandwith)
Hi,
Has anybody seen any information from IBM in relation to Spectre & Meltdown
patching for Spectrum Protect servers?
We have a customer who has found that on systems that do lots of small
I/Os, performance can drop 50% on Intel systems; this was seen with
synthetic benchmarks, not actual
I agree with Steven; the only solutions I can think of are build it or buy
it (if it's available; I have never heard of anything like this).
I would design and build a low-level solution myself and test that with a
few users; if that works, try to find somebody who can actually create a
pretty GUI
consideration for a future release.
>
>
> Del
>
>
>
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 12/15/2017
> 02:29:29 AM:
>
> > From: Stefan Folkerts <stefan.folke...@gmail.com>
Hi all,
I have very little experience with Hyper-V backups up until now, but we have
a customer who is interested in this functionality using Spectrum Protect.
My question is: can Hyper-V backups, but especially restores, utilize
multiple sessions to and from the server per backup and restore
Yup, that's the trick, it seems Remco and I wear the same t-shirt. :-)
On Thu, Dec 7, 2017 at 3:04 PM, Remco Post wrote:
> Hi Eric,
>
> 1- on the target copy the domain, change copygroup destinations to
> filepool, act poli
> 2- on the target, reg node on the copied dom
> 3-
That's a way to do it; you do need to reduce max and run an offline reorg of
the database after the conversion to get everything running at 100% speed
without wasted database space.
I prefer to upgrade the old server to 7.1.7 and replicate the data from
tape/VTL/whatever to the new server
Steven,
It's probably not the reply you are looking for, but Spectrum Protect Plus
can do this: you can have multiple SLAs attached to a VM and have one SLA
take a backup every day and retain it for, say, a month, and another SLA
create a backup every year and retain it for 7 years.
With
Hi Robert,
That's not the way you get vault tapes back from the vault.
DR will put vault tapes in vault retrieve for you; you don't have to do
that yourself. It will adhere to this setting (
Nice little challenge, this one.
I would build something where, when they place a trigger file somewhere, it
executes a fixed macro that they can't modify or read on another system.
The macro runs a clientaction within Spectrum Protect that starts the
backup.
So they only place the file somewhere,
Roger,
There was a discussion about a few of the things you are asking about just
a day or so ago; I gave my view on the client and admin situation.
I will use the same old and new definitions as you did.
It basically boils down to this for client and admin sessions:
Once a node uses the
Object-level restores for SharePoint from Spectrum Protect are often done
by using DocAve's solution for SharePoint; it works really well and is
fairly simple to implement.
On Wed, Oct 4, 2017 at 2:41 PM, Kizzire, Chris
wrote:
> Is anyone using SP 8.1.x to backup &
en true, what
> is the mix of client/server/config that might cause communications to be
> disrupted?
>
>
> On Tue, Oct 3, 2017 at 4:41 AM, Stefan Folkerts <stefan.folke...@gmail.com
> >
> wrote:
>
> > A 8.1.2.0 server should work with older clients as long as the no
An 8.1.2.0 server should work with older clients as long as the node is in
transitional mode (q node f=d). When a node jumps to strict because, for
example, an 8.1.2 (or higher) client version restored something from that
node, pre-8.1.2 clients will no longer be able to
connect
Andrew,
Just to be clear, because there is a bit of confusion here (and in other
places) about the new security restrictions of the 8.1.2 release with regard
to older clients.
When we upgrade a Spectrum Protect server to 8.1.2 and use a mix of older
and newer client versions, the Operations Center
option: compression YES
>
> I remind that I backup directly to LTO7 tape (ULTRIUM7C).
>
> Best Regards
>
> Robert
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, Sep
, 2017 at 3:23 PM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:
> Great Stefan
>
> What about the keep mount option need it ?
>
> Best regards
>
> Robert
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
> ---- Original Message
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of >
> > mount points. (SESSION: 20900)
> >
> > Maybe the option in node : Keep Mount Point?: TO YES ( Now is NO)
> >
> > Regards
> >
> > Robert
> >
> >
It might seem pretty obvious, but you need to set resourceutilization to 1,
not 3.
Also, the number of mountpoints doesn't equal the number of tapes a backup
can use; it limits the number of concurrent mountpoints a backup can use.
On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
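For reference, that advice maps to these knobs; the node name is a placeholder, and the RESOURCEUTILIZATION line goes in the client's option file:

```
* client option file (dsm.opt / dsm.sys stanza)
RESOURCEUTILIZATION 1

* and the node attributes discussed in this thread, set server-side:
*   UPDATE NODE <nodename> MAXNUMMP=1 KEEPMP=NO
```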
for long term storage and
> data governance with scale and efficiency".
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, August 31, 2017 1:06 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Spectrum Protect for VE - how to get started
>
> Rick,
>
> I would run the 8.1.1
Rick,
I would run the 8.1.1 VE version if your vSphere stack supports it. The
biggest improvements in my eyes over the 7.1 version are the restore
performance, which can be 5x better in the right configuration (in my
experience), and the tagging support (if your vSphere folder structure will
work with it,
1.7 and we have a smattering of 6.3 and
> older unsupported clients (old Solaris, RHEL x32, Windows 2003 and until
> recently an IRIX box) so 8.1.x is not on any current schedule.
>
> On Wed, Aug 23, 2017 at 8:14 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> &
I read.
>
> Perhaps it is due to TLS 1.2 now turned ON by default. Either way, more
> details/specifics from IBM would be helpful.
>
> On Wed, Aug 23, 2017 at 4:21 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > I agree with Eric and I ha
I agree with Eric, and I have 8.2 VE setups connected to 7.1 servers at
multiple sites; I've never seen any issues. I do find the documentation
strange if it says you need to upgrade your clients before upgrading the
server. The only thing that makes sense to me is client-side dedup with
ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: den 8 augusti 2017 14:54
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] manually set repltcpserveraddress for clients from
> server
>
> Great, there is a solution!
> Strange thing is
Great, there is a solution!
The strange thing is that the client still places the old IP in its
configuration file... very strange.
On Tue, Aug 8, 2017 at 2:31 PM, Anders Räntilä wrote:
> Hi,
>
> SET FAILOVERHLADDRESS xxx
>
> /Anders Räntilä
>
>
>
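On the source server that is a one-liner; the address below is a placeholder for a replication-target address the clients can actually reach:

```
/* run once on the replication source; clients pick this up at logon */
SET FAILOVERHLADDRESS 198.51.100.20
```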
Hi all,
I've been looking and reading, but I can't find it, so it might not exist
yet! :-)
At our current site the clients can't reach the replication server, because
the Spectrum Protect servers replicate via a separate network that the
clients can't reach.
I would like to be able to setopt
Hi Eric,
I've been in this situation a while back, and what I did was about the
same: I ran a query container from a for loop based on the output of the
find command on the filesystems, I believe, and every container I did not
find in Spectrum Protect was deleted (rc != 0 on the dsmadmc q
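A sketch of that orphan check (paths and credentials are invented; the "known" list would come from looping dsmadmc q container exactly as described, and anything flagged should be reviewed by hand before deletion):

```shell
#!/usr/bin/env bash
# Containers present on disk but unknown to the server are orphan candidates.
# comm -23 prints lines that appear only in the first (sorted) list.
orphans() {
  comm -23 <(sort "$1") <(sort "$2")
}

# Real use (placeholder credentials/paths):
#   find /tsm/containers -type f > ondisk.txt
#   known.txt = every path for which "dsmadmc ... q container <path>" gives rc 0
# Demo with made-up container names:
orphans <(printf '/p/0001.dcf\n/p/0002.dcf\n') <(printf '/p/0001.dcf\n')
# prints /p/0002.dcf -- on disk, but not known to Spectrum Protect
```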
Remco, all this information about NDMP and the current restrictions of the
containerpool is well documented. I believe it's (at least in part) due to
the agile development process: we get our new pools and features quicker,
but it will take a bit of time for them to get all the bells and whistles.
I
Same here; we have more than a few clients running NAS solutions going up
to, I believe, about 200TB, and the backups are done via a Windows server
connected to CIFS shares.
File attributes might become an issue if the NAS is sharing via both CIFS
and NFS, but other than that it seems to work okay; it's a
The reason is usually given just before the first line in your log on the
source server, so the line above the one with the ANR0986I code.
On Wed, Jul 26, 2017 at 8:07 PM, Tim Brown wrote:
> TSM node replication fails , no indication why on source or target server?
>
>
!
On Thu, 27 Jul 2017 at 06:25, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:
>
> The 2TB archive log has never been completely full in my case no, it's IBM
> Blueprint spec and it gives you some time when the database backup breaks
> for whatever reason, also, it's just 2T
ver, is highly unlikely. There has to be a less expensive
> way to boost performance. Obviously getting more CPU threads is important.
>
> Thank you for all your help/knowledge. It is greatly appreciated!
>
> On Wed, Jul 26, 2017 at 3:40 PM, Stefan Folkerts <
> stefan.folke...@
average is still >25.
>
> I really think the additional memory is killing this box. It was never
> this slow or overloaded before!
>
> On Wed, Jul 26, 2017 at 8:26 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > Oh, I just now read the 16 threads correctly
> 192GB.
>
> On Wed, Jul 26, 2017 at 3:16 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > Interesting, why would NFS be the problem if the deletion of objects
> > doesn't really touch the storagepools?
> >
> > I would wager that a str
Interesting; why would NFS be the problem if the deletion of objects
doesn't really touch the storagepools?
I would wager that a straight-up dd on the system to create a large file
via 10Gb/s on NFS would be blazing fast, but the database backup is slow
because it's almost never idle, it's always
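That dd sanity check is quick to run; a sketch (the target path is a placeholder for a file on the NFS mount, and conv=fsync forces the data out so the page cache doesn't flatter the number):

```shell
#!/usr/bin/env bash
# Sequential-write check; point TARGET at a file on the NFS mount in real use.
# 64 MiB keeps the sketch fast -- use many GiB for a meaningful measurement.
TARGET="${TARGET:-/tmp/ddtest.bin}"
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -1
rm -f "$TARGET"
```

The last line of dd's output is the throughput summary; comparing that number against the database backup rate separates a storage problem from a busy-database problem.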
g 15+ hours with very little load (stopped all replications
> since they were becoming never-ending). Deleting of lots of objects
> (20-million is one example) is running into many days if not a week.
>
> On Tue, Jul 25, 2017 at 2:57 PM, Stefan Folkerts <
> stefan.folke...@gmail.
at 7:58 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
>
>> The two database filesystems (1TB each) are on internal, 15K SAS drives.
>>
>> On Tue, Jul 25, 2017 at 1:34 PM, Stefan Folkerts <
>> stefan.folke...@gmail.com>
>> wrote:
>>
>> > My
e, Jul 25, 2017 at 1:34 PM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > My question would be on what type of storage is the Spectrum Protect
> > database located.
> > Second question, have you run the IBM blueprint benchmark tool on the
> >
My question would be on what type of storage is the Spectrum Protect
database located.
Second question, have you run the IBM blueprint benchmark tool on the
storagepool and database storage, and if so, what were the results?
On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic
in the
> right direction.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: donderdag 6 juli 2017 9:13
> To: AD
Hi Eric,
I think some Linux sysctl tuning might be required to raise the Linux OS
TCP window limit from 224.
With scaling enabled the system can adjust it if needed; that might work for
the TSM client as well.
net.ipv4.tcp_window_scaling = 1
Regards,
Stefan
On Wed, Jul 5, 2017 at 4:30 PM, Loon,
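Made persistent, that tuning might look like this (the scaling line is the one from the message; the rmem/wmem ceilings are illustrative values, not recommendations):

```
# /etc/sysctl.d/99-backup-tuning.conf
net.ipv4.tcp_window_scaling = 1
# illustrative ceilings only -- size these for your own bandwidth-delay product
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# apply with: sysctl --system
```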
It's a safety mechanism, I believe. You can use decommission vm to
"decommission" a filespace; if you use that, the data in the filespace will
expire via the policy settings on the replica as well.
There is no way to actually delete data on the replication target with a
delete filespace on the source.
.
On Wed, Jun 28, 2017 at 4:43 PM, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:
>
>
> Does anybody know what happens to Spectrum Protect for VE when the vCenter
> switches to a certificate signed by the customers root CA?
>
> Regards,
>Stefan
>
>