On Thu, Jul 27, 2017 at 12:56 AM, Stefan Folkerts wrote:
> The read-intensive SSDs can be the enterprise value type from Lenovo,
I will pitch the idea but am pretty sure I will be wasting my time. There
are two other TSM servers that must be replaced/budgeted next year (we are
state budget July
On Thu, Jul 27, 2017 at 2:24 AM, Chavdar Cholev
wrote:
> Did you try server instrumentation? If there is an issue with storage or
> network, you should be able to identify it from the server instrumentation
> output.
>
Not yet. My first objective right now is to upgrade from 7.1.6.3 to
7.1.7.300 to add
Hi Zoltan,
Did you try server instrumentation? If there is an issue with storage or
network, you should be able to identify it from the server instrumentation
output.
Chavdar
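As a concrete example, collecting instrumentation from an administrative
session might look like the sketch below; the admin credentials and output
path are placeholders, and it assumes a server level where the
INSTRUMENTATION commands are available:

    # Start instrumentation while the slow operation (e.g. the DB backup) runs
    dsmadmc -id=admin -password=secret "instrumentation begin"
    # ...let the workload run for a while, then dump per-thread timings
    dsmadmc -id=admin -password=secret "instrumentation end file=/tmp/instr.out"

The output breaks each server thread's time into categories (disk read/write,
network, lock waits), which is what lets you point at storage versus network.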
On Thursday, July 27, 2017, Stefan Folkerts
wrote:
> Oh, and one more thing.
> About not putting SSDs in the replication ser
Oh, and one more thing.
About not putting SSDs in the replication server, I think that might not
be a smart place to save money. I understand the reasoning behind it, but
I've seen enough trouble with spinning disks in replicating and
deduplicating setups to want to try and warn people and explain
No, the 2TB archive log has never been completely full in my case. It's the
IBM Blueprint spec, and it gives you some time when the database backup breaks
for whatever reason. Also, it's just 2TB of slow nearline storage, so it
doesn't cost much at all.
Have you done something like run a dd on the NFS ar
2TB archlog? I have never had more than 400GB on any of my systems and
have never filled up any of them, until now. You must have a huge amount
of backups.
Per your suggestion, we are running nmon for a 24-hour period to see what
it comes up with. I am finding that running the DBBackup locally
Yes, a 300GB archive log is tiny; that won't work for anything but the
smallest of environments. I believe a medium-sized server has a 2TB archive
log.
Database backups take a lot of extra time when reorgs and/or (for example)
dereference processes are running on 15K database disks; the system simpl
Another point of interest is the archlog filesystem. We originally had it
at 300GB, but it kept constantly overflowing and crashing since the DB backups
that trigger at 80% wouldn't finish (>5 hours) before it reached 100%. So
we recently increased it to 1TB. Now, the last DB backup has been running
fo
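A quick way to watch that race between the 80% trigger and the backup is to
poll the archive log filesystem and the last-backup timestamp; the mount
point and credentials below are placeholders:

    # Fill level of the archive log filesystem (placeholder mount point)
    df -h /tsminst1/archlog
    # "Last Complete Backup Date/Time" shows whether the DB backup keeps up
    dsmadmc -id=admin -password=secret "query db format=detailed"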
It has 2 Xeon X5560 CPUs (4 cores / 8 threads each).
I would have thought for a replication target server it would be enough
power - not like my other servers that handle hundreds of backup sessions
every day!
We've been looking at replacing it with some old Google Appliance servers
we have decommi
Oh, I just now read the 16 threads correctly, I was thinking you wrote 16
cores!
8 cores is far below specification if you're running M-size blueprint ingest
figures.
I've seen 16-core Intel servers (2016-spec Xeon CPUs) go up to 70%
utilization, so that kind of load would never work on 8 cores, but
I kinda feel the same way, since my networking folks say it isn't the 10G
links (Xymon shows peaks of 2Gb), even though at its peak processing load
it would be handling 5 TSM servers sending replication across the same 10G
links also used for the NFS.
If the current processes ever finish (delete o
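One way to rule the links in or out independently of NFS is a raw TCP test
between a source server and the replication target; iperf3 on both ends is
an assumption here (any raw throughput test would do), and the host name is
a placeholder:

    # On the replication target:
    iperf3 -s
    # On one source TSM server: 4 parallel streams for 30 seconds
    iperf3 -c repl-target.example.edu -P 4 -t 30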
Interesting, why would NFS be the problem if the deletion of objects
doesn't really touch the storagepools?
I would wager that a straight-up dd on the system to create a large file
via 10Gb/s on NFS would be blazing fast, but the database backup is slow
because it's almost never idle, it's always b
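The dd test described above might look like the following sketch; the mount
point and sizes are illustrative, and the direct-I/O flags keep the page
cache from flattering the numbers:

    # Sequential write of 10GB to the NFS mount (placeholder path)
    dd if=/dev/zero of=/isilon/tsm/ddtest bs=1M count=10240 oflag=direct
    # Read it back the same way
    dd if=/isilon/tsm/ddtest of=/dev/null bs=1M iflag=direct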
You're welcome, happy to help.
Deleting objects is very database- and active-log-intensive, but it also
hits the CPU. That said, I've never seen a 16-core machine really struggle
on CPU within the blueprint specs, even with compression enabled on the
containerpool running maximum backup performance
Thanks for the suggestions. I am looking at the "blueprint" stuff and it
looks pretty heavy-duty. I will look into running nmon. Things seem to
have gotten worse since I upgraded the memory. DB backups to NFS/ISILON
are now running 15+ hours with very little load (stopped all replications
since
Another thing you could do besides the benchmark is run nmon in batch mode
on the Linux Spectrum Protect servers and analyze that. Run it for an hour
or two when the load is heavy; I could help you with the output if you could
use some help, no problem.
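For reference, batch mode is just the -f flag plus an interval and count;
this example captures two hours at 30-second snapshots and writes a
<hostname>_<date>_<time>.nmon file you can feed to the nmon analyser:

    # 240 snapshots x 30 seconds = 2 hours of data
    nmon -f -s 30 -c 240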
On Tue, Jul 25, 2017 at 8:49 PM, Stefan Folkerts wrote:
How many drives in what kind of a RAID setup? What kind of performance are
you getting from them; do you have any idea?
I have had nothing but issues with performance on deduplicating setups,
especially with replication in play when you use 15K disks.
I've had setups with 24x15K drives in RAID 10
The two database filesystems (1TB each) are on internal, 15K SAS drives.
On Tue, Jul 25, 2017 at 1:34 PM, Stefan Folkerts
wrote:
> My question would be on what type of storage is the Spectrum Protect
> database located.
> Second question, have you run the IBM blueprint benchmark tool on the
> st
My question would be on what type of storage is the Spectrum Protect
database located.
Second question, have you run the IBM blueprint benchmark tool on the
storagepool and database storage, and if so, what were the results?
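The blueprint kit ships its own benchmark tooling; as a rough stand-in (not
the blueprint tool itself), fio can approximate small-block random database
I/O. The directory, block size, and job count here are illustrative
assumptions:

    # Random 8k read/write mix against the database filesystem (placeholder path)
    fio --name=dbsim --directory=/tsmdb01 --rw=randrw --bs=8k \
        --size=4g --numjobs=4 --direct=1 --runtime=120 --time_based \
        --group_reporting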
On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic
wrote:
> Not sure of cou
Not sure, of course... but I would blame NFS.
Did you check the negotiated speed of your NFS eth 10G ifaces?
And that network?
Regards,
--
Sasa Drnjevic
www.srce.unizg.hr
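Checking the negotiated speed and the NFS mount parameters is quick; the
interface name below is a placeholder:

    # Confirm the interface negotiated 10000Mb/s full duplex
    ethtool eth0 | grep -E 'Speed|Duplex'
    # Show negotiated NFS mount options (version, rsize/wsize)
    nfsstat -m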
On 24.7.2017. 15:49, Zoltan Forray wrote:
> 8-cores/16-threads. It wasn't bad when it was replicating from 4 SP/TSM
> server
8-cores/16-threads. It wasn't bad when it was replicating from 4 SP/TSM
servers. We had to stop all replication due to running out of space and
until I finish this cleanup, I have been holding off replication. So, the
deletion has been running standalone.
I forgot to mention that DB backups are
On 24.7.2017. 15:25, Zoltan Forray wrote:
> Due to lack of resources, we have had to stop replication on one of our SP
> servers. The replication target server is 7.1.6.3 RHEL 7, Dell T710 with
> 192GB RAM. NFS/ISILON storage.
>
> After removing replication from the nodes on source server, I have
Due to lack of resources, we have had to stop replication on one of our SP
servers. The replication target server is 7.1.6.3 RHEL 7, Dell T710 with
192GB RAM. NFS/ISILON storage.
After removing replication from the nodes on source server, I have been
cleaning up the replication server by deleting
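The cleanup described usually combines taking the node out of replication on
the source with deleting its data on the target; the node name below is a
placeholder, and DELETE FILESPACE runs as a background process you can watch
with QUERY PROCESS:

    # On the source server: remove the node from replication
    dsmadmc -id=admin -password=secret "remove replnode NODE1"
    # On the replication target: delete that node's replicated data
    dsmadmc -id=admin -password=secret "delete filespace NODE1 * type=any"
    dsmadmc -id=admin -password=secret "query process"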