Another thing you could do, besides the benchmark, is run nmon in batch mode on the Linux Spectrum Protect servers and analyze that. Run it for an hour or two while the load is heavy; I'd be happy to help you with the output if you could use a hand, no problem.
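For example, something along these lines should do it with a standard Linux nmon build (these are the common batch-mode flags; check your version's man page and adjust the interval and count to taste):

    # one snapshot every 60 seconds, 120 snapshots = roughly 2 hours of data,
    # written to a <hostname>_<date>_<time>.nmon file in the current directory
    nmon -f -s 60 -c 120

Once it finishes you can load the CSV-style .nmon file into the usual analyzer spreadsheet, or pull a rough CPU summary straight out of it, e.g. (this assumes the usual CPU_ALL column order of User%, Sys%, Wait%, Idle% - double-check the CPU_ALL header line in your own file first):

    grep '^CPU_ALL,T' *.nmon | \
      awk -F, '{u+=$3; s+=$4; w+=$5; n++}
               END {printf "avg user %.1f%%  sys %.1f%%  iowait %.1f%%\n", u/n, s/n, w/n}'

High iowait while the filespace deletion or DB backup is running would point back at the database disks.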
On Tue, Jul 25, 2017 at 8:49 PM, Stefan Folkerts <[email protected]> wrote:

> How many drives, in what kind of a RAID setup? What kind of performance
> are you getting from them, do you have any idea?
>
> I have had nothing but issues with performance on deduplicating setups,
> especially with replication in play, when you use 15K disks.
> I've had setups with 24x15K drives in RAID 10 in a V7000 and I still had
> a hard time getting all the work done; as soon as you drop in some SSDs
> you're set. But I suppose 15K drives can work on small setups.
> I would never use 15K drives again for any kind of disk-based
> deduplicating setup, however. SSDs are not that expensive anymore, and in
> most cases you can use read-intensive SSDs and be fine.
>
> Really, the blueprint benchmark tool works fine and gives a good
> indication of your base disk performance; if it's far below what's
> described for the blueprint, that's probably the problem.
> I would put my money on the 15K drives holding the database being the
> bottleneck, and my suggestion (in the case of insufficient database
> performance) would be to place the database and the active log on SSDs.
>
> On Tue, Jul 25, 2017 at 7:58 PM, Zoltan Forray <[email protected]> wrote:
>
>> The two database filesystems (1TB each) are on internal, 15K SAS drives.
>>
>> On Tue, Jul 25, 2017 at 1:34 PM, Stefan Folkerts <[email protected]>
>> wrote:
>>
>> > My question would be: on what type of storage is the Spectrum Protect
>> > database located?
>> > Second question: have you run the IBM blueprint benchmark tool on the
>> > storage pool and database storage, and if so, what were the results?
>> >
>> > On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic <[email protected]>
>> > wrote:
>> >
>> > > Not sure, of course... but I would blame NFS.
>> > >
>> > > Did you check the negotiated speed of your NFS eth 10G ifaces?
>> > > And that network?
>> > >
>> > > Regards,
>> > >
>> > > --
>> > > Sasa Drnjevic
>> > > www.srce.unizg.hr
>> > >
>> > > On 24.7.2017. 15:49, Zoltan Forray wrote:
>> > > > 8-cores/16-threads. It wasn't bad when it was replicating from
>> > > > 4 SP/TSM servers. We had to stop all replication due to running out
>> > > > of space, and until I finish this cleanup I have been holding off
>> > > > replication. So the deletion has been running standalone.
>> > > >
>> > > > I forgot to mention that DB backups are also running very long. A
>> > > > 1.5TB DB backup runs 8+ hours to NFS storage. These are connected
>> > > > via 10G.
>> > > >
>> > > > On Mon, Jul 24, 2017 at 9:41 AM, Sasa Drnjevic <[email protected]>
>> > > > wrote:
>> > > >
>> > > >> On 24.7.2017. 15:25, Zoltan Forray wrote:
>> > > >>> Due to lack of resources, we have had to stop replication on one
>> > > >>> of our SP servers. The replication target server is 7.1.6.3 on
>> > > >>> RHEL 7, a Dell T710 with 192GB RAM and NFS/ISILON storage.
>> > > >>>
>> > > >>> After removing replication from the nodes on the source server, I
>> > > >>> have been cleaning up the replication server by deleting the
>> > > >>> filespaces for the nodes we are no longer replicating.
>> > > >>>
>> > > >>> My issue is that the filespace deletion on the replication server
>> > > >>> is taking forever. It took over a week to delete one filespace
>> > > >>> with 31 million objects?
>> > > >>
>> > > >> That is definitely tooooo loooong :-(
>> > > >>
>> > > >> It would take 6-8 hrs max in my environment, even under "standard"
>> > > >> load...
>> > > >>
>> > > >> How many CPU cores does it have?
>> > > >>
>> > > >> And how is/was it performing in the role of a target replication
>> > > >> server, performance-wise?
>> > > >>
>> > > >> Regards,
>> > > >>
>> > > >> --
>> > > >> Sasa Drnjevic
>> > > >> www.srce.unizg.hr
>> > > >>
>> > > >>> To me it is highly unusual to take this long. Your thoughts on
>> > > >>> this?
>> > > >>>
>> > > >>> --
>> > > >>> *Zoltan Forray*
>> > > >>> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
>> > > >>> Xymon Monitor Administrator
>> > > >>> VMware Administrator
>> > > >>> Virginia Commonwealth University
>> > > >>> UCC/Office of Technology Services
>> > > >>> www.ucc.vcu.edu
>> > > >>> [email protected] - 804-828-4807
>> > > >>> Don't be a phishing victim - VCU and other reputable organizations
>> > > >>> will never use email to request that you reply with your password,
>> > > >>> social security number or confidential personal information. For
>> > > >>> more details visit http://infosecurity.vcu.edu/phishing.html
>> > > >
>> > > > --
>> > > > *Zoltan Forray*
>> > > > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
>> > > > Xymon Monitor Administrator
>> > > > VMware Administrator
>> > > > Virginia Commonwealth University
>> > > > UCC/Office of Technology Services
>> > > > www.ucc.vcu.edu
>> > > > [email protected] - 804-828-4807
>> > > > Don't be a phishing victim - VCU and other reputable organizations
>> > > > will never use email to request that you reply with your password,
>> > > > social security number or confidential personal information. For
>> > > > more details visit http://infosecurity.vcu.edu/phishing.html
>>
>> --
>> *Zoltan Forray*
>> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
>> Xymon Monitor Administrator
>> VMware Administrator
>> Virginia Commonwealth University
>> UCC/Office of Technology Services
>> www.ucc.vcu.edu
>> [email protected] - 804-828-4807
>> Don't be a phishing victim - VCU and other reputable organizations will
>> never use email to request that you reply with your password, social
>> security number or confidential personal information. For more details
>> visit http://infosecurity.vcu.edu/phishing.html
