We are implementing a new backup system from a vendor and I am wondering about the performance impact. Any ideas on how I can test the backup throughput or the impact on performance? What should be on my checklist? Please advise. Thank you!
Personally I think it depends on the process/method the vendor is using, the infrastructure you have available, and the size of your database(s). We use SQLSafe by Idera (but Redgate's tools are just as good); it has minimal impact on our production system because we use minimal compression and the backups go to a separate disk array from the data files/log files. This particular tool lets you monitor, in real time, the throughput you are getting for the different backup types. When implementing, we played about with the different settings to try to maximize the throughput while also minimizing the performance impact on the system. I would suggest doing this in a QA/TEST environment so you have a reasonable baseline before you roll it out to production.
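If it helps, you can establish a vendor-neutral baseline with a plain native backup before you start tuning the vendor tool: `STATS` reports progress while it runs, and the backup history in `msdb` lets you compute effective throughput afterwards. A rough sketch (the database name and backup path are placeholders for your environment):

```sql
-- Baseline: native full backup, reporting progress every 10 percent.
-- [YourDb] and the disk path are placeholders -- adjust for your environment.
BACKUP DATABASE [YourDb]
TO DISK = N'E:\Backups\YourDb_baseline.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Afterwards, compute effective throughput (MB/s) from backup history in msdb.
SELECT TOP (5)
    bs.database_name,
    bs.backup_size / 1048576.0 AS backup_mb,
    DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS seconds,
    bs.backup_size / 1048576.0
        / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0)
        AS mb_per_sec
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'YourDb'
ORDER BY bs.backup_finish_date DESC;
```

Run the same backup with and without compression (and with the vendor tool) and compare the mb_per_sec figures to see what your kit can actually sustain.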
I suggest you take a test box, run your normal workload, and then run a backup at the same time. As for performance impact: you can expect the same or a similar impact as a native SQL Server backup. You don't mention which vendor's solution you are using, but as far as I know they all use the official backup APIs from SQL Server. Most vendors just add processing on top, such as encryption or compression, and that is unlikely to impact system performance noticeably compared to native backups.
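While the workload and the backup run together on the test box, you can watch the backup's progress and what it is waiting on from another session; `sys.dm_exec_requests` populates `percent_complete` for BACKUP and RESTORE commands. A minimal monitoring query (run it repeatedly, or wrap it in your favourite polling loop):

```sql
-- Watch a running backup: progress, current wait, and elapsed time.
-- percent_complete is populated for BACKUP DATABASE / RESTORE commands.
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.wait_type,
       r.total_elapsed_time / 1000 AS elapsed_seconds
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'BACKUP%';
```

Comparing your workload's response times with the backup running versus not running gives you the actual impact figure for your checklist.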
I use SQL Backup Pro from Redgate and find that the performance of the production system is actually better during backups. The product writes the data more quickly, thanks to threading and compression, which avoids contention on the disks and therefore reduces the time the backup spends running and competing for live system resources. This may or may not be normal, and may or may not occur with your solution. Apart from that, the other answers here describe well how to test the potential effect before going live.