Is there a tried and true methodology that provides a copy/mirror/replica of a database on a separate server, enabling the backup process to be performed there rather than on the primary?
Ideally I am looking for a solution that can be performed in different locations, or at least on different servers, without requiring a complete reinitialization of the database each time. For instance, with log shipping I can recover the secondary and perform the backup there. The drawback is that I then have to take a new base backup of the primary and start log shipping up again, which defeats the purpose of off-loading the backup.
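To make that drawback concrete, here is roughly what backing up a log-shipped secondary involves (database and path names here are hypothetical). The secondary sits in RESTORING/STANDBY state, so it has to be recovered before it can be backed up, and recovering it breaks the restore chain:

```sql
-- The secondary is in RESTORING/STANDBY state and cannot be backed up directly.
-- Recovering it brings it online but ends the log restore chain:
RESTORE DATABASE SalesDB WITH RECOVERY;

-- Now the full backup can run here instead of on the primary:
BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_full.bak'
    WITH INIT, CHECKSUM;

-- Log shipping must now be re-initialized from a fresh full backup
-- of the primary, which is the cost described above.
```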
To add to Steve's answer, there are two issues I am trying to resolve. 1) Drive I/O contention during backups on the primary. 2) Creating ad-hoc, up-to-date clones of the production database in a geographically removed environment for development purposes, where the bandwidth is not sufficient to copy full database backups.
We have previously used replication for this, and then taken backups from the replica. However, you have to be absolutely sure that your replication is working 100%, and check it regularly: a backup is only as good as the data it's backing up, and if the replication is wrong, the backup isn't going to be much use.
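One way to "check it regularly" with transactional replication is SQL Server's built-in validation, which compares row counts (and optionally checksums) between publisher and subscriber. A minimal sketch, with a hypothetical publication name:

```sql
-- Run at the publisher, in the published database.
-- @rowcount_only = 2 requests row counts plus a binary checksum comparison.
EXEC sp_publication_validation
    @publication   = N'SalesPub',
    @rowcount_only = 2;

-- Results show up in the Distribution Agent history / Replication Monitor;
-- a failed validation means the replica (and any backup taken from it) is suspect.
```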
answered Oct 19, 2009 at 10:58 AM
Matt Whitfield ♦♦
You have to make a copy of the data somewhere; a backup does that. If you were to log ship and make the backup on the secondary, you haven't saved anything on the primary — it still performed the log backups and the network copy.
If you replicated all tables, you could avoid this by backing up the subscriber, but you aren't assured of not losing data. If you can potentially stand some data loss (between the commit on the publisher and delivery to the secondary), that might meet your needs.
What are you trying to avoid on the primary? Can you edit the question to say what makes the most sense?
answered Oct 19, 2009 at 10:37 AM
Steve Jones - Editor ♦♦
We use DoubleTake to take care of a similar situation, though production backups might need some more consideration. We primarily use it for testing out reports, and secondarily for off-premise disaster recovery.
It was relatively simple: we set up a new SQL instance off premise and then set DoubleTake to replicate the mdf/ldf files to that server (it uses a network filter to capture the data; I'm not sure of the I/O load). Once fully synched, we would copy the live DoubleTake'd mdf/ldf files to a separate location, take the DB offline, replace its mdf/ldf files with the DoubleTake copies, then bring the DB back online. This was done through a .bat file and a SQL job.
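A minimal sketch of that offline/swap/online step, assuming a clone database named DevClone and assuming the file copy itself is handled by the surrounding .bat file (all names here are hypothetical):

```sql
-- Kick out any active connections and take the clone offline:
ALTER DATABASE DevClone SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- (The .bat file now overwrites DevClone's .mdf/.ldf files with the
--  DoubleTake-replicated copies, e.g. via xcopy /Y.)

-- Bring the clone back online with the refreshed files:
ALTER DATABASE DevClone SET ONLINE;
```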
This solution has worked relatively flawlessly for about a year and a half now, much to the surprise of a lot of people. Once the DB is online you can take a backup there. Log shipping or mirroring is certainly more reliable, but this has worked well for us, with little I/O impact on the production server.
answered Oct 19, 2009 at 11:02 PM