I have come across an issue that doesn't appear to be well documented, at least not for my particular case. We have our production AG on SAN storage with 512-byte physical sectors. When restoring to our Azure VMs (4 KB physical sectors) we experience really bad delays, and I've found messages in the log saying:
"There have been X misaligned log IOs which required falling back to synchronous IO. The current IO is on file xxx."
I had a database restore reach 100% after 50 minutes and then take a further 90 minutes to complete, and it seems to be down to rewriting the log to the new sector size. All the research I've found covers this occurring between primary and secondary AG servers where the sector sizes differ, but my thinking is that the same logic would still apply when you restore a database to another server?
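For anyone trying to confirm the same symptom, the warnings can be pulled out of the current error log with `sp_readerrorlog`. Note this procedure is undocumented (though widely used); the parameters below mean "current log file, SQL Server log, filter on this string":

```sql
-- Search the current SQL Server error log for the misaligned-IO warnings.
-- sp_readerrorlog is undocumented but commonly used for this:
-- (log file number, 1 = SQL Server error log, search string)
EXEC sp_readerrorlog 0, 1, N'misaligned';
```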
Another thing I have noticed is that the delay is worse if two databases are restoring at the same time. It can literally go from an ETA of 2 hours to 48 hours! (Note that we only recently moved production to the SAN, so the order and scheduling of the restores worked fine before this.) When it says it's going into synchronous mode, does that mean it does so for that restore only, or for all restores occurring at that point in time?
Another test I did seems to prove my theory: I restored a backup to a 4096-byte-sector server and, as expected, the misaligned log IO messages appeared. I then took a backup of that restored database and restored it again, and this time there were no misaligned log IO messages.
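For reference, the round trip I tried looks roughly like this (the database name and paths are placeholders):

```sql
-- 1) Restore the backup taken on the 512-byte-sector SAN onto the 4 KB server.
--    This is the restore that logs the "misaligned log IOs" warnings.
RESTORE DATABASE MyDb
    FROM DISK = N'X:\Backups\MyDb_from_SAN.bak'
    WITH REPLACE;

-- 2) Back the database up again on the 4 KB server. The log records in this
--    backup are now written in blocks aligned to the 4 KB sector size.
BACKUP DATABASE MyDb TO DISK = N'X:\Backups\MyDb_local.bak';

-- 3) Restoring that second backup produced no misaligned log IO warnings.
RESTORE DATABASE MyDb
    FROM DISK = N'X:\Backups\MyDb_local.bak'
    WITH REPLACE;
```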
Re-configuring the drives on either end is not an option, so enabling trace flag 1800 seems to be the solution, but I have no 512-byte test environment and I am nervous about enabling it on our production cluster. Has anyone had experience with enabling this? My fear is that it will start rewriting all the logs to the new sector size and take ages to come back up. These are 50 critical 24/7 databases in an AG, some over 1 TB, so I would really love to hear from anyone who has enabled it, or better still from someone who can confirm that my theory is actually correct.
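For anyone following along, enabling the flag globally on a running instance is a one-liner, though to survive a restart it also needs to be added as a startup parameter (-T1800) via SQL Server Configuration Manager:

```sql
-- Enable trace flag 1800 globally (-1) on the running instance.
DBCC TRACEON (1800, -1);

-- Confirm it is active.
DBCC TRACESTATUS (1800);
```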
All servers are on SQL Server 2017. Production is on CU18 and the Azure servers vary from RTM to CU20.
Thanks in advance
Answer by Ange ·
Thank you for getting back to me.
We don't actually have any issue on the AG itself, so switching to asynchronous commit won't help, but thank you for the suggestion. The issue only occurs when restoring a database to a 4 KB-sector disk when the backup was taken on a 512-byte-sector disk.
I actually raised a Microsoft support case for this, and they have confirmed that the same issue can happen when restoring a database to a server with a different physical sector size and have advised enabling trace flag 1800.
We need to schedule a maintenance window to do this so once that's enabled I will post back here the results. Might be a week or two.
Answer by JohnM ·
This particular error shouldn't come into play during the restore process. The restore is just laying down pages, and at that point the database isn't even part of an AG or log shipping, unless I'm misunderstanding something.
What are the Azure VM sizes? How are the drives configured for throughput? What type of drives are they?
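One way to see whether the disks themselves are the bottleneck is per-file IO latency from the `sys.dm_io_virtual_file_stats` DMV (figures are cumulative since instance startup):

```sql
-- Average read/write latency per database file since instance startup.
SELECT  DB_NAME(vfs.database_id)                             AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
        vfs.num_of_writes,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY avg_write_ms DESC;
```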
Implementing the trace flag shouldn't rewrite all the existing logs; rather, it directs SQL Server to change how it performs IO for all forthcoming log writes for any AG. You should be able to test it with an Azure VM where the disk is formatted with a 512-byte sector size: put SQL Server Developer Edition on it, run some type of workload, enable the TF, and see what it affects.
The error indicates that when SQL Server performs the IO, it switches from asynchronous to synchronous writes for that particular IO in the log stream coming from the primary. It occurs because the replicated log stream arrives formatted for 512-byte sectors but the disk uses 4 KB sectors. As you've discovered, it can play havoc with secondaries, and the queues will probably fill up. You could look at switching the AGs to asynchronous commit to alleviate performance issues on the primary, but be aware of how that affects your RPO/RTO metrics. Once the AGs have caught up, flip them back to synchronous.
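If you do go that route, switching a replica's commit mode is an ALTER on the AG, run on the primary (AG and replica names below are placeholders):

```sql
-- Move one replica to asynchronous commit.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SecondaryServer'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

-- Once the secondary has caught up, flip it back.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SecondaryServer'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
```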
Lots of factors in here sooooo hope that helps! =)