Some suggestions:

* Check that you can recover the database if it all goes wrong.
* Check that your documentation of the database reflects reality.
* Check that you can recover the database if it all goes wrong.
* Check that, where you're granting access to the database, you're granting it on the principle of "least required permissions" - i.e. not granting sysadmin rights when the application only needs to read data.

Yes, I know I repeated one, but it's rather important.
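"Check that you can recover" means actually performing a restore and verifying the result, not just confirming that backup files exist. A minimal sketch of that idea, using SQLite's online backup API as a stand-in for whatever backup mechanism your database uses (the schema and verification strategy here are illustrative assumptions, not a prescription):

```python
import sqlite3

def backup_and_verify(source_path: str, backup_path: str) -> bool:
    """Back up a SQLite database, then verify the copy is usable by
    comparing per-table row counts between source and backup."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(backup_path)
    with dst:
        src.backup(dst)  # SQLite online backup API (Python 3.7+)

    # Recovery check: every user table must have the same row count
    # in the restored copy. Real checks might also compare checksums.
    tables = [row[0] for row in src.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    ok = all(
        src.execute(f"SELECT COUNT(*) FROM {t}").fetchone()
        == dst.execute(f"SELECT COUNT(*) FROM {t}").fetchone()
        for t in tables
    )
    src.close()
    dst.close()
    return ok
```

Row counts are a deliberately cheap verification; the point is that a restore you have never opened and queried is not a tested restore.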
My answer goes beyond just the database, but I break down system requirements and testing into a few levels, ranked from what I consider most important to least:

1. Requirements documentation. Does the end result meet the requirements? If not, stop here.
2. Data integrity. This is what you've focused on so far. If the database doesn't store the correct data, nothing else matters.
3. Security. Depending on the customer and application, this might sometimes be a lower priority, but I generally list it here.
4. Stability. Ensuring the application is operational is obviously important. This step requires more communication with the operations support team because it can involve things like redundancy as well as recovery (for when stability fails).
5. Performance. What are the customer expectations? Are they being met? Are they reasonable? Use standard tools and metrics.
6. Happy to Glad. These are lesser (and often undocumented) requirements. Is the background color shade correct? Do you like the text font?

Don't assume the initial test values will be true in production and/or over time. Establish a testing plan to ensure all of your goals are continuously met.
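The performance level and the "continuously met" point combine naturally into an automated check: measure a representative query with a standard timer and assert it stays within the agreed budget. A minimal sketch, assuming a hypothetical `orders` table and a placeholder 50 ms budget (substitute the threshold your requirements actually specify):

```python
import sqlite3
import time

def median_query_ms(conn: sqlite3.Connection, sql: str, runs: int = 5) -> float:
    """Run a query several times and return the median wall-clock
    time in milliseconds (median resists one-off timing noise)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql).fetchall()
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return timings[len(timings) // 2]

def check_performance(conn: sqlite3.Connection) -> bool:
    """Continuous check: the customer-facing query must stay under
    budget. Table name and 50 ms figure are illustrative placeholders."""
    return median_query_ms(conn, "SELECT COUNT(*) FROM orders") < 50.0
```

Run a check like this in the regular test suite so that a performance regression fails a build instead of surfacing in production.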
Load the database with data and run your application against that. Make darned sure you capture performance and query metrics so that you know how the system behaves. Way too many testing regimes use little to no data and are then shocked when the 50 million rows they knew were coming in production actually cause problems. Take your anticipated data load and then double or triple it.

You should also look at Distributed Replay, a tool within SQL Server, as a mechanism for capturing workloads from your app or even from production, and then replaying them to run more tests.

Basically, you need functional tests (the most common and the least often missed), performance tests (validation of performance against expectations), and load tests (lots of data, lots of transactions; similar to, but separate from, performance tests - these validate when things are going to break). This is what I set up for systems that I have to ensure will work when they hit production.
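The "double or triple your anticipated volume" advice can be sketched as a parameterized load test: generate synthetic rows at a given volume, time a representative query, and run the same test at 1x, 2x, and 3x to see how the timing scales. SQLite, the schema, and the query below are illustrative stand-ins for your real database and workload:

```python
import sqlite3
import time

def load_test(row_count: int) -> float:
    """Load row_count synthetic rows into a throwaway database and
    return the wall-clock seconds for a representative query."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany(
        "INSERT INTO orders (amount) VALUES (?)",
        ((float(i % 1000),) for i in range(row_count)),
    )
    conn.commit()

    start = time.perf_counter()
    conn.execute("SELECT AVG(amount) FROM orders WHERE amount > 500").fetchall()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

# Compare the anticipated volume against double and triple that volume;
# a sharp jump between steps is the early warning you're looking for.
```

Capturing these timings across volumes, rather than a single pass/fail, is what tells you where the system will break rather than merely whether it works today.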