I am designing a database model based on an existing database that currently has 268 tables. These tables are all in 1 database.
The new model will probably have 300+ tables. We are discussing whether to keep all the tables in 1 database or separate them into multiple databases. We lose some foreign key constraints if we split across multiple databases, but that is a lot of tables to put in 1 DB.
So -- does anyone have experience with this or opinions? I can see pros and cons on both sides.
asked Dec 11 '09 at 12:36 PM in Default
I believe the most important question here is whether or not the tables belong together. Are they related? My general approach toward all development efforts is to try to maintain highly cohesive, loosely coupled components. If tables are related using foreign keys, for instance, they are tightly coupled and therefore should probably stay together in the same database. If, however, 2 tables are for different purposes (perhaps transactions vs reporting), then separate databases might make more sense.
What are you trying to achieve with multiple databases? If the goal is performance via multiple files, then perhaps multiple filegroups and files for the same database would work better.
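As a sketch of that idea (the database name, filegroup name, and file path below are all made up; assumes SQL Server 2005+), adding a filegroup and a data file to an existing database looks like:

```sql
-- Illustrative only: database name, filegroup name, and path are invented.
ALTER DATABASE SalesDb ADD FILEGROUP ActiveData;

ALTER DATABASE SalesDb
ADD FILE (
    NAME = ActiveData1,
    FILENAME = 'E:\SQLData\SalesDb_ActiveData1.ndf',
    SIZE = 1GB,
    FILEGROWTH = 256MB
)
TO FILEGROUP ActiveData;
```

Tables or indexes can then be created on that filegroup, which gives you the I/O separation without leaving the database.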
SQL Server does not enforce a hard limit on the number of tables per database. The limit is on the total number of objects per database. On my last project, we had 1 database with over 200 tables plus over 100 stored procedures and dozens of views and functions. We then had other databases as well for separate functions.
answered Dec 11 '09 at 12:54 PM
I agree with Tom: if the functionality is for one application, put it all in one database. Conversely, if you have separate applications, please separate them into different databases. You can rely on methods within SQL Server to separate out security or storage within a single database, such that you can be sure users only have access to the appropriate schema, or that large, active tables are stored on different files or filegroups than less active tables... All kinds of things you can do there without having to resort to separating your databases.
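For example, a minimal sketch of schema-level security (the schema, login, and user names here are invented for illustration; assumes SQL Server 2005+):

```sql
-- Illustrative only: schema, login, and user names are invented.
CREATE SCHEMA Reporting;
GO
CREATE USER ReportReader FOR LOGIN ReportReader;
GO
-- ReportReader can read everything in the Reporting schema and nothing else.
GRANT SELECT ON SCHEMA::Reporting TO ReportReader;
```

One GRANT at the schema level covers every current and future table in that schema, which scales much better than per-table permissions.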
Worrying about referential integrity is only the tip of the iceberg. What about ensuring that restores, should they occur, are precisely synchronized? Point in time recovery should take care of it, but are you configuring the system for that? If not, you might be in trouble. There are more issues, but you get the idea.
answered Dec 11 '09 at 01:00 PM
Grant Fritchey ♦♦
I also agree with Tom and want to add some more things to consider.
If you have a lot of large tables with many indexes, it can be time-consuming to rebuild all indexes with a maintenance plan, so I recommend creating a script that rebuilds only the indexes that are actually fragmented.
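A minimal sketch of finding fragmented indexes with the `sys.dm_db_index_physical_stats` DMV (SQL Server 2005+; the 30% threshold is a common rule of thumb, not a fixed rule):

```sql
-- List indexes in the current database with more than 30% fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;

-- Each one found can then be rebuilt individually, e.g.:
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;
```

Driving the rebuilds from this query means only the fragmented indexes are touched, instead of every index in the database.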
If you have a lot of objects in the same schema, it can be tricky to manage permissions on all of them, but in SQL Server 2005 and later you can group objects together into different schemas based on their functional area, like sales, products, etc. In SQL versions prior to 2005, the schema handled the ownership of the objects, but in later versions schemas are more like containers.
Quote from Books Online:
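The schema-as-container approach above can be sketched like this (the schema and table names are invented for illustration; assumes SQL Server 2005+):

```sql
-- Illustrative only: schema and table names are invented.
CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
-- Move an existing table out of dbo into the functional-area schema.
ALTER SCHEMA Sales TRANSFER dbo.Orders;
```

After the transfer the table is referenced as `Sales.Orders`, and permissions can be granted on the `Sales` schema as a whole.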
answered Dec 11 '09 at 02:58 PM
Just adding to Tom's comments..
If you are to use multiple filegroups, you will only get real benefits if the .ndf files are on separate disk arrays.
In my current prod database we have 455 tables (it does host 3 different applications - bad, I know) and I don't have any issues with performance.
20 of the high-transaction tables are on a separate filegroup so I/O isn't affected.
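Placing a table on its own filegroup is a one-clause change at creation time. A minimal sketch (the table and filegroup names are invented; the filegroup and its files on a separate array must already exist):

```sql
-- Illustrative only: table and filegroup names are invented.
-- The HighIO filegroup is assumed to already exist with its own files.
CREATE TABLE dbo.OrderDetail (
    OrderDetailID int IDENTITY(1,1) PRIMARY KEY,
    OrderID       int NOT NULL,
    Qty           int NOT NULL
) ON HighIO;
```

The `ON HighIO` clause puts the table (and its clustered primary key) on that filegroup, keeping its I/O off the primary data file.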
answered Dec 11 '09 at 01:03 PM