I have a database where an automated system creates tables for a data transfer process and then never uses them again. I assume it should also remove these tables, but it appears it has stopped doing the tidy-up step. As a result I have a large number of tables that are completely empty, and I have the go-ahead from the vendor to drop them. Does anyone have experience of dropping a lot of tables from a database, and whether it will bend the server out of shape at all?
I think I am going to do them in batches and I will try it in test first. I am just really wondering if anyone else has ever had to drop 1662 tables from a database before?!?!
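For reference, this is roughly what I have in mind - just a sketch, and the ETL[_]% pattern is only a placeholder for whatever naming convention the automated system actually uses:

    -- build DROP statements for the leftover transfer tables
    DECLARE @sql nvarchar(max) = N'';

    SELECT @sql = @sql + N'DROP TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(13)
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.name LIKE N'ETL[_]%';       -- placeholder pattern, not the real one

    SELECT @sql AS drop_statements;     -- review the generated statements first
    -- EXEC sys.sp_executesql @sql;     -- uncomment to actually run the drops

I'd split the WHERE clause up to run it in a few batches rather than all 1662 at once.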
asked Apr 04 '11 at 02:48 AM in Default
If you aren't querying them they will not be in cache, so that should not be affected.
You may see internal fragmentation of the data file(s) where these tables resided, but that would be expected. You may consider cleaning that up, but it shouldn't be a problem to begin with.
I have dropped about 50 tables in one go, plus the indexes that belonged to them. As long as there are no constraints (foreign keys referencing them, etc.), the drop should be pretty much instantaneous.
As the tables are empty, the effects should be minimal anyway. Go for it in test, then prod - it'll be fine.
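If you want to double-check before dropping, something along these lines will show whether any of the candidate tables still hold rows or are referenced by foreign keys (again, ETL[_]% is just a stand-in for your actual naming pattern):

    -- candidate tables with their row counts and any foreign keys referencing them
    SELECT s.name AS schema_name,
           t.name AS table_name,
           (SELECT SUM(p.rows) FROM sys.partitions AS p
             WHERE p.object_id = t.object_id AND p.index_id IN (0, 1)) AS row_count,
           (SELECT COUNT(*) FROM sys.foreign_keys AS fk
             WHERE fk.referenced_object_id = t.object_id) AS referencing_fks
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.name LIKE N'ETL[_]%';       -- placeholder pattern

Anything showing zero rows and zero referencing FKs is safe to drop without blocking on anything.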
answered Apr 04 '11 at 02:58 AM
I've created a script which generates 1662 CREATE TABLE statements (all with one identity column and nothing more) and another which generates DROP TABLE statements for them all.
Running on my crappy laptop, they both complete in less than five seconds. I guess there's some more overhead if you have wider tables, but this at least shows that the order of magnitude is not that high.
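The harness was roughly the following, in case anyone wants to repeat the timing on their own hardware (the DropTest_ names and the dbo schema are just what I happened to use):

    -- create 1662 single-column tables, then drop them all again
    DECLARE @i int = 1, @sql nvarchar(200);

    WHILE @i <= 1662
    BEGIN
        SET @sql = N'CREATE TABLE dbo.DropTest_' + CAST(@i AS nvarchar(10)) + N' (id int IDENTITY(1,1));';
        EXEC sys.sp_executesql @sql;
        SET @i += 1;
    END;

    SET @i = 1;
    WHILE @i <= 1662
    BEGIN
        SET @sql = N'DROP TABLE dbo.DropTest_' + CAST(@i AS nvarchar(10)) + N';';
        EXEC sys.sp_executesql @sql;
        SET @i += 1;
    END;

I timed the CREATE loop and the DROP loop separately (a GETDATE() before and after each is enough) and both came in under five seconds.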
answered Apr 04 '11 at 03:06 AM