Ajit, if the data files are running out of space you need to make sure they can grow further to accept more data (manually extending them at a time of low system usage is better than just letting autogrowth take care of it). Another possibility is to add a new data file to the filegroup, but you still need disk space for this to work (just like expanding your existing data files). There is also the option of deleting unneeded data from your database to free up space in the data file(s); that would be the cheapest option, but maybe not a viable one.

Also look at index usage: you may have a huge pile of indexes taking up space that are never used. This query will give you an idea of indexes that are used little or not at all and would be candidates for being dropped:

-- SQL 2005 and above
SELECT  o.name,
        indexname = i.name,
        i.index_id,
        reads = user_seeks + user_scans + user_lookups,
        writes = user_updates,
        rows = (SELECT SUM(p.rows) FROM sys.partitions p
                WHERE p.index_id = s.index_id AND s.object_id = p.object_id),
        CASE WHEN s.user_updates < 1 THEN 100
             ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
        END AS reads_per_write,
        'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(c.name) + '.'
            + QUOTENAME(OBJECT_NAME(s.object_id)) AS 'drop statement'
FROM    sys.dm_db_index_usage_stats s
        INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
        INNER JOIN sys.objects o ON s.object_id = o.object_id
        INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
        AND s.database_id = DB_ID()
        AND i.type_desc = 'nonclustered'
        AND i.is_primary_key = 0
        AND i.is_unique_constraint = 0
        AND (SELECT SUM(p.rows) FROM sys.partitions p
             WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY CASE WHEN s.user_updates < 1 THEN 100
              ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
         END

Anything with a low reads-per-write ratio is a candidate, but beware that these statistics are reset to zero when SQL Server is restarted, so the readings could be misleading. Analyse the output of this query and ***think*** about the data before dropping anything.
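Going back to the file options: a minimal sketch of manually extending a data file, or adding a new file to the filegroup, would look something like this (the database, file names, path and sizes here are made-up examples, so substitute your own):

```sql
-- Extend an existing data file during a quiet period
-- (illustrative names/sizes)
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 20480MB);

-- Or add a new data file to the filegroup
-- (this still needs free disk space on the target drive)
ALTER DATABASE MyDatabase
ADD FILE (NAME = MyDatabase_Data2,
          FILENAME = 'D:\Data\MyDatabase_Data2.ndf',
          SIZE = 10240MB,
          FILEGROWTH = 1024MB)
TO FILEGROUP [PRIMARY];
```

Pre-sizing the file like this avoids the stalls you can get when autogrowth kicks in mid-transaction.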
Just another thing to look at: if you have indexes in the filegroup that are heavily fragmented, you can end up with huge amounts of wasted space. I have seen an index taking up 2GB that, when rebuilt, shrank to 40MB.
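To spot fragmented indexes worth rebuilding, something like this works on SQL 2005 and above (the thresholds and the table/index names in the rebuild statement are just illustrative; a common rule of thumb is to rebuild above roughly 30% fragmentation and reorganize between 5% and 30%):

```sql
-- Fragmentation of indexes in the current database
SELECT  OBJECT_NAME(ips.object_id) AS table_name,
        i.name AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
        INNER JOIN sys.indexes i
                ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE   ips.avg_fragmentation_in_percent > 30
        AND ips.page_count > 1000   -- ignore tiny indexes, their fragmentation is noise
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then rebuild the worst offenders (placeholder names)
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
```

Note that a rebuild needs working space of its own while it runs, so do it during a quiet period and check free space first.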