It depends. If you have inefficient plans cached, then forcing new plans to be created may help the stored procedure the next time it runs, but the new plan might not always be the most efficient either. It is a vicious cycle. I have had situations where freeing the proc cache was the answer, but those are more the exception than the rule. Is your instance set up with "optimize for ad hoc workloads"? Is it a reporting server, data warehouse, or OLTP?
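If you're not sure how the instance is configured, you can check that setting with `sys.configurations`. A quick sketch (the option names here are the standard SQL Server ones):

```sql
-- Check whether "optimize for ad hoc workloads" is enabled (value_in_use = 1 means on)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'optimize for ad hoc workloads';

-- To turn it on (it's an advanced option, so show advanced options first)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

With that option on, single-use ad hoc queries only cache a small plan stub on first execution, which keeps one-off queries from bloating the plan cache.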
I am sure others will weigh in on this. It is late and I am working on a production issue, so I can't give this the attention it deserves right now.
answered Sep 26 '12 at 05:13 AM
Those two statements are a method of cleaning out the memory on your system. In general, this doesn't speed things up. Instead it slows things down since any pages that have been read off of disk and are now residing in memory have to be re-read from disk. It also slows things down since all execution plans in cache have to be recompiled. Neither of these things is considered to be a way to speed things up.
But, if you have plans that were compiled from parameter values drawn from skewed data, those plans might be off, and recompiling them can speed things up. Dropping all plans for the entire system isn't the way to solve that issue, though. Instead, you should identify the plans that are problematic and then use DBCC FREEPROCCACHE(plan_handle) to drop just those individual plans. You don't want to drop everything.
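A rough sketch of targeting a single plan instead of flushing everything (the procedure name in the filter is hypothetical; substitute the one you're chasing):

```sql
-- Find the plan handle for the problematic procedure
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourProblemProc%';  -- hypothetical name, adjust to your proc

-- Drop only that plan, leaving the rest of the cache alone
-- (paste the plan_handle value returned by the query above)
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
```

The next call to the procedure then compiles a fresh plan, hopefully with more representative parameter values, while every other cached plan on the instance is untouched.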
I do use these statements when I'm testing memory configurations or new queries and I'm curious about the disk reads as well as the compile time and compiled plans. But I never run them on a production system as a way to speed things up or fix things. I would consider it a very poor practice.
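For what it's worth, the testing pattern I mean looks roughly like this, and only on a test box (the procedure name is a stand-in):

```sql
-- TEST ENVIRONMENT ONLY: start from cold caches so reads and compiles are measured
CHECKPOINT;               -- flush dirty pages so DROPCLEANBUFFERS can clear everything
DBCC DROPCLEANBUFFERS;    -- empty the buffer pool (forces physical reads)
DBCC FREEPROCCACHE;       -- empty the plan cache (forces recompiles)

SET STATISTICS IO ON;     -- report logical and physical reads per statement
SET STATISTICS TIME ON;   -- report parse/compile time and execution time

EXEC dbo.SomeTestProc;    -- hypothetical procedure under test
```

That combination shows the true cold-cache cost of a query: physical reads from disk plus full compile time, which is exactly why running the same statements on a production server slows everything down.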
answered Sep 26 '12 at 09:09 AM
Grant Fritchey ♦♦