Is there any difference in query performance between "SELECT COUNT(*) FROM tblname WHERE field1 IS NULL" and "SELECT COUNT(clusteredIndexKeyColumn) FROM tblname WHERE field1 IS NULL"?
asked Jan 11, 2011 at 10:43 PM in Default
So apparently the COUNT function will look for the number of row identifiers? (Irrespective of the indexes or their types, I assume.) So even if we have a row with all columns having NULL values, it would be counted as a row by either COUNT(*) or COUNT(1). When it comes to COUNT(column_name), it looks for the number of data offsets defined along the table extents. Someone please correct me if I am wrong?
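The row-counting behaviour described above can be checked directly. This is a minimal sketch using SQLite via Python for a runnable illustration (the table and column names are made up; the COUNT semantics shown are standard SQL and behave the same way in SQL Server):

```python
import sqlite3

# In-memory table with an all-NULL row, to compare COUNT variants.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblname (field1 TEXT, field2 TEXT)")
conn.executemany(
    "INSERT INTO tblname (field1, field2) VALUES (?, ?)",
    [("a", "x"), (None, "y"), (None, None)],
)

# COUNT(*) and COUNT(1) count rows, including the all-NULL row.
print(conn.execute("SELECT COUNT(*) FROM tblname").fetchone()[0])  # 3
print(conn.execute("SELECT COUNT(1) FROM tblname").fetchone()[0])  # 3

# COUNT(column) counts only rows where that column is NOT NULL.
print(conn.execute("SELECT COUNT(field1) FROM tblname").fetchone()[0])  # 1
```

So the guess above is right for COUNT(*) and COUNT(1); COUNT(column_name), however, counts non-NULL values in that column rather than physical data offsets.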
answered Mar 12, 2013 at 04:30 PM
While there is no difference in performance, using COUNT(clustered_index_column) instead of COUNT(*) or COUNT(1) can easily lead to unwanted results and should therefore be avoided if at all possible. Queries like SELECT COUNT(clustered_index_column) are typically intended to determine the number of rows in the table, but there is no way to guarantee that such a query returns an accurate row count, so the purpose of the query is defeated. The reason the count can be inaccurate is that any row with NULL in that column is excluded from the count, and the CREATE UNIQUE CLUSTERED INDEX statement does not require the column to be NOT NULL. Thus, in my humble opinion, such queries should be avoided and replaced with SELECT COUNT(*). Here is the small example to demonstrate the point I am trying to make:
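The original example did not survive in this copy. The following is a rough reconstruction of the scenario, using SQLite via Python for a runnable illustration (the names are made up; in SQL Server a unique clustered index likewise permits a NULL key value, which is what makes the count unreliable):

```python
import sqlite3

# A unique index on a nullable column: COUNT(column) then undercounts rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("CREATE UNIQUE INDEX ix_t_id ON t (id)")  # column is still nullable
conn.executemany("INSERT INTO t (id) VALUES (?)", [(1,), (2,), (None,)])

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])   # 3 (all rows)
print(conn.execute("SELECT COUNT(id) FROM t").fetchone()[0])  # 2 (NULL excluded)
```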
Executing the sample query above produces the following results,
clearly revealing that the attempt to use COUNT(column_name) to retrieve the accurate number of rows in the table is futile, despite the fact that the column is the only column of the unique clustered index. I understand one might argue that the column of such an index is usually not nullable, but that does not matter: since the possibility of an inaccurate count exists, it is better to use the reliable COUNT(*) or COUNT(1) to get the desired result.
Just my 2 cents.
Yes, there is a difference.
If you take this example:
If you look at the estimated or actual query plan, you see exactly the same thing for both: Index Scan 85%, Stream Aggregate 15%, Compute Scalar 0%, Select 0%. So you might think they are the same, but a difference shows up if you actually execute both queries and compare the statistics.
It is not a huge difference, but the index being used saves accessing the table rows, so the query is faster and has less impact on the server in terms of disk access, and is less likely to hold up other processes.
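The point about the index saving access to the table rows can be sketched as follows, using SQLite via Python for a runnable illustration (table and index names are invented; SQL Server's plan output differs, but the principle that a narrow index on field1 can answer the count without touching the wide base rows carries over):

```python
import sqlite3

# Table with a wide payload column and a narrow index on the filtered column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblname (field1 TEXT, payload TEXT)")
conn.execute("CREATE INDEX ix_field1 ON tblname (field1)")
conn.executemany(
    "INSERT INTO tblname (field1, payload) VALUES (?, ?)",
    [("a", "wide row 1"), (None, "wide row 2"), (None, "wide row 3")],
)

# Inspect the plan: the index alone can satisfy the query.
for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM tblname WHERE field1 IS NULL"
):
    print(row[3])

null_count = conn.execute(
    "SELECT COUNT(*) FROM tblname WHERE field1 IS NULL"
).fetchone()[0]
print(null_count)  # 2
```

In SQL Server, the equivalent check is to run both queries with SET STATISTICS IO ON and compare the logical reads.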
answered Jan 12, 2011 at 12:55 AM
The only real way of knowing is to test and take a look at the resulting execution plan and resource statistics.
In general, doing the simple COUNT(*) and letting the optimiser choose the access path is the safer option.
In most cases, you don't want to tell the optimiser how to fulfil your request; you just tell it what you want to know. This is sometimes difficult to grasp, as programmers normally think in the other direction (telling the computer what to do). The query optimiser is a very clever and very powerful tool: provided with good information (indexes, statistics), it will find the best way of answering your questions.
The execution plan will depend on your table structure, available indexes, statistics, and data spread (you don't say how many rows of the table match field1 IS NULL).
answered Jan 11, 2011 at 11:49 PM
I don't see any performance difference unless your query is missing useful indexes, because most of the time the optimiser will choose the best available indexes and query plan to execute the query. I believe both queries will end up doing a scan on the same index. Prefer the index key column; that is the standard.