Hi, we have developed an in-house productivity monitoring tool in our organization. The objective: a client tool installed on every user's desktop/laptop tracks the user's activity and sends the details to a SQL Server database. A user interface lets us view details such as "who is spending more time on social websites" or "which team is more productive". Details can be viewed for the entire organization or a particular team, and drilled down to a single user.

The problem: around 200,000 records are inserted per day (100 users * 2,000 rows/day) into a single table. Pulling a report through our tool takes a huge amount of time (even minutes), yet a third-party productivity monitoring tool we tested produces the same report in a fraction of a second, regardless of the volume of data being fetched. Both the in-house tool and the third-party application are hosted on the same server; the only difference is that our application uses SQL Server 2016 and the third-party tool uses MySQL. Are we missing anything? How can we improve the performance of our application to match the third-party application?

Note: the search conditions for the reports may be any or all of the following: user, team, organization, date range, application type (social websites, e-commerce websites, business applications, etc.). We have indexed all the search criteria.
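One thing to check first: "indexed all the search criteria" often means one single-column index per column, and the optimizer can usually use only one of those per table access. A single composite index whose key matches the common filters, with the reported columns included, typically serves these reports far better. All table and column names below are hypothetical; adjust them to your actual schema:

```sql
-- Hypothetical schema: dbo.ActivityLog(TeamID, UserID, ActivityDate,
-- ApplicationType, ApplicationName, DurationSeconds).
-- The key columns match the report filters; the INCLUDE columns let
-- SQL Server answer the query from the index alone, avoiding key
-- lookups into the base table for every matching row.
CREATE NONCLUSTERED INDEX IX_ActivityLog_Report
ON dbo.ActivityLog (TeamID, UserID, ActivityDate, ApplicationType)
INCLUDE (ApplicationName, DurationSeconds);
```

Whether this exact shape helps depends on which filters your reports actually combine, which is what the steps below will tell you.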
You may have indexed all the search criteria, but you may not have created the indexes the Query Optimiser actually needs. 200,000 rows a day adds up, but a query shouldn't take minutes unless the table holds many millions of rows. You could start by asking SQL Server which indexes it would have liked. The missing-index DMVs aren't perfect, but they often surface indexes that, after checking, do help query performance. Try this query and review the suggestions carefully:

```sql
SELECT migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
         * (migs.user_seeks + migs.user_scans) AS improvement_measure,
       'CREATE INDEX [missing_index_'
         + CONVERT(varchar, mig.index_group_handle) + '_'
         + CONVERT(varchar, mid.index_handle) + '_'
         + LEFT(PARSENAME(mid.statement, 1), 32) + ']'
         + ' ON ' + mid.statement
         + ' (' + ISNULL(mid.equality_columns, '')
         + CASE WHEN mid.equality_columns IS NOT NULL
                 AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
         + ISNULL(mid.inequality_columns, '') + ')'
         + ISNULL(' INCLUDE (' + mid.included_columns + ')', '')
         AS create_index_statement,
       migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs
        ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid
        ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
        * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact
           * (migs.user_seeks + migs.user_scans) DESC;
```

Sometimes it suggests an index you already have, or the same index with just one more column included, so don't blindly create everything it lists. If you're lucky that will fix the problem; if not, you need to capture the queries that are slow, get their query plans and see what's wrong. You could run SQL Server Profiler while you operate the app to trigger the slow queries. Use the Profiler's filters to show only queries that take more than, say, 30 seconds. Stop the Profiler and find the slow queries.
Look at the session ID and grab the part at the start of the trace that shows that session's settings. Open a new SSMS query window against the right server and database, copy those settings statements and the query statement into it, and insert this at the top: `set statistics io, time on;` You should have something like this now:

```sql
/* -- OLD --
network protocol: TCP/IP
set quoted_identifier on
set arithabort on
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language British
set dateformat dmy
set datefirst 1
set transaction isolation level read committed
set statistics io on;
set statistics time on;
*/
exec dbo.SlowStoredProcedure
    @SupplierID = null
  , @FromDate = '01-NOV-2015 12:00:00'
  , @ToDate = '30-NOV-2015 11:59:00'
  , @CurrencyID = 1;
```

Select the code from just after `/*` up to just before `*/` and run it. That sets your connection's settings to match those used by the session that had the problem. Then go to the Query menu and select Include Actual Execution Plan, and run the query. When the query has executed, the Messages tab will show how long things took and how many reads occurred on each table. If you're struggling to make sense of it, try pasting it into
http://statisticsparser.com/. The Execution Plan tab will show a great deal of detail about how the query ran. Look for danger signs such as the thickest arrows, large differences between estimated and actual numbers of rows, and key lookups. The plan may also suggest a missing index in green at the top; that's worth looking at. If you're not getting anywhere, try installing the excellent free tool SQL Sentry Plan Explorer and examining the query in that. (There's no catch - you don't even have to give them your email address.) After you've installed it, bring the query plan up again, right-click on it and select View with SQL Sentry Plan Explorer. If you're still stuck, there's a great free book about query tuning:
http://www.red-gate.com/library/sql-server-execution-plans-2nd-edition

It's a big subject, but I hope this helps a bit.