Hi there, I have a table that holds JSON data of varying lengths in an `NVARCHAR(MAX)` column. Storing this column within the table itself causes queries on the table to run exceptionally slowly. Is it possible to store this JSON data externally so that it is handled by the file system? I know that FILESTREAM allows `VARBINARY(MAX)` to be stored externally, but I don't know how best to handle this large `NVARCHAR(MAX)` JSON column to speed up queries on this table. TIA

**Edit [2017-06-23]: example included**

```sql
CREATE TABLE [dbo].[TestJSON](
    [ID] [int] NOT NULL,
    [ROWGUID] [uniqueidentifier] NOT NULL,
    [JSONDATA] [varbinary](max) FILESTREAM NULL,
    [ZIPDATA] [varbinary](max) FILESTREAM NULL,
    [JSON] AS (CONVERT([nvarchar](max), [JSONDATA])) PERSISTED,
    [Submitted] [datetime] NOT NULL
);

-- Executes in 1 second
SELECT TOP 1 ID, ROWGUID, JSONDATA, ZIPDATA
FROM TestJSON
WHERE ID = 32
ORDER BY Submitted DESC;

-- Executes in 1 hour 12 minutes 3 seconds
SELECT TOP 1 ID, ROWGUID, JSONDATA, ZIPDATA, JSON
FROM TestJSON
WHERE ID = 32
ORDER BY Submitted DESC;
```

`JSON` is a persisted computed column. It doesn't matter whether it's persisted or not; the query still takes over an hour to complete. Also, the JSON data is too large to provide sample data here, as a single value is 12 MB in size.

Both query plans are almost identical, with identical cost: both use the clustered index (cost ~26%) and a sort (~73%). When the `JSON` column is included, the plan adds a Filter and a Compute Scalar, but their cost is ~0%.

![Query plan](/storage/temp/4157-capture.jpg)
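One workaround worth trying (a sketch against the table definition above, not a tested fix) is to push the `TOP 1` filter into a derived table and perform the `VARBINARY`-to-`NVARCHAR` conversion outside it, so the expensive conversion runs on only the single surviving row instead of every row the sort examines:

```sql
-- Sketch: narrow to one row first, then convert only that row's
-- FILESTREAM data to NVARCHAR(MAX) in the outer query.
SELECT
    t.ID,
    t.ROWGUID,
    t.JSONDATA,
    t.ZIPDATA,
    CONVERT(NVARCHAR(MAX), t.JSONDATA) AS [JSON]
FROM (
    SELECT TOP 1 ID, ROWGUID, JSONDATA, ZIPDATA
    FROM dbo.TestJSON
    WHERE ID = 32
    ORDER BY Submitted DESC
) AS t;
```

Whether this helps depends on the optimizer actually deferring the conversion; comparing the two plans would confirm where the Compute Scalar lands.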
`ID` is the clustered index. No idea what a push-down predicate is; can you expand? `JSON` is an `NVARCHAR(MAX)`: is that not too large to take advantage of indexing? Can you provide an example? I still believe the problem lies with the large JSON column being processed through the SQL engine. Persisted or not, it is slow.
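On the indexing question: an `NVARCHAR(MAX)` column cannot be an index *key*, but it can be carried in the leaf level of a nonclustered index as an `INCLUDE` column. A minimal sketch for the schema above (the index name is made up, and this assumes the persisted computed column qualifies for inclusion):

```sql
-- Sketch: key on (ID, Submitted) to satisfy the WHERE/ORDER BY,
-- with the computed JSON column included at the leaf level so the
-- query can be covered without a lookup.
CREATE NONCLUSTERED INDEX IX_TestJSON_ID_Submitted
ON dbo.TestJSON (ID, Submitted DESC)
INCLUDE ([JSON]);
```

This wouldn't make the 12 MB value itself cheaper to read, but it could remove the sort from the plan by serving `ORDER BY Submitted DESC` directly from the index order.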