I have 34 nearly identical SELECT statements that follow this pattern:

```sql
SELECT x.*
FROM MySchema.SomeTable AS x
INNER JOIN OneSchema.OneTable AS t
    ON t.TableId = x.Id;
```

"SomeTable" varies across the 34 tables, but "OneTable" is the same in every query. This generates a parallel execution plan for the tables that do not have any check constraints, but not for the others; the NonParallelPlanReason is CouldNotGenerateValidParallelPlan. All source tables have "Id" as the clustered index.

This is not just a curiosity. For the tables where the parallel plan is chosen, execution time is typically under 30 seconds. For the others, nothing returns even after waiting several hours. In fact, the number of reads performed during that time far exceeds what multiple full table scans should require.

Thanks in advance for any help.

UPDATE: Perhaps the check constraints are a red herring. I'm not sure, but "OneTable" is actually a view that includes a ROW_NUMBER partition function, which serializes at least that part of the plan. I'm now wondering if the decision is at least partially based on the estimated size of the result set.

I modified the code to use the view to populate a table once, and then have all the other queries join to that table. I don't know why I didn't think of that before: even without this latest issue, it's obviously more efficient to calculate the view once at the beginning instead of 34 times later. All of this runs within a single SSIS package, and after the change the package executes faster than ever before.

Although that solves my immediate problem, it doesn't answer my original question, or my new one about table sizes being the potential culprit. Therefore, I'll leave this open a bit longer to see if anyone has any thoughts.
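The restructuring described in the update can be sketched roughly as below. This is a minimal sketch, not the actual package code; the temp-table name and the clustered-index column are assumptions based on the join pattern shown above.

```sql
-- Evaluate the ROW_NUMBER() view once and persist the result.
-- #OneTableMaterialized is a hypothetical name for the staging table.
SELECT v.*
INTO #OneTableMaterialized
FROM OneSchema.OneTable AS v;   -- OneTable is the view in question

-- Index the join column so the 34 downstream joins stay cheap
-- (assumes TableId is the column used in every join, as above).
CREATE CLUSTERED INDEX IX_TableId ON #OneTableMaterialized (TableId);

-- Each of the 34 queries then joins to the materialized table
-- instead of re-evaluating the view:
SELECT x.*
FROM MySchema.SomeTable AS x
INNER JOIN #OneTableMaterialized AS t
    ON t.TableId = x.Id;
```

Besides computing the windowed result once instead of 34 times, the materialized table gives the optimizer real statistics to estimate against, whereas the ROW_NUMBER() in the view forces a serial zone in each plan that references it.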
The optimizer absolutely does take check constraints into account. I have a demo that shows a check constraint directly changing the query plan depending on whether the WHERE clause falls inside or outside the range the constraint enforces. So yes, without seeing the query or the plan in question, it could be affected by the check constraint. From what you describe, though, it sounds more like a statistics and/or cardinality-estimate problem. It may also be that the cost threshold for parallelism setting is too high or too low. Hard to say without details.
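A minimal sketch of the check-constraint effect mentioned above, using a hypothetical table (the demo referenced in the answer is not shown in this thread, so the names and values here are illustrative only):

```sql
-- Hypothetical demo table with a trusted check constraint.
CREATE TABLE dbo.Demo
(
    Id  int NOT NULL PRIMARY KEY,
    Val int NOT NULL,
    CONSTRAINT CK_Demo_Val CHECK (Val BETWEEN 1 AND 100)
);

-- Because the predicate contradicts the trusted constraint, the
-- optimizer can resolve this at compile time without scanning
-- the table (a constant-scan plan).
SELECT * FROM dbo.Demo WHERE Val > 100;

-- To check the server-wide parallelism threshold mentioned above:
SELECT value_in_use
FROM sys.configurations
WHERE name = N'cost threshold for parallelism';
```

Note that the optimizer only trusts a constraint that was created or validated WITH CHECK; a constraint enabled WITH NOCHECK is marked untrusted and is ignored for plan simplification.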