Thomas brings up some great points. I would also add that, depending on the percentage of rows a query returns from a table, a scan can actually be better than a seek. Keeping statistics properly updated is also key to helping the optimizer build good execution plans. For example, if you are joining two tables and one of them has 10k records of which you need 9k, it can be much faster to do one scan to read the 9k rows than to do 9k individual lookups off a seek. And if a table holds only a very small amount of data, there is effectively no difference between a scan and a seek; think of small lookup tables like US states. I would recommend taking a good look at this blog post by Thomas LaRock. It lays out a great strategy for learning where indexing is needed for a query.
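To illustrate the statistics point, here is a quick sketch of how you can check when statistics were last updated and refresh stale ones. The table name `dbo.Orders` is just a placeholder for one of your own tables:

```sql
-- Check when each statistics object on the table was last updated
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- Refresh them if they look stale (FULLSCAN reads every row, so
-- consider a sampled update on very large tables)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

With fresh statistics the optimizer's row estimates are much more likely to land on the right side of the scan-vs-seek decision.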
Looking at the execution plan, the optimizer is estimating that your scan will return roughly 415 million rows. A scan makes sense here because you are most likely past the tipping point where a seek would improve performance. I would review the filters on that table and make sure there is an index whose leading key columns match those predicates and that also covers the columns the query selects. If you want, attach the actual XML plan and the list of indexes for that table and I would be happy to take a quick look for you.
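If it helps for pulling together that index list, here is one way to enumerate the indexes on a table along with their key and included columns. `dbo.BigTable` is a placeholder; substitute the table from your plan:

```sql
-- List every index on the table with its key and included columns
SELECT i.name AS index_name,
       c.name AS column_name,
       ic.key_ordinal,            -- 0 for included (non-key) columns
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.BigTable')
ORDER BY i.name, ic.key_ordinal;
```

Comparing that output against the filter and output columns in the plan makes it easy to spot whether a covering index is missing.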