The assumption that validation is required for positive SEO is an example of mistaken cause and effect.
Keyword density provides an easy way to explain this. Google doesn't care about keyword density; it cares that the content on the page is relevant to the user's search. Keyword density is simply not an effective measure of relevance.
If it were, a page stuffed full of keywords would rank more highly than a well-structured page that users actually spend time reading.
However, Google uses bounce rate as one of the metrics that determine relevance: if users immediately leave the page, it's deemed irrelevant to that particular search and will be shown less frequently in future searches. Naturally occurring keywords in good content therefore matter far more than an artificial metric like keyword density.
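To see how artificial the metric is, here is a toy illustration (this is not Google's algorithm, just the plain definition of keyword density as keyword occurrences divided by total words, with made-up example text):

```python
# Toy illustration: keyword density is just occurrences of a term
# divided by the total word count, so a spammy page can score far
# higher than a readable one.
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` matching `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

stuffed = "widgets widgets buy widgets cheap widgets widgets sale widgets"
natural = ("Our widgets are hand finished and ship within two days. "
           "Read the guide below to pick the right size.")

print(keyword_density(stuffed, "widgets"))  # high density, unreadable page
print(keyword_density(natural, "widgets"))  # low density, useful page
```

The "optimised" page wins on density by an order of magnitude while being exactly the kind of page users bounce off immediately.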
The same is true of code validation.
I once used a Joomla template that was a very close match to the initial concepts I'd produced, so I decided that the very cool extra features justified the cost of the theme. Even better, it validated out of the box.
However, using the theme as a designer was an absolute nightmare - it was as if someone had taken the concept of nested tables from the 90s and decided to reproduce that horrible mess using modern code.
Even worse, the actual content was buried within dozens of nested divs and in the view of search engines was placed dead last after every other text element on the page.
So the template validated perfectly while still violating one of the core SEO rules regarding code structure.
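The source-order problem is easy to demonstrate. This sketch (using hypothetical markup, not the actual template) walks the HTML in document order with Python's stdlib parser and records each text run as a crawler would encounter it. Both snippets are well-formed, but in the second the article text comes last, after the navigation and sidebar:

```python
# Sketch with made-up markup: both pages parse cleanly, but source
# order - what a crawler reads first - differs dramatically.
from html.parser import HTMLParser

class TextOrder(HTMLParser):
    """Collects non-empty text runs in the order a crawler reads them."""
    def __init__(self):
        super().__init__()
        self.runs = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.runs.append(text)

def text_runs(html: str) -> list:
    parser = TextOrder()
    parser.feed(html)
    return parser.runs

clean = "<body><main>Article text</main><nav>Menu</nav></body>"
nested = ("<body><div><div><nav>Menu</nav></div>"
          "<div><div>Sidebar</div></div>"
          "<div><div><div>Article text</div></div></div></div></body>")

print(text_runs(clean))   # ['Article text', 'Menu']
print(text_runs(nested))  # ['Menu', 'Sidebar', 'Article text']
```

A validator would pass both pages without complaint; only inspecting the document order reveals that one of them buries the content.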
Conversely, it takes only one minor issue with zero effect on content structure for a site to fail validation.
Validation was never designed with SEO in mind. It's a measure of how well code stacks up against a very arbitrary set of standards. The idea was that every browser developer would stick to those standards and so validated sites would display identically across all browsers.
On that score, validation fails miserably.
So why should we expect it to be a valid metric for SEO, a decidedly secondary purpose, when it can't even guarantee the primary outcome it exists for?
By itself, it's not a valid metric. It is reasonable to say that if a site passes validation it has well-formed code. If it has well-formed code the designer may be savvy enough to structure the content placement appropriately and promote good SEO practice. It's not necessarily the case, but it's often a good assumption.
However, it's not reasonable to assume that, simply because a site doesn't validate, its structure can't support good SEO practice or is inherently bad for SEO. The code may be perfectly fine for SEO, or it could be a disaster area: validation can't tell you which.
A great example of validating sites that utterly fail to provide any SEO benefit is the Flash splash page. Those pages validate perfectly but convey no PageRank benefit whatsoever and actively impede users (and search engines) from reaching the actual content - something that most definitely does impact PageRank.
Sites will absolutely not be penalised on PageRank for validation issues alone. It takes much deeper problems with your content and the ability of search engines to view that content for PageRank to be affected.
Validation won't help in the slightest if your content is irrelevant or hidden behind poor information structure.
Focus on being relevant to your users and on ensuring that search engines can easily traverse your site; that's how you'll build PageRank.