The challenge CSP faces in mitigating XSS vulnerabilities can be (over)simplified as follows:
How on earth can we tell the difference between the legitimate and the injected parts of a document, if the application code failed to distinguish them in the first place?
There are three distinct strategies for establishing that trust, each resting on different assumptions and holding to a different degree; a short sketch of each follows the list:
Whitelisted sources: Where should we retrieve the scripts from? If a page's scripts come from a predefined set of "Trusted Origins", then they are trusted as well.
Hash: What exactly do these scripts contain? If the scripts contain only code that matches one of the predefined hashes, they should do no harm.
Nonce: How and when were the script elements built? If a script element carries an unpredictable secret that only the legitimate application knows while it constructs the document, the script it loads should be considered legitimate as well.
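To make the whitelist strategy concrete, here is a minimal sketch assuming a Node/Express server (Express, the port, and the CDN origin are illustrative assumptions, not prescriptions):

```ts
import express from "express";

const app = express();

// Every response carries a policy that lets the browser execute only
// scripts fetched from the page's own origin or the whitelisted CDN.
// Inline scripts and scripts from any other origin are blocked.
app.use((req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "script-src 'self' https://cdn.example.com"
  );
  next();
});

app.listen(3000);
```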
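The hash strategy can be sketched with Node's built-in crypto module; the inline script below is an arbitrary example:

```ts
import { createHash } from "node:crypto";

// The browser hashes the script's text byte-for-byte, whitespace
// included, so the digest must be computed over the exact same text
// that appears between the script tags.
const inlineScript = `console.log("hello");`;

const digest = createHash("sha256")
  .update(inlineScript, "utf8")
  .digest("base64");

// A policy that permits exactly this one inline script and nothing else.
const csp = `script-src 'sha256-${digest}'`;

// Matching markup: <script>console.log("hello");</script>
console.log(csp);
```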
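Finally, a sketch of the nonce strategy, again assuming Express: the server mints a fresh random value for every response and stamps it on both the header and the script tags it emits, so markup injected by an attacker cannot carry it:

```ts
import { randomBytes } from "node:crypto";
import express from "express";

const app = express();

app.get("/", (req, res) => {
  // A fresh, unguessable secret per response; reusing or leaking it
  // would defeat the scheme.
  const nonce = randomBytes(16).toString("base64");

  res.setHeader("Content-Security-Policy", `script-src 'nonce-${nonce}'`);
  res.send(`<!doctype html>
<html>
  <body>
    <!-- Only script elements carrying the matching nonce will run. -->
    <script nonce="${nonce}">console.log("trusted");</script>
  </body>
</html>`);
});

app.listen(3000);
```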