A structured framework for assessing the quality of a document as a finished artefact, judged solely on what the document reveals about itself.
Most documents get evaluated informally. Someone reads them, forms a view, and either approves them or sends them back with comments. That process works, up to a point. What it rarely produces is a consistent, repeatable assessment that separates what is genuinely strong from what merely looks polished.
The Document Quality Evaluator changes that. It provides a structured framework for assessing a document against eight defined quality dimensions, producing a scored evaluation that is honest, specific, and grounded in what the document actually contains, not in assumptions about how it was made or who made it.
The framework is designed to work without access to the development process. Whether you are the document's author reviewing your own work before sharing it, a reviewer assessing a document you have been handed, or an independent assessor with no knowledge of how it was produced, the evaluation draws solely from what the document reveals about itself.
As the author, you produced the document and want to assess it on its own terms, setting aside what you know about how it was developed. This is the most common use case: a review before submission, sharing, or publication.
As a reviewer, you have been given the document to assess. You may or may not have development context. The framework allows you to choose whether to use that context or evaluate in isolation, and records which approach was taken.
As an independent assessor, you have no development knowledge and may be assessing whether AI was involved in the document's production. In this position, the framework's AI Voice measure operates as a detection and characterisation tool rather than a quality score.
Each of the eight dimensions targets a distinct aspect of quality, and every dimension is scored on a five-band scale from Insufficient to Exemplary.
Does the document communicate its own purpose, audience, and constraints, and then consistently serve them?
Are claims supported by evidence? Are specific assertions appropriately qualified where verification is not demonstrated?
Given what this document appears to set out to do, does it contain the level of analytical work that purpose requires?
Does the structure serve the reader, guiding them efficiently toward understanding or decision, or does it impose form without function?
Are the document's voice, formality, and language level consistent, coherent, and appropriate to its implied audience?
Does the document say what its evidence actually supports? Are claims proportionate, uncertainty acknowledged, and conclusions honestly drawn?
Does the document hold together as a coherent whole? Are claims consistent across sections, and does the conclusion follow from the argument made?
Does the document actually do what it sets out to do? Assessed against the purpose the document reveals about itself, does it arrive where it should?
Every evaluation begins with seven questions. They establish your relationship to the document, whether AI involvement is known, the document's type and developmental stage, what context is available beyond the document itself, and whether you want a full or light-touch review.
The answers calibrate how the framework is applied and are recorded in the evaluation report.
The framework operates in one of two modes, determined by how much context is available. In inference mode, all scores are based solely on what the document reveals. In informed mode, contextual information you provide is used alongside the document.
Both modes are rigorous. The report always declares which was applied.
Scores reflect what is actually present, not what was intended.
Adequate means the document passed a minimum threshold and nothing more. Capable means it is genuinely strong but not excellent. Exemplary means nothing more could reasonably be asked of the document at this standard.
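The scoring model described above can be sketched as a small data structure: seven intake answers, a declared mode, and a band score per dimension. This is an illustrative outline under stated assumptions, not part of the framework itself; the dimension names and question wording are not specified here, and only four of the five band labels are named in this description, so the sketch uses placeholders for the rest.

```python
from dataclasses import dataclass, field
from enum import Enum

# Band labels named in the description, ordered low to high.
# The label for band 2 is not given here, so it is deliberately omitted.
BAND_LABELS = {1: "Insufficient", 3: "Adequate", 4: "Capable", 5: "Exemplary"}

class Mode(Enum):
    INFERENCE = "inference"  # scores drawn solely from what the document reveals
    INFORMED = "informed"    # supplied context used alongside the document

@dataclass
class Evaluation:
    intake: dict           # answers to the seven opening questions, recorded in the report
    mode: Mode             # the report always declares which mode was applied
    scores: dict = field(default_factory=dict)  # dimension name -> band (1-5)
    notes: dict = field(default_factory=dict)   # optional notes per dimension

    def record(self, dimension: str, band: int, note: str = "") -> None:
        # Scores reflect what is actually present, expressed as a band from 1 to 5.
        if not 1 <= band <= 5:
            raise ValueError("band must be between 1 and 5")
        self.scores[dimension] = band
        if note:
            self.notes[dimension] = note

    def complete(self) -> bool:
        # A finished evaluation covers all eight dimensions.
        return len(self.scores) == 8
```

A manual evaluation would call `record` once per dimension as the form is worked through; an AI-assisted one would populate the same structure from the framework's criteria before the report is generated.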
Choose the evaluation route that suits your purpose. Both routes produce a scored evaluation across all eight dimensions.
Work through all eight dimensions using the structured evaluation form. Select a score for each dimension, add notes where useful, and download a formatted PDF report at the close. No prior experience of the framework is required; the form provides the structure while you bring the judgement.
Provide your document and answer the seven opening questions. The AI applies the full framework, scores each dimension against the defined criteria, and returns a complete evaluation report. Available as a conversational evaluation or as a formatted PDF record.