AI companies use model specs to define target behaviors during training and evaluation. Do current specs state the intended behaviors with sufficient precision, and do frontier models exhibit distinct behavioral profiles under the same spec? A team of researchers from Anthropic, Thinking Machines Lab, and Constellation presents a systematic method that stress tests model specs using value tradeoff scenarios, then quantifies cross-model disagreement as a signal of gaps or contradictions in the spec. The research team analyzed 12 frontier LLMs from Anthropic, OpenAI, Google, and xAI and links high disagreement to specification violations, missing guidance on response quality, and evaluator ambiguity. The team also released a public dataset.
Model specs are the written rules that alignment systems try to enforce. If a spec is complete and precise, models trained to follow it should not diverge widely on the same input. The research team operationalizes this intuition. It generates more than 300,000 scenarios that force a choice between two legitimate values, such as social equity and business effectiveness. It then scores responses on a 0 to 6 spectrum using value spectrum rubrics and measures disagreement as the standard deviation across models. High disagreement localizes the spec clauses that need clarification or more examples.
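The disagreement metric itself is simple. Below is a minimal sketch of how it can be computed, assuming each model's response to a scenario has already been mapped to a 0 to 6 rubric score on each of the two competing value dimensions; the example scores are hypothetical, not from the paper.

```python
import numpy as np

def disagreement_score(scores_value_a, scores_value_b):
    """Cross-model disagreement for one scenario.

    scores_value_a, scores_value_b: per-model rubric scores (0-6) on the
    two competing value dimensions, one entry per evaluated model.
    Following the paper's description, the scenario's disagreement is the
    larger of the two standard deviations, so a scenario is flagged when
    models spread out on either value axis.
    """
    return max(np.std(scores_value_a), np.std(scores_value_b))

# Hypothetical scores from 12 models on a "social equity vs. business
# effectiveness" scenario.
equity_scores = [1, 2, 2, 3, 5, 6, 2, 4, 5, 1, 3, 6]
business_scores = [5, 4, 4, 3, 1, 0, 4, 2, 1, 5, 3, 0]
print(disagreement_score(equity_scores, business_scores))
```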

So, what is the methodology used in this research?
The research team begins from a taxonomy of 3,307 fine-grained values observed in natural Claude traffic, which is more granular than typical model specs. For each pair of values, they generate a neutral query and two biased variants that lean toward one value. They build value spectrum rubrics that map positions from 0, meaning strongly opposing the value, to 6, meaning strongly favoring the value. They classify responses from 12 models against these rubrics and define disagreement as the maximum standard deviation across the two value dimensions. To remove near duplicates while preserving the hard cases, they use disagreement-weighted k-center selection with Gemini embeddings and a 2-approximation greedy algorithm.
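The classic greedy farthest-point rule is a 2-approximation for the k-center objective, which is presumably what the deduplication step builds on. The sketch below shows one plausible way to add the disagreement weighting, by scaling each candidate's distance to the current centers by its disagreement score; the exact weighting scheme used by the authors is an assumption here.

```python
import numpy as np

def weighted_greedy_k_center(embeddings, disagreement, k):
    """Disagreement-weighted greedy k-center selection (sketch).

    embeddings: (n, d) array of scenario embeddings (e.g. from Gemini).
    disagreement: (n,) disagreement scores used to bias selection toward
        high-disagreement scenarios.
    Returns the indices of k selected scenarios.
    """
    selected = [int(np.argmax(disagreement))]  # seed with the highest-disagreement scenario
    # Distance from every point to its nearest selected center so far.
    dist = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    for _ in range(k - 1):
        # Weighted farthest-point rule: prefer points that are both far
        # from existing centers and high in disagreement.
        idx = int(np.argmax(disagreement * dist))
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[idx], axis=1))
    return selected
```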


Scale and releases
The dataset on Hugging Face has three subsets. The default split has about 132,000 rows, the complete split has about 411,000 rows, and the judge evaluations split has about 24,600 rows. The dataset card lists the modality, the format (parquet), and the license (Apache 2.0).
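Loading the release for independent auditing is a one-liner with the `datasets` library. The repository id and subset name below are placeholders, since the article does not spell them out; substitute the actual identifiers from the Hugging Face card.

```python
from datasets import load_dataset

# Placeholder repo id and config name; replace with the ids listed on
# the dataset card referenced in the article.
ds = load_dataset("org/value-tradeoff-scenarios", "default")

print(ds)            # available splits and row counts
print(ds["train"][0])  # inspect one scenario record
```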
Understanding the Results
Disagreement predicts spec violations: Testing five OpenAI models against the public OpenAI model spec, high-disagreement scenarios show 5 to 13 times higher frequent non-compliance. The research team interprets this pattern as evidence of contradictions and ambiguities in the spec text rather than idiosyncrasies of a single model.
Specs lack granularity on quality inside the safe region: Some scenarios produce responses that all pass compliance yet differ in helpfulness. For instance, one model refuses and offers safe alternatives, while another only refuses. The spec accepts both, which signals missing guidance on quality standards.
Evaluator models disagree on compliance: Three LLM judges, Claude 4 Sonnet, o3, and Gemini 2.5 Pro, show only moderate agreement, with Fleiss' kappa near 0.42. The blog post attributes the conflicts to interpretive differences such as conscientious pushback versus transformation exceptions.
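Fleiss' kappa is a standard chance-corrected agreement statistic for more than two raters. A minimal sketch of the computation with `statsmodels` is below; the judge labels are hypothetical placeholders, not the paper's data, and values near 0.42 correspond to the "moderate agreement" band the authors report.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical compliance verdicts (0 = non-compliant, 1 = compliant)
# from three judge models on ten scenarios.
judgments = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
])

# aggregate_raters converts the (subjects x raters) label matrix into the
# (subjects x categories) count table that fleiss_kappa expects.
table, _ = aggregate_raters(judgments)
print(fleiss_kappa(table))
```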


Provider-level character patterns: Aggregating high-disagreement scenarios reveals consistent value preferences. Claude models prioritize ethical responsibility and intellectual integrity and objectivity. OpenAI models tend to favor efficiency and resource optimization. Gemini 2.5 Pro and Grok more often emphasize emotional depth and authentic connection. Other values, such as business effectiveness, personal growth and wellbeing, and social equity and justice, show mixed patterns across providers.


Refusals and false positives: The analysis shows topic-sensitive refusal spikes. It documents false-positive refusals, including legitimate synthetic biology study plans and standard Rust unsafe types that are often safe in context. Claude models are the most cautious by refusal rate and often provide alternative options, while o3 most often issues direct refusals without elaboration. All models show high refusal rates on child grooming risks.


Outliers reveal misalignment and over-conservatism: Grok 4 and Claude 3.5 Sonnet produce the most outlier responses, but for different reasons. Grok is more permissive on requests that others consider harmful, while Claude 3.5 sometimes over-rejects benign content. Outlier mining is a useful lens for locating both safety gaps and excessive filtering.


Key Takeaways
- Method and scale: The study stress-tests model specs using value-tradeoff scenarios generated from a 3,307-value taxonomy, producing 300,000+ scenarios and evaluating 12 frontier LLMs across Anthropic, OpenAI, Google, and xAI.
- Disagreement ⇒ spec problems: High cross-model disagreement strongly predicts issues in specs, including contradictions and coverage gaps. In tests against the OpenAI model spec, high-disagreement items show 5 to 13× higher frequent non-compliance.
- Public release: The team released a dataset for independent auditing and reproduction.
- Provider-level behavior: Aggregated results reveal systematic value preferences, for example Claude prioritizes ethical responsibility, Gemini emphasizes emotional depth, while OpenAI and Grok optimize for efficiency. Some values, such as business effectiveness and social equity and justice, show mixed patterns.
- Refusals and outliers: High-disagreement slices expose both false-positive refusals on benign topics and permissive responses on harmful ones. Outlier analysis identifies cases where one model diverges from at least 9 of the other 11, which is useful for pinpointing misalignment and over-conservatism (see the sketch after this list).
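A minimal sketch of that outlier rule is below, applied to one scenario's per-model rubric scores. The "diverges from at least 9 of the other 11" criterion comes from the article; the divergence threshold of 2 rubric points is an assumption for illustration.

```python
import numpy as np

def outlier_models(scores, min_diverging=9, threshold=2.0):
    """Flag models whose rubric score diverges from most of their peers.

    scores: per-model rubric scores (0-6) for one scenario.
    A model is an outlier if its score differs by more than `threshold`
    points from at least `min_diverging` of the other models.
    """
    scores = np.asarray(scores, dtype=float)
    outliers = []
    for i, s in enumerate(scores):
        others = np.delete(scores, i)
        if np.sum(np.abs(others - s) > threshold) >= min_diverging:
            outliers.append(i)
    return outliers

# One model scores 6 while the rest cluster near 0-1, so index 7 is flagged.
print(outlier_models([0, 0, 1, 1, 0, 1, 0, 6, 1, 0, 1, 0]))  # -> [7]
```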
This research turns disagreement into a measurable diagnostic for spec quality, not a vibe. The research team generates 300,000-plus value tradeoff scenarios, scores responses on a 0 to 6 rubric, then uses cross-model standard deviation to locate specification gaps. High disagreement predicts frequent non-compliance at 5 to 13 times the baseline under the OpenAI model spec. Judge models show only moderate agreement, with Fleiss' kappa near 0.42, which exposes interpretive ambiguity. Provider-level value patterns are clear: Claude favors ethical responsibility, OpenAI favors efficiency and resource optimization, Gemini and Grok emphasize emotional depth and authentic connection. The dataset enables reproduction. Use it to debug specs before deployment, not after.
Check out the Paper, Dataset, and Technical details.
