
Using a Local LLM as a Zero-Shot Classifier


James Surowiecki observed in The Wisdom of Crowds that “groups are remarkably intelligent, and are often smarter than the smartest people in them.” He was writing about decision-making, but the same principle applies to classification: get enough people to describe the same phenomenon and a taxonomy begins to emerge, even if no two people phrase it the same way. The challenge is extracting that signal from the noise.

I had several thousand rows of free-text data and needed to do exactly that. Each row was a short natural-language annotation explaining why an automated security finding was irrelevant, which functions to use for a fix, or what coding practices to follow. One person wrote “this is test code, not deployed anywhere.” Another wrote “non-production environment, safe to ignore.” A third wrote “only runs in CI/CD pipeline during integration tests.” All three meant the same thing, but no two shared more than a word or two.

The taxonomy was in there. I just needed the right tool to extract it. Traditional clustering and keyword matching couldn’t handle the paraphrase variation, so I tried something I hadn’t seen discussed much: using a locally hosted LLM as a zero-shot classifier. This blog post explores how it performed, how it works, and some tips for using and deploying these techniques yourself.

Why traditional clustering struggles with short free-text

Standard unsupervised clustering works by finding mathematical proximity in some feature space. For long documents, this is usually fine. Enough signal exists in word frequencies or embedding vectors to form coherent groups. But short, semantically dense text breaks these assumptions in several specific ways.

Embedding similarity conflates different meanings. “This key is only used in development” and “This API key is hardcoded for convenience” produce similar embeddings because the vocabulary overlaps. But one is about a non-production environment and the other is about an intentional security tradeoff. K-means or DBSCAN can’t distinguish them because the vectors are too close.
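You can check this directly with an off-the-shelf embedding model. A minimal sketch, assuming sentence-transformers and scikit-learn are installed (the model choice is illustrative, and the exact score will vary):

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Model choice is illustrative; any general-purpose sentence encoder shows the effect
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode([
    "This key is only used in development",
    "This API key is hardcoded for convenience",
])

# Prints a high similarity score despite the sentences meaning different things
print(cosine_similarity([vectors[0]], [vectors[1]])[0][0])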

Topic models surface words, not concepts. Latent Dirichlet Allocation (LDA) and its variants find word co-occurrence patterns. When your corpus consists of one-sentence annotations, the word co-occurrence signal is too sparse to form meaningful topics. You get clusters defined by “test” or “code” or “security” rather than coherent themes.

Regex and keyword matching can’t handle paraphrase variation. You can write rules to catch “test code” and “non-production,” but you’d miss “only used during CI,” “never deployed,” “development-only fixture,” and dozens of other phrasings that all express the same underlying idea.

The common thread: these methods operate on surface features (tokens, vectors, patterns) rather than semantic meaning. For classification tasks where meaning matters more than vocabulary, you need something that understands language.

LLMs as zero-shot classifiers

The key insight is simple: instead of asking an algorithm to discover clusters, define your candidate categories based on domain knowledge and ask a language model to classify each entry.

This works because LLMs process semantic meaning, not just token patterns. “This key is only used in development” and “Non-production environment, safe to ignore” contain almost no overlapping words, but a language model understands they express the same idea. This isn’t just intuition. Chae and Davidson (2025) compared 10 models across zero-shot, few-shot, and fine-tuned training regimes and found that large LLMs in zero-shot mode performed competitively with fine-tuned BERT on stance detection tasks. Wang et al. (2023) found LLMs outperformed state-of-the-art classification methods on three of four benchmark datasets using zero-shot prompting alone, no labeled training data required.

The setup has three components:

  • Candidate categories. A list of mutually exclusive categories defined from domain knowledge. In my case, I started with about 10 expected themes (test code, input validation, framework protections, non-production environments, etc.) and expanded to 20 candidates after reviewing a sample.
  • A classification prompt. Structured to return a category label and a brief reason. Low temperature (0.1) for consistency. Short max output (100 tokens) since we only need a label, not an essay.
  • A local LLM. I used Ollama to run models locally. No API costs, no data leaving my machine, and fast enough for thousands of classifications.

Here’s the core of the classification prompt:

CLASSIFICATION_PROMPT = """
Classify this text into one of these themes:

{themes}

Text:
"{content}"

Respond with ONLY the theme number and name, and a brief reason.
Format: THEME_NUMBER. THEME_NAME | Reason
Classification:
"""

And the Ollama call:

import ollama

response = ollama.generate(
    model="gemma2",
    prompt=prompt,
    options={
        "temperature": 0.1,  # Low temp for consistent classification
        "num_predict": 100,  # Short response, we just need a label
    },
)

Two things to note. First, the temperature setting matters. At 0.7 or higher, the same input can produce different classifications across runs. At 0.1, the model is nearly deterministic, which keeps classification stable. Second, limiting num_predict keeps the model from generating explanations you don’t need, which speeds up throughput considerably.
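Even at low temperature, the model occasionally drifts from the requested format, so it pays to parse the reply defensively. A minimal sketch (the fallback convention here is my own, not part of any library):

import re

def parse_classification(raw: str, num_themes: int) -> tuple[int, str]:
    """Extract (theme_number, reason) from a 'N. NAME | Reason' reply."""
    match = re.match(r"\s*(\d+)\.\s*[^|]*\|\s*(.*)", raw.strip())
    if match and 1 <= int(match.group(1)) <= num_themes:
        return int(match.group(1)), match.group(2).strip()
    return 0, raw.strip()  # unparseable; flag for manual review

theme_number, reason = parse_classification(response["response"], num_themes=20)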

Building the pipeline

The full pipeline has three steps: preprocess, classify, analyze.

Preprocessing strips content that adds tokens without adding classification signal. URLs, boilerplate phrases (“For more information, see…”), and formatting artifacts all get removed. Common phrases get normalized (“false positive” becomes “FP,” “production” becomes “prod”) to reduce token variation. Deduplication by content hash removes exact repeats. This step reduced my token budget by roughly 30% and made classification more consistent.
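A minimal sketch of that preprocessing step (the normalization map is a small excerpt, and the boilerplate patterns will differ per dataset):

import hashlib
import re

# Excerpt; the real normalization map was longer
NORMALIZATIONS = {"false positive": "fp", "production": "prod"}

def preprocess(entries: list[str]) -> list[str]:
    seen, cleaned = set(), []
    for text in entries:
        text = re.sub(r"https?://\S+", "", text.lower())       # strip URLs
        text = text.replace("for more information, see", "")   # boilerplate
        for phrase, short in NORMALIZATIONS.items():
            text = text.replace(phrase, short)                 # normalize common phrases
        text = " ".join(text.split())                          # collapse whitespace
        digest = hashlib.sha256(text.encode()).hexdigest()     # content hash
        if text and digest not in seen:                        # drop exact repeats
            seen.add(digest)
            cleaned.append(text)
    return cleaned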

Classification runs each entry through the LLM with the candidate categories. For ~7,000 entries, this took about 45 minutes on a MacBook Pro using Gemma 2 (9B parameters). I also tested Llama 3.2 (3B), which was faster but slightly less precise on edge cases where two categories were close. Gemma 2 handled ambiguous entries with noticeably better judgment.

One practical concern: long runs can fail partway through. The pipeline saves checkpoints every 100 classifications, so you can resume from where you left off.
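Putting the pieces together, the classification loop with checkpointing looks roughly like this (a sketch under the assumptions above; the JSONL checkpoint format and file name are illustrative):

import json
import os

import ollama

CHECKPOINT = "classifications.jsonl"

def classify_all(entries: list[str], themes: str) -> list[dict]:
    results = []
    if os.path.exists(CHECKPOINT):  # resume from a previous partial run
        with open(CHECKPOINT) as f:
            results = [json.loads(line) for line in f]

    with open(CHECKPOINT, "a") as f:
        for i in range(len(results), len(entries)):
            prompt = CLASSIFICATION_PROMPT.format(themes=themes, content=entries[i])
            response = ollama.generate(
                model="gemma2",
                prompt=prompt,
                options={"temperature": 0.1, "num_predict": 100},
            )
            record = {"index": i, "content": entries[i], "label": response["response"]}
            results.append(record)
            f.write(json.dumps(record) + "\n")
            if (i + 1) % 100 == 0:
                f.flush()  # checkpoint every 100 classifications
    return results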

Analysis aggregates the results and generates a distribution chart. Here’s what the output looked like:

Distribution of Semgrep “Memories” as assigned by the LLM clustering exercise. Image used with permission.

The chart tells a clear story. Over a quarter of all entries described code that only runs in non-production environments. Another 21.9% described cases where a security framework already handles the risk. These two categories alone account for half the dataset, which is the kind of insight that’s hard to extract from unstructured text any other way.
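The aggregation behind that chart is the simplest part of the pipeline. A sketch with collections.Counter, assuming the record format from the loop sketched above:

from collections import Counter

label_counts = Counter(r["label"].split("|")[0].strip() for r in results)
total = sum(label_counts.values())

for label, count in label_counts.most_common():
    print(f"{label:<50} {count:>5}  ({count / total:.1%})")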

When this approach isn’t the right fit

This approach works best in a specific niche: medium-scale datasets (hundreds to tens of thousands of entries), semantically complex text, and situations where you have enough domain knowledge to define candidate categories but no labeled training data.

It’s not the right tool when:

  • your categories are keyword-defined (just use regex),
  • when you have labeled training data (train a supervised classifier; it’ll be faster and cheaper),
  • when you need sub-second latency at scale (use embeddings and a nearest-neighbor lookup),
  • or when you genuinely don’t know what categories exist. In that case, run exploratory topic modeling first to develop intuition, then switch to LLM classification once you can define categories.

The other constraint is throughput. Even on a fast machine, classifying each entry takes a fraction of a second, so 7,000 entries takes close to an hour. For datasets above 100,000 entries, you’ll want an API-hosted model or a batching strategy.

Other applications worth trying

The pipeline generalizes to any problem where you have unstructured text and need structured categories.

Customer feedback. NPS responses, support tickets, and survey open-ends all suffer from the same problem: varied phrasings of a finite set of underlying themes. “Your app crashes every time I open settings” and “Settings page is broken on iOS” are the same category, but keyword matching won’t catch that.

Bug report triage. Free-text bug descriptions can be auto-categorized by component, root cause, or severity. This is especially useful when the person filing the bug doesn’t know which component is responsible.

Code intent classification. This is one I haven’t tried yet but find compelling: classifying code snippets, Semgrep rules, or configuration rules by purpose (authentication, data access, error handling, logging). The same technique applies. Define the categories, write a classification prompt, run the corpus through a local model.

Getting started

The pipeline is simple: define your categories, write a classification prompt, and run your data through a local model.

The hardest part isn’t the code. It’s defining categories that are mutually exclusive and collectively exhaustive. My advice: start with a sample of 100 entries, classify them manually, notice which categories you keep reaching for, and use those as your candidate list. Then let the LLM scale the pattern.

I used this approach as part of a larger analysis of how security teams remediate vulnerabilities. The classification results helped surface which kinds of security context are most common across organizations, and the chart above is one of the outputs from that work. If you’re interested in the security angle, the full report is available at that link.
