Adding campus leaders · India

We need leaders who train real LLMs from campus.

Rank outputs, label what helps or harms, steer models toward answers people want. You bring your college into that loop—friends, clubs, your batch—so the work stays human and honest.

In one line

Add your campus to the graph of people who actually train the LLMs the world uses.


Train AI from your college.

We need people on the ground who can plug their campus into live LLM work—not someday, but now. You are the link between your friends and the feedback that makes models safer and more useful for everyone.

Apply as a campus leader

Your college, on the map

Hostels, clubs, labs, batch groups—wherever people already gather.

Real LLM tasks

Preference rankings, helpfulness labels, policy calls—signal that actually trains models.

You own the vibe

No formal cohort launches—we give briefs and support; you grow it your way.


LLM task lanes

Real queues—rankings, labels, safety notes—that ship into training and eval

₹200–₹500/day

Freelance bands

Typical range for consistent, high-quality work—lane and rubrics decide the rest


Leader sync

Check-ins with 2une plus paths to research leaders from partner AI labs

Honest feedback beats synthetic data. Your job as a leader is to keep that bar high where you study.

Named stack

Three surfaces between campus signal and model training

We name the layers so teams know what moves where: capture, preference formation, and trainer-ready export.

Lattice

ingest_graph

Multimodal intake & topology

Shards land as structured units—text spans, audio clips, UI traces—with schema hooks so downstream jobs know provenance, locale, and consent scope.
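As a sketch of what one of those structured units might carry — the field names below are illustrative, not the actual Lattice schema:

```python
from dataclasses import dataclass, field

@dataclass
class Shard:
    """One structured intake unit (text span, audio clip, or UI trace)."""
    shard_id: str
    modality: str              # "text" | "audio" | "ui_trace"
    payload: str               # raw content, or a pointer to blob storage
    provenance: str            # where the shard came from
    locale: str                # e.g. "en-IN", "hi-IN"
    consent_scope: str         # what downstream use the contributor agreed to
    tags: dict = field(default_factory=dict)

# A text span from a campus session, tagged for training and eval use
shard = Shard(
    shard_id="s-0001",
    modality="text",
    payload="Compare these two model answers for helpfulness.",
    provenance="campus-session-12",
    locale="en-IN",
    consent_scope="training+eval",
)
```

The point is that provenance, locale, and consent scope travel with the payload, so downstream jobs never have to guess.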

Meridian

signal_path

Human preference & ranking planes

Pairwise choices, rubric-scored generations, and safety notes become preference tensors: the raw material for reward models and DPO-style alignment.
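For instance, a single pairwise choice can be folded into a DPO-style preference record. The `prompt`/`chosen`/`rejected` fields follow the common convention for this format; the actual Meridian representation isn't specified here:

```python
def to_dpo_record(prompt, completion_a, completion_b, winner):
    """Fold one pairwise human choice into a preference record.

    `winner` is "a" or "b", as a ranking UI might log it (hypothetical).
    """
    chosen, rejected = (
        (completion_a, completion_b) if winner == "a"
        else (completion_b, completion_a)
    )
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

rec = to_dpo_record(
    "Explain overfitting in one line.",
    "Memorizing training examples instead of learning the pattern.",
    "It happens when the model is bad.",
    winner="a",
)
# rec["chosen"] holds the completion the annotator preferred
```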

Foundry

trainer_handshake

Artifacts for SFT & eval

Curated JSONL, parquet-backed features, and eval harness seeds ship to training—not a dump folder, but versioned drops with QA gates.
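A QA gate on a drop can be as simple as a schema-plus-agreement check before promotion. This is a toy illustration with invented thresholds and field names, not the real gate:

```python
def passes_qa_gate(records, min_agreement=0.7,
                   required_keys=("prompt", "chosen", "rejected")):
    """Promote a batch only if every record carries the expected schema
    and mean inter-rater agreement clears a floor (values illustrative)."""
    if not records:
        return False
    schema_ok = all(all(k in r for k in required_keys) for r in records)
    mean_agreement = sum(r.get("agreement", 0.0) for r in records) / len(records)
    return schema_ok and mean_agreement >= min_agreement

batch = [
    {"prompt": "p1", "chosen": "a", "rejected": "b", "agreement": 0.9},
    {"prompt": "p2", "chosen": "c", "rejected": "d", "agreement": 0.8},
]
print(passes_qa_gate(batch))  # → True
```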

Data → model

Pipeline your campus feeds into

Your campus supplies the honest judgments—rankings, labels, safety calls—that turn raw prompts into training-grade data for real LLMs. Ambassadors recruit and coach people on those steps so the signal stays clean.

raw

Raw corpus

Prompts, completions, tool traces, multimodal payloads—ingested with metadata and policy tags.

annotate

Human annotation

Guidelines-backed labeling: spans, classifications, pairwise preferences, red-team probes.

qa

QA & adjudication

Inter-rater agreement, gold sets, and escalation queues before anything is promotion-ready.
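Inter-rater agreement is usually chance-corrected. A minimal Cohen's kappa for two annotators looks like this — a sketch assuming categorical labels, not the production metric:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Observed agreement between two annotators, corrected for the
    agreement you'd expect by chance from their label distributions."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    dist_a, dist_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (dist_a[c] / n) * (dist_b[c] / n)
        for c in set(dist_a) | set(dist_b)
    )
    return (observed - expected) / (1 - expected)

a = ["helpful", "harmful", "helpful", "helpful"]
b = ["helpful", "harmful", "harmful", "helpful"]
print(cohens_kappa(a, b))  # → 0.5
```

A kappa near zero means annotators agree no better than chance — the trigger for a guideline refresh rather than a silent ship.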

export

Trainer export

Versioned drops for SFT pairs, RM training, and offline eval—wired to your MLOps contract.

For leaders

How it works on your campus

Lightweight rhythm: recruit people you trust, keep the bar high, stay in sync with us—without running a formal ambassador “cohort program.”

  • 01

    Apply & scope

    Tell us your college, networks, and how people already find you—we match task lanes, tooling, and expectations.

  • 02

    Grow your circle

    Invite through societies, batch chats, and friends. We share talking points and assets; you grow it organically—no mandatory cohort kickoffs.

  • 03

    Keep quality high

    Watch rubric drift, answer questions fast, escalate edge cases—you are the local owner of honest, consistent feedback.

  • 04

    Unlock more lanes

    Strong campuses get access to richer task types and lab touchpoints; we reward clean work, not noise.

Roles

Who sits on the campus graph

Clear hats: leads operate the program; contributors generate signal; partners lend distribution and credibility.

lead

Chapter leads

Recruit, run sessions, and keep feedback honest. Top leads unlock facetime with research directors, freelance pay, swag, live meetups, and internship-style certificates from partner labs where available.

labeler

Contributors

Students executing tasks: pairwise rankings, span labels, policy annotations—paid, async-first, rubric-bound.

partner

Clubs & labs

Official partners for workshops on eval design, data ethics, and tooling—aligned to real queue economics.

network

Trusted subgraphs

Dorms, batches, societies: dense graphs reduce cold-start friction and improve guideline adherence.

Lexicon

Language we use on the floor

You do not need a PhD—but you should recognize the vocabulary your contributors will hear in tooling and briefings.

RLHF / preference optimization
Learning from ranked outputs or scalar rewards—your chapter supplies the comparisons trainers need.
SFT pairs
Instruction–response tuples used for supervised fine-tuning; quality and diversity both matter.
Eval harnesses
Repeatable prompts and graders that track regressions before and after a model release.
Red-teaming
Adversarial probing for jailbreaks, bias, and policy edge cases—documented for remediation.
Inter-rater reliability
Statistical agreement across annotators; low scores trigger guideline refreshes, not silent shipping.
Data cards
Structured summaries of source, collection, and limitations—downstream teams depend on them.
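A data card can start as nothing more than a structured summary attached to each drop. Every value below is invented for illustration:

```python
# A minimal data card as a plain dict; all values are placeholders.
data_card = {
    "name": "campus-prefs-example",
    "source": "pairwise rankings from campus contributors",
    "collection": "rubric-guided sessions; consent scope: training + eval",
    "limitations": [
        "English and Hindi prompts only",
        "contributor pool skews toward undergraduates",
    ],
}

for key in ("name", "source", "collection", "limitations"):
    assert key in data_card  # downstream teams depend on these fields existing
```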

For leaders

Pay, labs, swag, recognition

Strong chapters earn freelance bands, facetime with research directors from partner labs, drops, live sessions, and internship-style certificates where partners offer them—spelled out when you onboard.

  • 01 Lab access — office hours & Q&As with research leaders.
  • 02 Income — often ₹200–₹500/day for strong contributors; leader stipends by agreement.
  • 03 Swag & live — seasonal kits and hosted sessions (virtual or in person when we run them).
  • 04 Certificates — program letters; partner-lab credentials for standouts where available.

FAQ

Operational answers

Do I need an ML background?

No—but you should be comfortable reading task guidelines and explaining rubrics. We provide office hours on eval concepts, data cards, and escalation paths.

Campus leader

Apply

Your college, your networks, why you should be the one to bring LLM training work to campus.

Reviewed in batches · you’ll hear from us by email