Lattice
Rank outputs, label what helps or harms, steer models toward answers people want. You bring your college into that loop—friends, clubs, your batch—so the work stays human and honest.
In one line
Add your campus to the graph of people who actually train the LLMs the world uses.
Adding campus leaders
We need people on the ground who can plug their campus into live LLM work—not someday, but now. You are the link between your friends and the feedback that makes models safer and more useful for everyone.
Apply as a campus leader
Your college, on the map
Hostels, clubs, labs, batch groups—wherever people already gather.
Real LLM tasks
Preference rankings, helpfulness labels, policy calls—signal that actually trains models.
You own the vibe
No formal cohort launches—we give briefs and support; you grow it your way.
0+
LLM task lanes
Real queues—rankings, labels, safety notes—that ship into training and eval
₹200–₹500/day
Freelance bands
Typical range for consistent, high-quality work—lane and rubrics decide the rest
0×/mo
Leader sync
Check-ins with 2une plus paths to research leaders from partner AI labs
Honest feedback beats synthetic data. Your job as a leader is to keep that bar high where you study.
Named stack
We name the layers so teams know what moves where: capture, preference formation, and trainer-ready export.
Multimodal intake & topology
Shards land as structured units—text spans, audio clips, UI traces—with schema hooks so downstream jobs know provenance, locale, and consent scope.
Human preference & ranking planes
Pairwise choices, rubric-scored generations, and safety notes become preference tensors: the raw material for reward models and DPO-style alignment.
Artifacts for SFT & eval
Curated JSONL, parquet-backed features, and eval harness seeds ship to training—not a dump folder, but versioned drops with QA gates.
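To make the "versioned drops" concrete, here is a sketch of what a single pairwise-preference record in a JSONL export could look like. Every field name here is illustrative, not an actual 2une schema; the point is that provenance, locale, and consent scope travel with each record, as the intake layer promises.

```python
import json

# Hypothetical shape for one record in a versioned JSONL drop.
# Field names are illustrative, not an actual production schema.
record = {
    "drop_version": "2024-07-01.r3",      # versioned, QA-gated release
    "prompt": "Explain overfitting to a first-year student.",
    "chosen": "Overfitting means the model memorizes its training data...",
    "rejected": "Overfitting is when a model is very strong...",
    "annotator_id": "anon-4821",          # pseudonymous contributor
    "locale": "en-IN",
    "consent_scope": "training+eval",     # provenance & consent travel with data
    "rubric": "helpfulness-v2",
}

line = json.dumps(record)                 # JSONL: one JSON object per line
assert json.loads(line)["consent_scope"] == "training+eval"
```

Records like this pair naturally with reward-model or DPO-style training, where the `chosen`/`rejected` pair is the unit of signal.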
Data → model
Your campus supplies the honest judgments—rankings, labels, safety calls—that turn raw prompts into training-grade data for real LLMs. Ambassadors recruit and coach people on those steps so the signal stays clean.
raw
Prompts, completions, tool traces, multimodal payloads—ingested with metadata and policy tags.
annotate
Guidelines-backed labeling: spans, classifications, pairwise preferences, red-team probes.
qa
Inter-rater agreement, gold sets, and escalation queues before anything is promotion-ready.
export
Versioned drops for SFT pairs, RM training, and offline eval—wired to your MLOps contract.
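The qa step above leans on inter-rater agreement. One standard way to measure it is Cohen's kappa, which scores how often two raters agree beyond what chance alone would predict; a minimal pure-Python sketch (no claim about which tooling 2une actually uses):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two raters labeling five completions:
a = ["helpful", "helpful", "unhelpful", "helpful", "unhelpful"]
b = ["helpful", "unhelpful", "unhelpful", "helpful", "unhelpful"]
print(round(cohens_kappa(a, b), 2))  # → 0.62
```

A kappa near 1.0 means raters are consistent; low or negative values flag rubric drift or ambiguous guidelines, which is exactly what an escalation queue should catch before a drop is promoted.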
For leaders
Lightweight rhythm: recruit people you trust, keep the bar high, stay in sync with us—without running a formal ambassador “cohort program.”
Tell us your college, networks, and how people already find you—we match task lanes, tooling, and expectations.
Invite through societies, batch chats, and friends. We share talking points and assets; you grow it organically—no mandatory cohort kickoffs.
Watch for rubric drift, answer questions fast, and escalate edge cases; you are the local owner of honest, consistent feedback.
Strong campuses get access to richer task types and lab touchpoints; we reward clean work, not noise.
Roles
Clear hats: leads operate the program; contributors generate signal; partners lend distribution and credibility.
Recruit, run sessions, and keep feedback honest. Top leads unlock facetime with research directors, freelance pay, swag, live meetups, and internship-style certificates from partner labs where available.
Students executing tasks: pairwise rankings, span labels, policy annotations—paid, async-first, rubric-bound.
Official partners for workshops on eval design, data ethics, and tooling—aligned to real queue economics.
Dorms, batches, societies: dense graphs reduce cold-start friction and improve guideline adherence.
Lexicon
You do not need a PhD—but you should recognize the vocabulary your contributors will hear in tooling and briefings.
For leaders
Strong chapters earn freelance bands, facetime with research directors from partner labs, drops, live sessions, and internship-style certificates where partners offer them—spelled out when you onboard.
FAQ
No—but you should be comfortable reading task guidelines and explaining rubrics. We provide office hours on eval concepts, data cards, and escalation paths.
Campus leader
Your college, your networks, why you should be the one to bring LLM training work to campus.
Reviewed in batches · you’ll hear from us by email