At Tilde, we believe achieving true progress in interpretability requires innovation across the entire stack. From uncovering the fundamental building blocks of model computation to enabling precise steering of their behavior, we're driven by the fascinating scientific and mathematical problems that underlie true understanding of intelligence.
If coming from an ML background, ideally has a strong track record of publishing novel results at top-tier conferences (NeurIPS, ICML, ICLR).
If coming from more of a mathematics or physics background, ideally has done interesting research or has strong competition experience, and cares about ML/AI (and has demonstrated this in some way).
(Ideally) Has a high-effort blog where they discuss deeply technical concepts (e.g. something like this or this).
Proven expertise designing and managing large-scale distributed storage solutions, particularly cloud-based systems such as Amazon S3 or equivalents.
Substantial hands-on experience with high-performance computing (HPC) infrastructure, including cluster management, resource optimization, and large-scale compute deployments.
Extensive experience writing and optimizing GPU kernels, particularly using Triton, PTX, CUDA, or low-level systems languages (C/C++).
Track record of practical kernel optimizations, demonstrated improvements in training efficiency, and involvement in experimental hardware-software integration.
Strong familiarity with training pipelines, debugging, scaling, and performance optimization of open-source models.
Ability to independently design, run, and iterate large-scale ML experiments.
(Ideally) Has created impactful open-source projects or made significant contributions to them.
We are fortunate to be advised by:
We also welcome the opportunity to connect with other strong, thoughtful experts who share our vision.