At Tilde, we believe real progress in interpretability demands innovation across the entire stack. From identifying human-interpretable features and uncovering optimal sparse subnetworks to precisely steering model behavior, we're driven by the fascinating scientific and mathematical problems that underlie true model understanding.
We're looking for researchers with deep expertise in mathematics or physics who are passionate about doing frontier-extending AI interpretability work. We believe that the principles of genuine model understanding lie in rigorous mathematical and scientific discovery, not simply in existing ML techniques.
While some ML experience (interp or otherwise) is helpful, it’s not required—we’ll teach you what you need to get started, and the rest you will absorb as you progress and innovate.
What we genuinely value is your research taste: your ability to identify the truly impactful questions and pursue them relentlessly.
We need research engineers with serious ML engineering backgrounds and a track record of carrying out complex, scalable experiments. Your work will be essential to defining our research agenda—designing, building, and iterating on the experiments that will shape the future of interpretability.
While a background in interpretability isn’t required, strong engineering judgment and applied research taste are crucial.
Empirics drive the questions we pursue: the experiments you run will reveal the most promising research directions, and in this role you will work closely with researchers to figure out the best way to push Tilde forward.
We are fortunate to be advised by:
We also welcome the opportunity to connect with other strong, thoughtful experts who share our vision, whether through technical input on specific research and engineering directions or strategic guidance on our long-term path.