HUMAIN: HUman-centered MAchine INtelligence
We are a group of researchers who are passionate about AI, and sometimes worried about it. We think deeply about how to build controllable machine intelligence that works in the best interests of people.
What do we believe in?
Machine intelligence should be useful, controllable, and understandable, especially when it interacts with humans at scale. Capability alone is not enough; controllability and interpretability matter just as much.
What are we thinking about these days?
Foundations
We study how modern machine learning systems (especially large language models) generalize, and when they fail.
A key question we are currently exploring is disentanglement. Learning at today’s scale is deeply interleaved: we cannot easily separate what a model has learned from what it has not, nor can we reliably determine how different capabilities extrapolate. We aim to develop methods to disentangle the learning process itself, analyze what trained models have actually learned (e.g., separating knowledge from skills), and design training procedures that are disentangled from the ground up.
Personal AI assistants
How do we work?
We value curiosity and believe research should be fun and fulfilling. Work-life balance matters. We are question-driven and learn whatever we need (technical or not) to answer the questions we care about.
Prospective students
If you are interested in joining us, please fill in the Google form.