We're building tools to measure, monitor, and mitigate bias in AI-generated content.
Our mission is to make AI more transparent, fair, and accountable.
Our work is informed by leading research in responsible AI, including:
Seminal work identifying gender stereotypes in word embeddings and proposing methods to debias them (a brief sketch of the projection-based idea appears below).
Techniques to identify and reduce harmful content generated by advanced AI models.
Analysis of bias propagation in multimodal AI systems combining text, image, and audio data.
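To give a concrete sense of the embedding-debiasing work mentioned above, here is a minimal sketch of the projection-based approach: estimate a gender direction from definitional word pairs, score words by how strongly they align with it, and remove that component. The embeddings, word list, and pairs below are illustrative placeholders, not our tooling or any specific paper's released code.

```python
import numpy as np

# Toy embeddings standing in for real word vectors (e.g. word2vec/GloVe);
# the random values are placeholders for illustration only.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["he", "she", "man", "woman", "doctor", "nurse", "engineer"]}

def gender_direction(pairs, emb):
    """Estimate a gender direction as the mean of normalized difference
    vectors over definitional pairs such as (he, she)."""
    diffs = [emb[a] - emb[b] for a, b in pairs]
    diffs = [d / np.linalg.norm(d) for d in diffs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def bias_score(word, g, emb):
    """Cosine similarity between a word vector and the gender direction;
    values far from zero suggest a gendered association."""
    v = emb[word]
    return float(v @ g / (np.linalg.norm(v) * np.linalg.norm(g)))

def debias(word, g, emb):
    """Hard-debias step: remove the vector's component along the gender
    direction, keeping only the orthogonal remainder."""
    v = emb[word]
    return v - (v @ g) * g

g = gender_direction([("he", "she"), ("man", "woman")], emb)
for w in ["doctor", "nurse", "engineer"]:
    before = bias_score(w, g, emb)
    emb[w] = debias(w, g, emb)
    after = bias_score(w, g, emb)
    print(f"{w}: bias before={before:+.3f}, after={after:+.3f}")
```

On real embeddings, the "after" scores for profession words collapse to roughly zero, which is the measurable effect debiasing methods target; production tooling would add evaluation word sets and preserve words whose gender association is legitimate.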