Research · Full Time · Hybrid, Gurugram, India

AI Researcher

Responsible AI Labs

About the Role

We're looking for an AI Researcher to join our team and drive foundational research in responsible AI evaluation and alignment. You'll design novel approaches to measuring fairness, safety, reliability, and other critical dimensions of AI system behavior.

Your work will directly shape the RAIL scoring framework used by organizations worldwide to evaluate and improve their AI systems. You'll collaborate closely with our engineering team to translate research insights into production-grade evaluation tools.

This is an opportunity to do meaningful research that has immediate real-world impact, bridging the gap between academic AI safety research and practical industry deployment.

Responsibilities

  • Design and conduct research on AI evaluation methodologies across fairness, safety, reliability, transparency, privacy, accountability, inclusivity, and user impact dimensions
  • Develop novel scoring rubrics and calibration techniques for multi-dimensional AI assessment
  • Publish research findings in top-tier venues (NeurIPS, ICML, FAccT, AIES)
  • Collaborate with engineering to integrate research prototypes into the RAIL scoring pipeline
  • Analyze large-scale evaluation datasets to identify patterns and improve scoring accuracy
  • Stay current with AI safety and alignment literature, synthesizing relevant findings for the team
  • Mentor junior researchers and contribute to a culture of rigorous scientific inquiry

Requirements

  • PhD or equivalent research experience in Machine Learning, NLP, AI Safety, or a related field
  • Strong publication record in relevant top-tier conferences or journals
  • Deep understanding of bias, fairness, and safety evaluation in language models
  • Proficiency in Python and modern ML frameworks (PyTorch, HuggingFace Transformers)
  • Experience with large language model evaluation and benchmarking
  • Excellent written and verbal communication skills

Nice to Have

  • Experience with RLHF, constitutional AI, or other alignment techniques
  • Familiarity with regulatory frameworks for AI (EU AI Act, NIST AI RMF)
  • Prior experience building evaluation datasets or annotation pipelines
  • Contributions to open-source AI safety tools or frameworks


Responsible AI Labs does not discriminate on the basis of race, religion, color, national origin, gender, gender identity or expression, sexual orientation, age, marital status, disability, or veteran status.

Interested?

Fill out the application form to apply for this role.

Apply Now

Benefits & Perks

Hybrid Work Culture

Enjoy the best of both worlds with a flexible hybrid model at our Gurugram office.

Learning Budget

Annual stipend for courses, conferences, and books to fuel your professional growth.

Flexible Hours

Set your own schedule. We focus on outcomes, not hours logged.

Health & Wellness

Comprehensive health benefits and wellness programs to keep you at your best.

Equity Participation

Share in our success with competitive equity packages for all full-time roles.

Team Retreats

Annual in-person gatherings to connect, collaborate, and celebrate together.