
Member of Technical Staff, Capacity & Efficiency Infrastructure - MAI Superintelligence Team

Microsoft
$119,800.00 - $234,700.00 / yr
United States, Washington, Redmond
Mar 21, 2026
Overview

Microsoft AI is looking for a Member of Technical Staff - Capacity & Efficiency Infrastructure to help us manage, and improve the efficiency of, our compute fleet. We're seeking someone who brings an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective. The ideal candidate enjoys building world-class consumer experiences and products in a fast-paced environment. You will actively contribute to the development of the AI models powering our innovative products. Expect to wear multiple hats and work across engineering, research, and everything in between. Your contributions will span model architecture, data curation, training and inference infrastructure, evaluation protocols, alignment and reinforcement learning from human feedback (RLHF), and many other exciting topics at the cutting edge of AI.

Microsoft AI is building the training infrastructure that powers frontier-scale models and advances research toward humanist superintelligence.

As a Member of Technical Staff - Capacity & Efficiency, you will contribute to a fast-moving codebase that enables training at unprecedented scale. You will build software and mathematical models to measure how effectively we use our capacity, then develop tools and techniques to help us improve. This will require you to partner with ML researchers to scale up the latest research recipes, implement new forms of distributed training parallelism, and ensure the reliability and performance of thousands of GPUs across our supercomputing fleet. Profiling, benchmarking, debugging, and fine-grained optimization are core to this role, demanding both engineering rigor and creativity.

Microsoft Superintelligence Team

The Microsoft Superintelligence team's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

This role is part of Microsoft AI's Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being.

We're also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact. If you're a brilliant, highly ambitious, and low-ego individual, you'll fit right in. Come join us as we work on our next generation of models!


By applying to this Mountain View, CA position, you acknowledge that you are required to be local to the San Francisco area and in the office four days a week.



Responsibilities
  • Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.
  • Build and evolve telemetry systems that provide visibility into infrastructure and ML model performance, utilization, and cost metrics
  • Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems
  • Drive architectural improvements across ML services that deliver measurable efficiency gains
  • Build and evolve tools to automatically provide insights and recommendations to improve fleet-wide efficiency
  • Optimize collective communication libraries (e.g., NCCL) for emerging NVLink and InfiniBand topologies
  • Partner with ML researchers and infrastructure engineers to understand their plans and future needs and develop plans to balance growth with efficiency
  • Collaborate with hardware teams to optimize for next-generation accelerators (NVIDIA, MAIA, and beyond)
  • Embody our Culture and Values.


Qualifications

Required Qualifications:

  • Bachelor's Degree in Computer Science, or related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
    • OR equivalent experience

Preferred Qualifications:

  • Bachelor's Degree in Computer Science or related technical field AND 10+ years technical engineering experience with coding in languages including, but not limited to, C++ or Python OR Master's Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C++ or Python
    • OR equivalent experience
  • Deep understanding of the fundamentals of GPU architectures and DL/LLM architectures.
  • Deep experience in profiling and analyzing performance in large-scale distributed computing systems.
  • Deep experience in profiling and analyzing the performance of ML models, especially GenAI models.
  • Experience with low-level GPU programming (CUDA, Triton, NCCL) and frameworks such as PyTorch or JAX.
  • Experience in leading technical projects and supporting architectural decisions with data.
  • Experience building infrastructure for large-scale machine learning or generative AI workloads.
  • Experience in networking (InfiniBand, NVLink), storage systems, or distributed training parallelisms.
  • Track record of contributing to high-performance computing or large-scale AI infrastructure projects.


Software Engineering IC4 - The typical base pay range for this role across the U.S. is USD $119,800 - $234,700 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 - $258,000 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay

Software Engineering IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

