GPU Cluster Resource Scheduling and Optimization Engineer - San Francisco, California
Company: Together.ai Location: San Francisco, California
Posted On: 05/09/2025
GPU Cluster Resource Scheduling and Optimization Engineer

About Us
Together.ai is driving innovation in AI infrastructure by creating cutting-edge systems that enable scalable and efficient machine learning workloads. Our team tackles the unique challenges of resource scheduling, optimization, and orchestration for large-scale AI training and inference systems.

We are looking for a talented AI Workload Resource Scheduling and Optimization Engineer to join our team. This role focuses on designing and implementing advanced scheduling algorithms, resource management strategies, and optimization techniques to maximize performance and minimize costs for large-scale distributed AI workloads.

Responsibilities
- Resource Scheduling and Allocation:
- Develop and implement intelligent scheduling algorithms tailored for distributed AI workloads on multi-cluster and multi-tenant environments.
- Ensure efficient allocation of GPUs, TPUs, and CPUs across diverse workloads, balancing resource utilization and job performance.
- Performance Optimization:
- Design optimization techniques for dynamic resource allocation, addressing real-time variations in workload demand.
- Implement load balancing, job preemption, and task placement strategies to maximize throughput and minimize latency.
- Scalability and Efficiency:
- Build systems that efficiently scale to thousands of nodes and petabytes of data.
- Optimize training and inference pipelines to reduce runtime and cost while maintaining accuracy and reliability.
- Monitoring and Analytics:
- Build tools for real-time monitoring and diagnostics of resource utilization, job scheduling efficiency, and bottlenecks.
- Leverage telemetry data and machine learning models for predictive analytics and proactive optimization.
- Collaboration and Innovation:
- Collaborate with researchers, data scientists, and platform engineers to understand workload requirements and align resource management solutions.
- Stay updated with the latest trends in distributed systems, AI model training, and cloud-native technologies.

Qualifications
Must-Have:
- Experience:
- 5+ years of experience in resource scheduling, distributed systems, or large-scale machine learning infrastructure.
- Technical Skills:
- Proficiency in distributed computing frameworks (e.g., Kubernetes, Slurm, Ray).
- Expertise in designing and implementing resource allocation algorithms and scheduling frameworks.
- Hands-on experience with cloud platforms (e.g., AWS, GCP, Azure) and GPU orchestration.
- Programming:
- Proficient in Python, C++, or Go for building high-performance systems.
- Optimization Skills:
- Strong understanding of operational research techniques, such as linear programming, graph algorithms, or evolutionary strategies.
- Soft Skills:
- Analytical mindset with a focus on problem-solving and performance tuning.
- Excellent collaboration and communication skills across teams.

Nice-to-Have:
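For context on the kind of scheduling problem this role describes, the sketch below shows a minimal greedy best-fit GPU allocator: each job is placed on the node with the fewest free GPUs that can still fit it, which tends to reduce fragmentation. This is an illustration only, not part of the posting or Together.ai's stack; the `Node` and `place` names are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class Node:
    """A cluster node with a count of currently free GPUs (hypothetical model)."""
    name: str
    free_gpus: int


def place(jobs, nodes):
    """Greedy best-fit placement.

    jobs:  list of (job_name, gpus_needed) tuples, in arrival order.
    nodes: list of Node objects; free_gpus is decremented in place.

    Returns a dict mapping job_name -> node name, or None when no
    node currently has enough free GPUs (the job stays pending).
    """
    placement = {}
    for job_name, gpus_needed in jobs:
        candidates = [n for n in nodes if n.free_gpus >= gpus_needed]
        if not candidates:
            placement[job_name] = None  # no node can host it right now
            continue
        # Best fit: the feasible node with the fewest free GPUs,
        # leaving larger contiguous capacity for future big jobs.
        best = min(candidates, key=lambda n: n.free_gpus)
        best.free_gpus -= gpus_needed
        placement[job_name] = best.name
    return placement
```

A real scheduler in this space would also handle preemption, multi-tenant fairness, and topology (NVLink/NUMA placement), but the bin-packing core above is a common starting point.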