Student Researcher (Machine Learning Sys-US) - 2024 Start (PhD)

ByteDance

💼 Graduate Job

San Jose

AI generated summary

  • PhD student in distributed computing with knowledge of machine learning algorithms, PyTorch, CUDA, and programming languages such as C/C++ and Python. Experience with GPU/high performance computing, distributed training optimization, AI compiler stacks, large scale systems, and CUDA programming is preferred. Graduating in December 2024 or later with the intent to return to the degree program.
  • Conduct research to optimize machine learning systems, develop heterogeneous computing architecture, implement model-specific optimizations, and improve efficiency for large scale distributed training jobs.

Description

  • We are looking for talented individuals to join us for a Student Researcher opportunity in 2024. Student Researcher opportunities at ByteDance aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance.
  • The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work.
  • Candidates can apply to a maximum of two positions and will be considered for jobs in the order they apply. The application limit applies to ByteDance and its affiliates' jobs globally. Applications are reviewed on a rolling basis - we encourage you to apply early.

Requirements

  • Currently enrolled in a PhD program focused on distributed and parallel computing principles, with knowledge of recent advances in computing, storage, networking, and hardware technologies.
  • Familiar with machine learning algorithms, platforms, and frameworks such as PyTorch and JAX.
  • Have a basic understanding of how GPUs and/or ASICs work.
  • Expert in at least one or two of the following programming languages in a Linux environment: C/C++, CUDA, Python.
Preferred Qualifications

  • Graduating December 2024 or later with the intent to return to the degree program after completing the position.
  • Any of the following experiences will be a big plus (a brief sketch of this kind of work follows the list):
      • GPU-based high performance computing and RDMA high performance networking (MPI, NCCL, ibverbs).
      • Distributed training framework optimizations such as DeepSpeed, FSDP, Megatron, and GSPMD.
      • AI compiler stacks such as torch.fx, XLA, and MLIR.
      • Large scale data processing and parallel computing.
      • Designing and operating large scale systems in cloud computing or machine learning.
      • In-depth CUDA programming and performance tuning (CUTLASS, Triton).
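For orientation, here is a minimal sketch, not taken from the posting, of the NCCL-backed collective communication that underpins the distributed training work listed above. The PyTorch APIs used are standard; the script itself is an illustrative assumption, meant to be launched with torchrun on a multi-GPU host.

```python
# Illustrative sketch (assumption, not ByteDance code): a NCCL all-reduce,
# the basic collective that distributed training frameworks build on.
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
import os

import torch
import torch.distributed as dist


def main():
    # torchrun sets RANK / WORLD_SIZE / LOCAL_RANK in the environment,
    # so the default env:// rendezvous works out of the box.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor scaled by its rank id.
    x = torch.ones(4, device="cuda") * dist.get_rank()

    # All-reduce sums the tensors across ranks, in place on every rank.
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    # Every rank now holds the identical summed result.
    print(f"rank {dist.get_rank()}: {x.tolist()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Frameworks named above, such as DeepSpeed and FSDP, layer gradient sharding and communication/computation overlap on top of exactly these collectives.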

Education requirements

Currently Studying
PhD

Area of Responsibilities

Data

Responsibilities

  • Research and develop our machine learning systems, including heterogeneous computing architecture, management, scheduling, and monitoring.
  • Manage cross-layer optimization across systems, AI algorithms, and hardware (GPU, ASIC) for machine learning.
  • Implement both general purpose training framework features and model specific optimizations (e.g., LLMs, diffusion models), in the spirit of the sketch below.
  • Improve efficiency and stability for extremely large scale distributed training jobs.
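To give a concrete flavor of the compiler-stack side of these responsibilities, below is a minimal, assumed-for-illustration torch.fx pass: it traces a toy module and rewrites every ReLU into a GELU at the graph level. The module and pass names are hypothetical; only the torch.fx APIs are real.

```python
# Illustrative sketch (assumption, not ByteDance code): a toy torch.fx
# graph rewrite of the kind AI compiler stacks perform at scale.
import torch
import torch.fx as fx


class TinyNet(torch.nn.Module):
    """Toy model standing in for a real training workload."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 8)

    def forward(self, x):
        return torch.relu(self.linear(x))


def relu_to_gelu(module: torch.nn.Module) -> fx.GraphModule:
    """Trace the module, then swap every torch.relu call for gelu."""
    gm = fx.symbolic_trace(module)
    for node in gm.graph.nodes:
        # call_function nodes carry the traced op in `target`.
        if node.op == "call_function" and node.target is torch.relu:
            node.target = torch.nn.functional.gelu
    gm.recompile()  # regenerate the forward from the mutated graph
    return gm


gm = relu_to_gelu(TinyNet())
print(gm.code)               # generated forward now calls gelu
out = gm(torch.randn(2, 8))  # rewritten module still runs as a normal nn.Module
```

Production passes in stacks such as torch.fx, XLA, or MLIR perform the same traverse-and-rewrite pattern, just with operator fusion, sharding, or layout decisions instead of this toy substitution.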

Details

Work type

Full time

Work mode

Office

Location

San Jose