About The Role
RadixArk is building the infrastructure layer for frontier AI systems — unified inference, training, and evaluation stacks powering next-generation LLM applications at scale. Our platform integrates high-performance model serving, RL pipelines, and large-scale distributed systems across GPUs, TPUs, and emerging accelerators. We work at the intersection of systems engineering, AI research, and developer experience. We are looking for a world-class Full Stack Engineer who thrives in high-ambiguity environments and wants to build foundational tools for the next decade of AI.
Requirements
Technical Excellence
4+ years of professional full stack engineering experience
Based in Palo Alto, CA (hybrid) or remote (U.S.)
Deep experience in:
Frontend: React / Next.js / TypeScript
Backend: Python (FastAPI / Django) or Node.js
RESTful and/or gRPC API design
Strong understanding of distributed systems concepts
Experience building systems that handle real-time streaming or high-throughput workloads
Strong database design skills (Postgres, Redis, etc.)
Familiarity with containerized environments (Docker, Kubernetes)
Strong software engineering fundamentals: testing, CI/CD, version control, design reviews
Bonus (High Impact)
Experience building developer tools or infrastructure platforms
Experience in ML infrastructure, model serving, or AI tooling
Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry)
Experience with WebSockets or streaming architectures
Performance optimization at scale
Experience building internal platforms used by engineers
Responsibilities
You will own end-to-end product surfaces across our developer platform:
Design and implement high-performance web applications for AI infrastructure tooling
Build scalable backend systems that interact with distributed inference and training services
Architect APIs and service layers that serve enterprise and research customers
Develop observability dashboards, experiment tracking UIs, and performance analytics tools
Optimize real-time streaming interfaces for model inference workflows
Work closely with ML engineers and infra teams to expose complex systems through intuitive product surfaces
Lead architectural decisions for frontend-backend communication, state management, and performance tuning
Ship production-grade features in fast iteration cycles
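To give a flavor of the real-time streaming work described above, here is a minimal, illustrative sketch. It is not RadixArk's actual code; it simply shows the kind of wire-level concern involved: wrapping a stream of model tokens into Server-Sent Events frames that a frontend EventSource client can consume. The function name and event names are hypothetical.

```python
# Illustrative sketch only (assumed names, not RadixArk's implementation):
# format a stream of model tokens as Server-Sent Events (SSE) frames,
# the wire format a browser EventSource client consumes.

from typing import Iterable, Iterator


def sse_frames(tokens: Iterable[str], event: str = "token") -> Iterator[str]:
    """Wrap each model token in an SSE frame (`event:` + `data:` lines)."""
    for tok in tokens:
        # Each SSE frame is terminated by a blank line.
        yield f"event: {event}\ndata: {tok}\n\n"
    # A terminal frame lets the client close the connection cleanly.
    yield "event: done\ndata: [DONE]\n\n"


if __name__ == "__main__":
    for frame in sse_frames(["Hello", ",", " world"]):
        print(frame, end="")
```

In a real service these frames would be streamed from an async endpoint with backpressure handling; the sketch only shows the framing itself.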
You will not just “write UI.”
You will build the control plane for frontier AI systems.
Your work will directly power:
Large-scale LLM inference systems
Reinforcement learning training platforms
Evaluation and experimentation frameworks
Developer-facing AI infrastructure products
This is not CRUD dashboard engineering.
This is building the operating system for AI builders.
Compensation
Competitive base salary
Meaningful equity ownership
Full benefits
Access to top-tier AI tooling (enterprise Claude, Codex, etc.)
Work alongside world-class AI systems engineers
Why Join
Work with a world-class team from xAI, Google, and leading research labs.
Directly impact the future of open AI infrastructure.
Equal Opportunity
RadixArk is an equal opportunity employer.
The most compelling perk in Silicon Valley isn’t 401(k) matching.
It’s working on systems that define the next computing paradigm.
If you want to build the infrastructure layer behind frontier AI — and ship fast — we want to talk.
