Staff Software Engineer (7+ years exp.)
Job Type: Full Time
Remote Work Policy: In office
Must be onsite (Pleasanton) - 5 days/week
About UsageAI: We are a rapidly scaling startup with proven product-market fit. We are on a mission to build the world's best platform for companies to understand and manage their software costs as they scale their systems. Our engineering culture values pragmatism, transparency, simplicity, and collaboration. Our overall goal is to do everything necessary to help our users save money on their cloud spend.
The Role: As a startup, we value people who are versatile, technology-agnostic, and eager to learn. You will work with new and popular technologies within a cross-functional team and see the real-time impact of your work across the organization and beyond.
The Opportunity: The early decisions that you make in your product(s) may define the direction that the product and even the company takes in the long run. You will help set up data-driven decision making processes that will guide company decisions in the future. As we scale rapidly, you will have the opportunity to dramatically expand your scope.
What will you do?
- As a technical lead, design and architect data applications for processing data at scale.
- Design and architect scalable ETL pipelines.
- Design data architecture: data streaming, data storage, and data management.
- Improve the performance, scalability, and efficiency of data pipelines.
- Design and implement recommendation algorithms using time series data.
- Collaborate with a distributed team on solving complex design problems.
What do you need?
- BS degree in Computer Science or relevant field of study.
- At least 7 years of software development experience.
- 5-10 years of industry experience designing and building distributed systems in production.
- Experience with AWS, GCP, or Azure.
- Experience developing scalable applications using Java, Python, Node.js, Go, or Rust.
- Experience designing and developing reliable, fault-tolerant applications.
- Strong database fundamentals including SQL, performance, and data modeling.
- 2+ years of experience working with large-scale streaming platforms (e.g., Kafka, Kinesis), data processing frameworks (e.g., Spark, Apache Flink), and analytics databases (e.g., ClickHouse, Druid, Elasticsearch).
- 2+ years of experience with workflow management tools such as Airflow or Temporal.
- Experience optimizing the performance of Java and Python applications, databases such as PostgreSQL, Cassandra, and Elasticsearch, and messaging services such as Kafka and RabbitMQ.
- Interest in, and motivation to be part of, a fast-moving startup.