- Recently funded (raised funding in the past six months)
Founding Software Engineer (Data/Processing)
- €100k – €140k • 0.25% – 1.0%
- 10+ years of experience
- Full Time
- Remote only
About the job
Summary of the Role:
As our Founding Software Engineer with a specialization in backend, data pipelines, and distributed processing, you will be a key player in building the backbone of our technology. You’ll be responsible for designing and implementing robust backend systems, scalable data pipelines, and efficient distributed processing frameworks that will power our core product. This is a unique opportunity to shape the technical foundation of our company and influence our long-term success.
Your Contributions to Our Journey:
Architect Backend Systems: Design, develop, and maintain scalable and secure backend services that handle high volumes of data and transactions.
Build Scalable Data Pipelines: Develop and optimize data pipelines to efficiently process and move large datasets, ensuring data integrity and low-latency access.
Implement Distributed Processing: Create and manage distributed processing systems that handle parallel computation and data processing at scale.
Ensure Data Integrity and Security: Implement best practices for data storage, encryption, and access control to safeguard data integrity and privacy.
Collaborate on API Design: Work closely with frontend engineers and other team members to design and implement RESTful APIs that integrate seamlessly with the frontend.
Optimize for Performance: Continuously monitor and optimize system performance, focusing on improving speed, scalability, and reliability.
Establish Best Practices: Define and enforce best practices for coding, testing, and deployment, ensuring high standards across the engineering team.
Lead and Mentor: As the team expands, mentor junior engineers and lead technical discussions, helping to build a strong engineering culture.
What You Need to Be Successful:
Extensive Experience: 10+ years of experience in backend development, with a focus on data pipelines and distributed processing.
Backend Expertise: Strong proficiency in backend languages and frameworks such as Python, Java, Go, or Node.js, and experience building microservices.
Data Pipeline Mastery: Expertise in building and optimizing data pipelines using tools like Apache Kafka, Apache Spark, or AWS Glue.
Distributed Systems Knowledge: Experience designing and implementing distributed systems for parallel data processing, with a strong understanding of tools like Hadoop, Spark, or Flink.
Database Proficiency: Deep knowledge of both relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., Cassandra, MongoDB), with experience in designing scalable database architectures.
Cloud and DevOps: Familiarity with cloud platforms (e.g., AWS) and experience with containerization (Docker, Kubernetes) and CI/CD pipelines.
Security Focus: Strong understanding of data security and privacy best practices, including encryption and secure data access methodologies.
Problem-Solving Skills: Excellent analytical and problem-solving skills, with the ability to design and implement solutions that are both efficient and scalable.
Collaborative Spirit: Excellent communication and teamwork skills, with the ability to work effectively in a cross-functional team.
Agility and Adaptability: Comfort working in a fast-paced startup environment with the ability to pivot and adapt as needed.
Why Join Us:
Ambitious Challenges: We are using Generative AI (LLMs and Agents) to solve some of the most pressing challenges in cybersecurity today. You’ll be working at the cutting edge of this field, aiming to deliver significant breakthroughs for security teams.
Expert Team: We are a team of hands-on leaders with deep experience in Big Tech and Scale-ups. Our team has been part of the leadership teams behind multiple acquisitions and an IPO.
Impactful Work: Cybersecurity is becoming a challenge for most companies, and helping them mitigate risk ultimately drives better outcomes for all of us.