Senior Machine Learning Engineer
- Full Time
About the job
We believe small businesses are at the heart of our communities, and championing them is worth fighting for. We empower small business owners to manage their finances fearlessly by offering the simplest, all-in-one financial management solution they can't live without.
About the role
As a Senior Machine Learning Engineer at Wave, you will help shape the future of our AI capabilities by combining traditional machine learning with cutting-edge Generative AI technologies. You’ll be responsible for developing, scaling, and maintaining production-grade ML models while applying innovative GenAI approaches to solve customer and business challenges. This role offers an opportunity to work with a modern, cloud-native data and AI stack, where traditional ML systems and emerging GenAI solutions coexist to deliver secure, reliable, and scalable AI services, in close collaboration with cross-functional teams.
Here’s How You Make an Impact:
- Collaborative AI Development: Collaborate with product and technical teams to develop and deploy both traditional ML models and GenAI systems, leveraging their complementary strengths and pooling your expertise to address Wave’s business objectives.
- MLOps Automation and Scalability: Drive the automation and scalability of Wave’s MLOps stack, supporting the scaling of production-ready ML models. Leverage SageMaker Feature Store and our data lake for model training, ensuring efficient deployment and continuous improvement for both batch and real-time inference.
- GenAI Solutions: Design and build GenAI proofs-of-concept (PoCs), evaluating and deploying those that meet performance and business impact criteria into production. Leverage AWS cloud services, including SageMaker, Bedrock, and Amazon Q, to develop GenAI applications that solve complex problems for Wave’s customers and internal teams using techniques like retrieval-augmented generation (RAG) and agentic design patterns (see the sketch after this list).
- Unified API Development: Contribute to the development of a unified API to provide secure, scalable, and reliable access to AI models, empowering cross-functional teams to integrate AI into their products and services.
- Model Reliability, Compliance, and Risk Mitigation: Monitor the reliability of all deployed models, enhancing their observability and explainability to meet ethical and regulatory standards. Identify and mitigate risks associated with AI systems, including concerns about bias, performance, and economics, while upholding responsible AI principles and security standards.
- Mentorship and Best Practices: Mentor other engineers, sharing best practices for model development, deployment, and monitoring in both traditional ML and GenAI contexts.
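To make the RAG responsibility above a bit more concrete, here is a minimal, illustrative sketch of a retrieval-augmented call through the Amazon Bedrock Converse API using boto3. The model ID, prompt layout, and the pre-retrieved passages are assumptions for illustration, not a description of Wave’s actual stack; a production system would retrieve the passages from a vector or feature store first.

```python
import boto3

# Illustrative sketch only: a RAG-style call via the Amazon Bedrock Converse API.
# Assumes AWS credentials are configured and that relevant passages were already
# retrieved (e.g. from a vector store); the model ID is an example, not a requirement.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer_with_context(question: str, retrieved_docs: list[str]) -> str:
    # Fold the retrieved passages into the prompt -- the "retrieval-augmented" part.
    context = "\n\n".join(retrieved_docs)
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": f"Context:\n{context}\n\nQuestion: {question}"}],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]
```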
Succeeding at Wave: You will have the opportunity to continuously learn and grow by exploring cutting-edge ML frameworks, developing innovative GenAI projects, and mentoring and guiding peers. Whether you are working remotely or collaborating in our downtown Toronto innovation hub, you will have the freedom to define your path to success. We value diverse perspectives, and we encourage open, respectful feedback to foster an inclusive environment where innovation thrives and every team member can grow.
You Thrive Here By Possessing the Following:
- 5+ years of experience in building and deploying production-grade machine learning systems.
- Strong Python and SQL skills, with familiarity in containerization tools like Docker, and an understanding of secure model access via APIs.
- Familiarity with AWS cloud services, including SageMaker, Amazon Bedrock, and Amazon Q.
- Experience automating MLOps pipelines and deploying models for both batch and real-time inference.
- Hands-on expertise in developing GenAI proofs-of-concept and taking them through to production, as well as experience scaling traditional ML models in production.
- A solid understanding of AI explainability, including experience with model interpretability techniques, observability, and monitoring to ensure performance and reliability.
- Familiarity with applying FinOps concepts to optimize cloud resources, reduce costs, and ensure the efficient scaling of AI models.
- An appreciation for both the opportunities and challenges associated with Generative AI, including the ability to identify risks, manage ethical considerations, and maintain a balanced approach to applying new technologies.
- A collaborative mindset, with strong communication skills and a passion for mentoring and growing the capabilities of the ML team.
Nice to Haves:
- Experience with parameter-efficient fine-tuning techniques, such as Low-Rank Adaptation (LoRA); see the sketch after this list.
- Understanding of modern deep learning architectures, including transformers and Mamba.
- Experience publishing or presenting at leading conferences such as NeurIPS, ICML, ICLR, AAAI, or ACL.
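As a rough illustration of the LoRA item above, the snippet below attaches low-rank adapters to a small open model with the Hugging Face peft library. The base model and hyperparameters are placeholder assumptions chosen for the example, not a prescribed setup.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative only: parameter-efficient fine-tuning with LoRA adapters.
# The base model and hyperparameters below are example assumptions.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```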
About the company
- B2B
- Scale Stage: Rapidly increasing operations
- Top Investors: This company has received a significant amount of investment from top investors