Clutch
Actively Hiring
Reinventing Car Buying and Selling
  • B2C
  • Scale Stage
    Rapidly increasing operations
  • Recently funded
    Raised funding in the past six months

Staff Data Engineer

Posted: 3 weeks ago
Visa Sponsorship

Not Available

Relocation

Allowed

About the job

About Clutch:

Clutch is Canada’s largest online used car retailer, delivering a seamless, hassle-free car-buying experience to drivers everywhere. Customers can browse hundreds of cars from the comfort of their home, get the right one delivered to their door, and enjoy peace of mind with our 10-Day Money-Back Guarantee… and that’s just the beginning.

Named one of Canada’s Top Growing Companies two years in a row and awarded a spot on LinkedIn’s Top Canadian Startups list, we’re looking to add curious, hard-working, and driven individuals to our growing team.

Headquartered in Toronto, Clutch was founded in 2017. Clutch is backed by a number of world-class investors, including Canaan, BrandProject, Real Ventures, D1 Capital, and Upper90. To learn more, visit clutch.ca.

What you'll do:

  • Lead the development, testing, and maintenance of complex data management solutions that support business goals and drive decision-making processes at scale.
  • Architect, design, and implement sophisticated ETL/ELT processes to manage complex data transformations, ensuring efficiency, reliability, and scalability of data pipelines.
  • Proactively identify and resolve critical data quality issues, implementing data governance practices and leading regular audits to maintain data accuracy and integrity across multiple data sources.
  • Optimize and evolve data integration processes, applying best practices for performance, scalability, and security to meet growing business demands.
  • Collaborate closely with data architects and other senior stakeholders to design and implement data transformations that align with evolving business requirements and future-proof data architecture.
  • Lead the adoption and implementation of modern data frameworks, including data lakes, data warehouses, and cloud-based architectures, to enhance business intelligence and analytics capabilities.
  • Champion the use of DevOps tools (e.g., Git, GitHub Actions, Docker) for code versioning, deployment automation, continuous integration and delivery, and real-time monitoring of data pipelines.
  • Document and standardize data definitions, processes, and solutions, driving the establishment of clear data standards and ensuring cross-team communication and knowledge sharing.
  • Ensure data solutions adhere to the highest standards for security, scalability, and reliability, guiding teams in following industry best practices and company policies.

What we're looking for:

  • Bachelor’s or Master’s degree in Computer Science, Mathematics, Information Systems, or a related technical field.
  • 5+ years of experience in data engineering, data architecture, or a related field, with demonstrated leadership experience in complex data initiatives.
  • Advanced programming skills in languages such as Python, TypeScript, and JavaScript, with a deep understanding of software engineering principles.
  • Expert-level SQL knowledge, with a proven track record of writing and optimizing complex queries and tuning database performance for large-scale systems (e.g., PostgreSQL, MySQL, SQL Server).
  • Extensive experience with modern database technologies, including both relational databases (e.g., Oracle, PostgreSQL, AWS Aurora) and NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB), as well as cloud-based data solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
  • In-depth experience with ETL/ELT tools such as Apache Airflow, Talend, or Informatica, and a demonstrated ability to design efficient data flows for large datasets.
  • Hands-on experience with cloud platforms such as AWS (preferred), GCP, or Azure, and deep knowledge of their data services (e.g., AWS Glue, AWS S3, SageMaker, Azure Data Factory, GCP Dataflow).
  • Familiarity with data warehousing and big data technologies, including Hadoop, Spark, and Kafka, for both real-time data streaming and batch processing.
  • Strong understanding of DevOps methodologies, with hands-on experience in using tools like Docker, Kubernetes, Datadog, Terraform, and GitHub Actions for managing and optimizing data pipeline deployments.
  • Proven ability to lead and mentor junior engineers, driving innovation and best practices in data engineering.

Why you’ll love it at Clutch:

  • Autonomy & ownership -- create your own path, and own your work
  • Competitive compensation and equity incentives!
  • Generous time off program
  • Health & dental benefits

Clutch is committed to fostering an inclusive workplace where all individuals have an opportunity to succeed. If you require accommodation at any stage of the interview process, please email [email protected].

About the company

Clutch

Reinventing Car Buying and Selling

Company Size
201-500
Company Type
E-Commerce
Company Industries
Cars

Funding

AMOUNT RAISED
$68.5M
FUNDED OVER
3 rounds
ROUNDS
Series A - $60,000,000 - Mar 2021 (+2 more rounds)

Founders

Steve Seibel
Founder • 3 years • 8 years
Toronto

Similar Jobs

SuperMoney
Helping people achieve their financial goals

TODAQ
Cryptographic Object Systems without Ledgers

deepPIXEL
deepPiXEL is an AI platform that uses AI to help companies and humans

GPTZero
GPTZero is building the verification layer for the world's information

Bounce
Social Experiences and P2P Payments in 3-Clicks

Fathom
Deep learning to automate medical coding