DevOps Engineer
- ₹10L – ₹20L • No equity
- Onsite or remote
- 2 years of experience
- Full Time
About the job
About Us:
Solulever is transforming the manufacturing industry with intelligent, data-driven solutions. As a leader in manufacturing connectivity and intelligence, we provide cutting-edge tools to enhance operational efficiency, scalability, and productivity. We are looking for a skilled DevOps Engineer with a strong understanding of big data pipelines and machine learning environments to join our team and help shape the future of intelligent manufacturing.
Job Description:
We are seeking a DevOps Engineer with experience in managing and optimizing big data pipelines and machine learning environments. The ideal candidate will have a strong foundation in DevOps practices, cloud infrastructure, and automation, as well as hands-on experience with data-intensive applications. You will play a crucial role in designing, implementing, and maintaining robust data pipelines and scalable ML infrastructure, enabling our data science and engineering teams to deliver high-performance solutions.
Key Responsibilities:
• Design, deploy, and maintain scalable infrastructure for big data and machine learning environments.
• Manage and optimize data pipelines to support large-scale data processing using tools like Apache Flink, Hadoop, and Kafka (see the sketch after this list).
• Implement and monitor machine learning model deployment and lifecycle management, ensuring high availability and scalability.
• Collaborate with data scientists, data engineers, and software developers to create seamless integrations and support data-driven projects.
• Develop CI/CD pipelines to automate deployment processes for machine learning models and big data workflows.
• Monitor and troubleshoot production environments, ensuring optimal performance, security, and reliability.
• Implement infrastructure as code (IaC) using tools such as Terraform, Ansible, or CloudFormation.
• Optimize resource utilization and cost efficiency across cloud platforms (AWS, GCP, Azure).
• Stay updated on best practices in DevOps, big data, and MLOps to continuously improve infrastructure and processes.
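As a small illustration of the pipeline work described above, here is a minimal sketch of a Kafka producer publishing machine telemetry for downstream processing. It assumes the kafka-python package and a broker at localhost:9092; the topic name and payload shape are hypothetical, not details from this posting.

```python
# Minimal sketch: publish manufacturing telemetry to Kafka for a
# downstream pipeline (e.g. a Flink job). Assumes `pip install kafka-python`
# and a broker at localhost:9092 -- both assumptions for illustration.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    # Serialize dicts to JSON bytes so consumers can parse records easily.
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Hypothetical topic and record shape, for illustration only.
reading = {"machine_id": "press-07", "temp_c": 71.4, "ts": time.time()}
producer.send("machine-telemetry", value=reading)

# Block until buffered messages are actually delivered.
producer.flush()
```

In practice, a consumer such as a Flink job would subscribe to the same topic and aggregate or enrich the readings before they land in storage.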
Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• 3+ years of experience in DevOps, with a focus on big data and machine learning environments.
• Strong knowledge of cloud platforms (AWS, GCP, Azure) and cloud services relevant to data processing and machine learning.
• Hands-on experience with big data tools such as Apache Flink, Hadoop, Kafka, Druid, and HBase.
• Proficiency in automation tools like Terraform, Ansible, and CI/CD pipelines.
• Familiarity with MLOps tools and frameworks such as MLflow, Kubeflow, or TensorFlow Serving (see the sketch after this list).
• Solid scripting skills in Python, Bash, or similar languages.
• Strong understanding of containerization and orchestration (Docker, Kubernetes).
• Experience with monitoring tools like Prometheus, Grafana, or CloudWatch.
• Excellent troubleshooting, problem-solving, and communication skills.
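To make the MLOps familiarity concrete, the sketch below logs a trained model, its parameters, and a metric with MLflow's tracking API. It assumes mlflow and scikit-learn are installed; the experiment name and synthetic dataset are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: track a model training run with MLflow. Assumes
# `pip install mlflow scikit-learn`; the experiment name is a
# hypothetical placeholder, not a detail from the posting.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-quality-model")  # hypothetical name

# Synthetic data stands in for real plant data.
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Persist the fitted model so it can be registered or served later.
    mlflow.sklearn.log_model(model, "model")
```

Logged runs can then be compared in the MLflow UI, and a logged model can be promoted through a registry toward serving, which is the lifecycle-management work this role covers.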
Preferred Qualifications:
• Experience with data lake architectures and distributed storage systems.
• Familiarity with data engineering workflows and ETL tools.
• Knowledge of data security, governance, and compliance best practices.
• Understanding of Agile methodologies and collaborative DevOps culture.
What We Offer:
• Competitive salary and comprehensive benefits package.
• Opportunities to work with cutting-edge technologies in big data and machine learning.
• Flexible work environment with remote work options.
• Growth and development opportunities in an innovative, fast-paced environment.
• Collaborative culture with talented colleagues who value innovation.