- B2B
- Scale Stage: Rapidly increasing operations
Senior Data Engineer
- Full Time
About the job
A market leader in credit intelligence, Reorg brings together journalists, financial analysts, legal analysts, technologists, and data scientists to collect and synthesize highly complex information into actionable intelligence. Since 2013, tens of thousands of professionals across hedge funds, investment banks, management consulting, and law firm verticals have come to rely on Reorg to make better, faster, and more confident decisions that keep pace with fast-moving credit markets. For more information, visit: www.reorg.com
Working at Reorg
Consistent with our growth, Reorg hires innovators and trailblazers across the globe to drive our business and our incredible corporate culture alike. Our core values – Action Oriented, Customer First Mindset, Effective Team Players, and Driven to Excel – define an organizational ethos that’s as high-performing as it is human. Among other perks, Reorg employees enjoy competitive health benefits, matched 401(k) and pension plans, paid time off, generous parental leave, gym subsidies, educational reimbursements for career development, recognition programs, pet-friendly offices, and much more.
The Role
We are seeking a highly skilled and experienced Senior Data Engineer with a strong background in building and managing data pipelines, data warehouses, and data lakes. As a Senior Data Engineer, you will play a pivotal role in our organization's data infrastructure, enabling efficient and reliable data processing, storage, and analysis.
Responsibilities
- Design and develop robust, scalable, and efficient data pipelines to support the extraction, transformation, and loading (ETL) processes from various data sources into data warehouses and data lakes.
- Collaborate closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and design optimal solutions.
- Build and manage data warehouses and data lakes to store and organize large volumes of structured and unstructured data efficiently.
- Implement data governance processes and best practices to ensure data quality, integrity, and security throughout the data lifecycle.
- Identify and address performance bottlenecks, data inconsistencies, and data quality issues in data pipelines, warehouses, and lakes.
- Develop and maintain monitoring and alerting systems to proactively identify and resolve data-related issues.
- Continuously evaluate and explore emerging technologies and tools in the data engineering space to improve data processing efficiency and scalability.
- Mentor and guide junior data engineers, providing technical leadership and fostering a collaborative and innovative environment.
Requirements
- Bachelor's degree in Computer Science or a related field.
- Proven experience (minimum 5 years) in building and managing data pipelines, data warehouses, and data lakes in a production environment.
- Proficiency in programming languages such as Python and SQL, and experience with data processing frameworks like Apache Spark or Apache Beam.
- Experience with ETL/ELT frameworks and tools such as AWS Glue, dbt, Airflow, and Airbyte.
- In-depth knowledge of relational databases (e.g., MySQL, PostgreSQL) and experience with columnar storage technologies (e.g., Redshift, Snowflake).
- Strong understanding of distributed systems, data modeling, and database design principles.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and experience in deploying data infrastructure on the cloud.
- Experience with containerization technologies like Docker and container orchestration systems like Kubernetes.
- Excellent problem-solving and troubleshooting skills, with a strong attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
- Ability to adapt to a fast-paced and rapidly changing environment.
Preferences
- Experience with real-time data processing frameworks such as Kafka and Spark.
- Knowledge of data warehousing concepts and technologies, such as data modeling, star schemas, and OLAP.
- Familiarity with big data technologies and frameworks, such as Hadoop, Hive, and Presto.
Reorg provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Reorg complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.