Senior Data Lake Engineer (3+ years exp)
$70k – $100k • 0.25% – 0.5%
Published: 7 months ago

ExperienceFlow.ai

Digital Nervous System: Autonomously drive optimal financial & customer outcomes

Job Location

Job Type

Full Time

Visa Sponsorship

Available

Remote Work Policy

Onsite or remote

Hires remotely in

Preferred Timezones

Eastern Time

Relocation

Allowed

Skills

Python
ETL
Snowflake
AWS/EC2/ELB/S3/DynamoDB
Apache Spark
Elasticsearch
ArangoDB
Apache Kafka
Data Extraction
ETL/ELT
Pinot
Airbyte
Trino

The Role

Job Description
Do you love to program? Are you excited about new technology experimentation? Are you looking for a new challenge that stretches your talents? Then this could be the role for you.
At ExperienceFlow.ai, we are looking for a Big Data Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.
Objectives of this Role
• Develop and implement data pipelines that extract, transform, and load data into information products that help the organization reach its strategic goals
• Ingest, store, process, and analyze large data sets
• Create scalable and high-performance web services for tracking data
• Translate complex technical and functional requirements into detailed designs
• Investigate and analyze alternative solutions for data storage, processing, and related concerns to ensure the most streamlined approaches are implemented
• Mentor junior staff by conducting technical training sessions and reviewing project outputs
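To make the pipeline duties above concrete, here is a minimal extract-transform-load sketch in Python. The record fields, the rounding step, and the in-memory source and sink are hypothetical illustrations, not ExperienceFlow.ai's actual stack or schema:

```python
# Minimal ETL sketch: extract raw records, transform them into an
# analysis-ready shape, and load them into a target store.
# All field names (id, revenue, customer_id, revenue_usd) are invented
# for illustration only.

def extract(source_rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(source_rows)

def transform(records):
    """Transform: clean and normalize each record."""
    cleaned = []
    for rec in records:
        cleaned.append({
            "customer_id": rec["id"],
            # Parse the string amount and round to cents.
            "revenue_usd": round(float(rec["revenue"]), 2),
        })
    return cleaned

def load(records, sink):
    """Load: write transformed records into a target store (a dict keyed by id)."""
    for rec in records:
        sink[rec["customer_id"]] = rec
    return sink

# Run the pipeline end to end on sample data.
raw = [{"id": 1, "revenue": "19.994"}, {"id": 2, "revenue": "5.5"}]
warehouse = load(transform(extract(raw)), {})
```

A production version of this role's pipelines would swap the in-memory stages for real connectors (e.g., Airbyte sources, Spark transforms, a Snowflake sink), but the three-stage shape is the same.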
Daily and Monthly Responsibilities
• Develop and maintain data pipelines implementing ETL processes
• Take responsibility for Hadoop development and implementation
• Work closely with the data science team to implement data analytics pipelines
• Help define data governance policies and support data versioning processes
• Maintain security and data privacy, working closely with the internal Data Protection Officer
• Analyze a wide range of data stores and uncover insights
Skills and Qualifications
• Experience in Python, Spark and Hive
• Understanding of data warehousing and data modeling techniques
• Knowledge of industry-standard analytics and visualization tools (e.g., Tableau and R)
• Strong data engineering skills on the Azure Cloud Platform are essential
• Experience with streaming frameworks such as Kafka
• Knowledge of core Java, Linux, SQL, and at least one scripting language
• Strong interpersonal skills and a positive attitude
Preferred Qualifications
• Degree in computer science, mathematics, or engineering
• Expertise in ETL methodology (data extraction, transformation, and load processing) for corporate-wide ETL solution design using IBM DataStage
