
Posted Mar 3, 2025

Data Engineer

United States · Full-time

Tags: AWS Lambda · Docker / ECS · MSK · Airflow · Databricks · Unity Catalog · Python (Pandas/NumPy, Boto3, SimpleSalesforce) · Databricks (pySpark, pySQL, DLT) · Apache Spark · Kafka and the Kafka Connect ecosystem (schema registry and Avro) · Terraform

Shamrock Trading Corporation is looking for a Data Engineer who wants to apply their expertise in data warehousing, data pipeline creation and support, and analytical reporting by joining our Data Services team. This role is responsible for gathering and analyzing data from several internal and external sources, designing a cloud-focused data platform for analytics and business intelligence, and reliably providing data to our analysts. It requires a strong understanding of data mining and analytical techniques. An ideal candidate will have business acumen, the ability to work effectively with cross-functional teams, and strong technical capabilities, including experience with Apache Kafka.

Responsibilities include but are not limited to:

  • Work with data architects to understand current data models and build pipelines for data ingestion and transformation.
  • Design, build, and maintain a framework for pipeline observation and monitoring, focusing on reliability and performance of jobs.
  • Surface data integration errors to the appropriate teams, focusing on:
    • Ensuring timely processing of new data
    • Performance of data pipelines
    • Integrity and quality of source data
  • Build and support data-lake style infrastructure using streaming data technologies, particularly Apache Kafka (a sketch of this kind of job follows this list)
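
As a rough illustration of the kind of job these responsibilities describe (and only an illustration: the topic, bucket, consumer-group, and broker names below are invented, and the posting does not prescribe a particular Kafka client library), a small Python landing job that consumes events from Kafka and writes raw batches to S3 with Boto3 might look like this:

    import json
    from datetime import datetime, timezone

    import boto3                      # AWS SDK, as listed in the Python qualifications
    from kafka import KafkaConsumer   # kafka-python; any Kafka client would work

    # Hypothetical names, for illustration only -- not taken from the posting.
    TOPIC = "orders.events"
    BUCKET = "example-data-lake-raw"
    BATCH_SIZE = 500

    def land_batch(s3, records):
        """Write one batch of raw events to the lake as newline-delimited JSON."""
        if not records:
            return
        key = f"raw/{TOPIC}/{datetime.now(timezone.utc):%Y/%m/%d/%H%M%S}.jsonl"
        body = "\n".join(json.dumps(r) for r in records).encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)

    def main():
        s3 = boto3.client("s3")
        consumer = KafkaConsumer(
            TOPIC,
            bootstrap_servers=["broker-1:9092"],   # e.g. an MSK bootstrap endpoint
            group_id="data-lake-landing",
            value_deserializer=lambda b: json.loads(b.decode("utf-8")),
            enable_auto_commit=False,
            auto_offset_reset="earliest",
        )
        batch = []
        for message in consumer:
            batch.append(message.value)
            if len(batch) >= BATCH_SIZE:
                land_batch(s3, batch)
                consumer.commit()                  # commit offsets only after a successful write
                batch = []

    if __name__ == "__main__":
        main()

A production version would add retries, dead-letter handling, and metrics; that operational layer is exactly what the observation and monitoring framework above is responsible for.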

Qualifications:

  • Bachelor’s degree in computer science, data science or related technical field, or equivalent practical experience
  • Experience building and maintaining AWS-based data pipelines; we currently use AWS Lambda, Docker / ECS, MSK, Airflow, Databricks, and Unity Catalog
  • Development experience utilizing two or more of the following (a brief PySpark sketch follows this list):
    • Python (Pandas/NumPy, Boto3, SimpleSalesforce)
    • Databricks (pySpark, pySQL, DLT)
    • Apache Spark
    • Kafka and the Kafka Connect ecosystem (schema registry and Avro)
    • Terraform (or other infrastructure as code platform)
  • Enthusiasm for working directly with customer teams (business units and internal IT)
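
To make the "two or more of the following" expectation concrete, the sketch below shows the kind of PySpark batch transformation that might run downstream of a raw landing job. It is only a sketch under assumed names: the source path, column names, and target table are invented here, not taken from the posting.

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical paths and table names, for illustration only.
    RAW_PATH = "s3://example-data-lake-raw/raw/orders.events/"
    TARGET_TABLE = "analytics.orders_daily"

    spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

    # Read the raw JSON events landed by the ingestion job and keep only well-formed rows.
    events = (
        spark.read.json(RAW_PATH)
        .filter(F.col("order_id").isNotNull())
        .withColumn("event_date", F.to_date("event_ts"))
    )

    # Roll up to a daily grain for analysts and BI tools.
    daily = (
        events.groupBy("event_date", "customer_id")
        .agg(
            F.count("order_id").alias("order_count"),
            F.sum("order_total").alias("order_total"),
        )
    )

    # On Databricks this would typically be written as a Delta table registered in Unity Catalog.
    daily.write.format("delta").mode("overwrite").saveAsTable(TARGET_TABLE)

The same logic could equally be expressed as a Delta Live Tables (DLT) pipeline, orchestrated from Airflow, or deployed through Terraform-managed jobs; the posting leaves those choices open.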

Preferred Qualifications:

  • Proven experience with relational and NoSQL databases (e.g. Postgres, Redshift, MongoDB)
  • Experience with version control (git) and peer code reviews
  • Familiarity with data visualization techniques using tools such as Grafana, Power BI, Amazon QuickSight, and Excel.

Benefits

No benefits provided.
