
Job Title: Data Engineer

Location: Cincinnati, OH – Day 1 onsite

 

We are seeking a candidate with 8-10+ years of experience in the relevant field.

 

Job Description:

 

We are seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data analytics, expertise in DataStage, and familiarity with Donedo or similar technologies. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines and analytics solutions that enable actionable insights and strategic decision-making.

 

Responsibilities:

  • Design and implement scalable and robust data pipelines using IBM DataStage to extract, transform, and load (ETL) data from various sources into our data warehouse.
  • Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to understand data requirements and deliver integrated solutions.
  • Develop data models, schemas, and architectures to optimize data storage and retrieval for analytical purposes.
  • Utilize Donedo (or similar tools) for data visualization, reporting, and dashboard development to present insights effectively to stakeholders.
  • Ensure data quality and integrity throughout the data lifecycle by implementing data governance processes and best practices.
  • Participate in performance tuning and optimization of data processes to enhance efficiency and scalability.
  • Stay updated with industry trends and advancements in data engineering, analytics, and cloud technologies.

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field; Master’s degree preferred.
  • Proven experience as a Data Engineer or similar role with expertise in IBM DataStage for ETL processes.
  • Hands-on experience with Donedo or similar data visualization and analytics platforms.
  • Strong understanding of data warehousing concepts, data modeling, and database design principles.
  • Proficiency in SQL and scripting languages (Python, Java, etc.) for data manipulation and analysis.
  • Experience with cloud platforms (AWS, Azure, GCP) and big data technologies (Hadoop, Spark) is a plus.
  • Excellent analytical and problem-solving skills with the ability to work in a fast-paced environment.
  • Strong communication skills and ability to collaborate effectively with technical and non-technical stakeholders.