DB Team Data Engineer - Software Engineer-Other - Intermediate


Job Details

Job Description:

We are seeking a Data Engineer for a key client.

Job responsibilities

Execute software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems.

Write secure, high-quality code and maintain algorithms that run synchronously with the appropriate systems.

Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development.

Apply knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.

Apply technical troubleshooting to break down solutions and solve technical problems of basic complexity.

Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.

Proactively identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture.

Contribute to software engineering communities of practice and events that explore new and emerging technologies.

Add to team culture of diversity, equity, inclusion, and respect.

Requirements:

Required qualifications, capabilities, and skills

4 to 7 years of Spark on Cloud development experience

4 to 7 years of strong SQL skills; Teradata is preferred, but experience with any other RDBMS is acceptable

Proven experience in understanding requirements related to the extraction, transformation, and loading (ETL) of data using Spark on Cloud

Formal training or certification on software engineering concepts and 3+ years applied experience.

Ability to independently design, build, test, and deploy code. Should be able to lead by example and guide the team with his/her technical expertise.

Ability to identify risks/issues for the project and manage them accordingly.

Hands-on development experience and in-depth knowledge of Java/Python, Microservices, Containers/Kubernetes, Spark, and SQL.

Hands-on practical experience in system design, application development, testing, and operational stability

Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.

Proficient in coding in one or more programming languages

Experience across the whole Software Development Life Cycle

Proven understanding of agile methodologies such as CI/CD, Application Resiliency, and Security

Proven knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)

Preferred qualifications, capabilities, and skills

Knowledge of data warehousing concepts.

Experience with Agile-based project methodology.

Knowledge of or experience with ETL technologies such as Informatica or Ab Initio is preferable.

People management skills are preferred but not mandatory.

MUST have
- Teradata, DBMS knowledge
- Cloud knowledge - AWS preferred
- ETL knowledge
- CI/CD and data warehousing concepts
- Java & Spark

NICE to Have
- Ab Initio
- PostgreSQL knowledge
- Python

Required
- Datamart: Intermediate (7-8)
- PostgreSQL: Intermediate (7-8)
- Data Warehousing: Expert (9-10)

Preferred
- SQL: Intermediate (7-8)
- ETL Tools: Intermediate (7-8)





 ATR International

 05/07/2024

Columbus, OH