Urgent: Looking for C2C Candidates | Data Engineer | Plano, TX (Onsite, 5 days)

Contract

Data Engineer Contract Jobs

Data Engineer

Location: Plano, TX (Onsite, 5 days)

CTS/Toyota

 

Certifications Required: Databricks Certified Associate Developer for Apache Spark, AWS Certified Solutions Architect

Job summary:

 

Required Skills: Amazon S3, PySpark, Databricks Workflows, Databricks CLI, Databricks Unity Catalog Admin, Databricks Delta Lake, Databricks SQL, Spark in Scala

***Primarily looking for Spark in Scala with Databricks and AWS experience.

We are seeking a highly skilled Data Engineer with 8 to 12 years of experience to join our dynamic team.

The ideal candidate will have extensive experience in Spark in Scala, Databricks SQL, Databricks Delta Lake, Databricks Unity Catalog Admin, Databricks CLI, Databricks Workflows, PySpark, and Amazon S3.

This role requires a strong technical background and the ability to work effectively in a collaborative environment.

 

Responsibilities:

Lead the development and implementation of data processing workflows using Spark in Scala and Databricks.

Oversee the management and optimization of data storage solutions with Databricks Delta Lake.

Provide expertise in Databricks SQL to enable efficient querying and data analysis.

Administer Databricks Unity Catalog to ensure data governance and security.

Utilize Databricks CLI for seamless integration and automation of data workflows.

Develop and manage Databricks Workflows to streamline data processing tasks.

Implement PySpark solutions to enhance data processing capabilities.

Manage and maintain data storage on Amazon S3, ensuring data integrity and accessibility.

Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.

Ensure data quality and consistency across various datasets.

Provide technical support and troubleshooting for data-related issues.

Stay updated with the latest advancements in data technologies and tools.

Contribute to the continuous improvement of data processing and analysis methodologies.

 

Qualifications:

Possess a strong background in Spark in Scala with proven experience in developing data processing workflows.

Demonstrate expertise in Databricks SQL for efficient data querying and analysis.

Have extensive experience with Databricks Delta Lake for data storage optimization.

Show proficiency in administering Databricks Unity Catalog for data governance.

Exhibit proficiency with the Databricks CLI for integration and automation of workflows.

Bring experience developing and managing Databricks Workflows to streamline data processing tasks.

Have hands-on experience implementing PySpark solutions to enhance data processing capabilities.

Possess experience managing data storage on Amazon S3, ensuring data integrity and accessibility.

Demonstrate the ability to collaborate effectively with cross-functional teams to deliver actionable insights.

Show a commitment to ensuring data quality and consistency across various datasets.

Be able to provide technical support and troubleshooting for data-related issues.

Stay updated with the latest advancements in data technologies and tools.

Be motivated to contribute to the continuous improvement of data processing and analysis methodologies.

To apply for this job, email your details to pratiksha.hatkar@nytpcorp.com.