• Role focuses on feature engineering and data transformation for downstream analytics processes
• Minimum of 5 years of relevant experience
• Skills – PySpark, UNIX, Databricks, Azure platform, Azure Data Lake, Blob Storage, Python (good to have)
• Must be able to design ETL solutions based on requirements
• Must have strong SQL development experience
• Must be proficient in data analysis and data profiling
• Experience with Teradata or any other RDBMS is a plus
• Must be able to work independently and prioritize work across multiple projects