BI Engineer - Python/PySpark Developer (Gurgaon location)
Our client: the world's largest travel technology company
• Overall 5+ years of IT experience across a variety of industries, including hands-on experience in Big Data analytics and development
• 2+ years of experience writing PySpark for data transformation
• 2+ years of experience with detailed knowledge of data warehouse technical architectures, ETL/ELT, reporting/analytics tools, and data security
• 2+ years of experience in designing data warehouse solutions and integrating technical components
• 2+ years of experience leading data warehousing and analytics projects, including AWS technologies such as Redshift, S3, EC2, Data Pipeline, and other big data technologies
• 1+ year of experience with BI implementation in the cloud
• Exposure to at least one ETL tool such as Informatica will be a plus
• Exposure to a cloud data warehouse will be a plus
• Exposure to at least one reporting tool such as QlikView, Tableau, or similar will be a plus
• Familiarity with Linux/Unix scripting
• Expertise with tools in the Hadoop ecosystem, including Pig, Hive, HDFS, MapReduce, Sqoop, Storm, Spark, Kafka, YARN, Oozie, and ZooKeeper