Sr PySpark Developer (Big Data), AVP - C12 - Chennai
We are looking for a highly skilled PySpark Developer with deep expertise in distributed data processing. The ideal candidate will be responsible for optimizing Spark jobs and ensuring efficient data processing on a Big Data platform. This role requires a strong understanding of Spark performance tuning, distributed computing, and Big Data architecture.
Create data tools that help analytics and data science team members build and optimize our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data system.
- 8+ years of relevant experience in applications development or systems analysis, with the ability to adjust priorities quickly as circumstances dictate
Key Responsibilities:
- Analyze and comprehend existing data ingestion and reconciliation frameworks
- Develop and implement PySpark programs to process large datasets in Hive tables and Big Data platforms
- Perform complex transformations, including reconciliation and advanced data manipulations (a minimal sketch follows this list)
- Fine-tune Spark jobs for performance optimization, ensuring efficient data processing at scale
- Work closely with Data Engineers, Architects, and Analysts to understand data reconciliation requirements
- Collaborate with cross-functional teams to improve data ingestion, transformation, and validation workflows
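The reconciliation work above typically amounts to comparing two keyed datasets and flagging rows that are missing on one side or differ in a compared column. A minimal PySpark sketch, assuming hypothetical Hive tables source_db.trades and target_db.trades keyed on trade_id with a notional column (all names illustrative, not this team's actual framework):

```python
# Minimal sketch only: table names, the `trade_id` key, and the `notional`
# column are assumptions for illustration, not the team's actual framework.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("reconciliation-sketch")
    .enableHiveSupport()          # read from the Hive tables mentioned above
    .getOrCreate()
)

source = spark.table("source_db.trades").alias("s")
target = spark.table("target_db.trades").alias("t")

# Full outer join on the business key, then classify each row as matched,
# mismatched, or missing on one side.
recon = (
    source.join(target, F.col("s.trade_id") == F.col("t.trade_id"), "full_outer")
    .withColumn(
        "status",
        F.when(F.col("t.trade_id").isNull(), F.lit("missing_in_target"))
         .when(F.col("s.trade_id").isNull(), F.lit("missing_in_source"))
         .when(F.col("s.notional") != F.col("t.notional"), F.lit("mismatch"))
         .otherwise(F.lit("matched")),
    )
)

# Summary counts per reconciliation status
recon.groupBy("status").count().show()
```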
Required Skills and Qualifications:
- Extensive hands-on experience with Python, PySpark, and PyMongo for efficient data processing across distributed and columnar databases
- Expertise in Spark optimization techniques, with the ability to debug Spark performance issues and optimize resource utilization (see the tuning sketch after this list)
- Proficiency in Python and the Spark DataFrame API, with strong experience in complex data transformations using PySpark
- Experience with large-scale distributed data processing and a solid understanding of Big Data architecture and distributed computing frameworks
- Strong problem-solving and analytical skills
- Experience with CI/CD for data pipelines
- Experience with Snowflake for data processing and integration
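As an illustration of the optimization levers referenced above, here is a minimal DataFrame API sketch; the table names, columns, partition count, and broadcast threshold are assumptions for illustration, not prescribed settings:

```python
# Minimal tuning sketch: all table/column names, the partition count, and the
# broadcast threshold are assumed values for illustration.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("spark-tuning-sketch")
    # Let adaptive query execution coalesce shuffle partitions and handle skew.
    .config("spark.sql.adaptive.enabled", "true")
    # Raise the size threshold below which the smaller join side is broadcast.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

facts = spark.table("warehouse.transactions")   # large fact table (assumed)
dims = spark.table("warehouse.account_dim")     # small dimension table (assumed)

# Broadcast the small dimension explicitly to avoid shuffling the fact table,
# then control the shuffle width of the aggregation with an explicit repartition.
enriched = facts.join(F.broadcast(dims), on="account_id", how="left")

daily_totals = (
    enriched
    .repartition(200, "business_date")          # assumed partition count
    .groupBy("business_date", "account_type")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").partitionBy("business_date").parquet(
    "/data/output/daily_totals"                 # assumed output path
)
```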
Education:
- Bachelor’s/University degree in Computer Science, or equivalent experience
- Master’s degree preferred
------------------------------------------------------
Job Family Group:
Technology
------------------------------------------------------
Job Family:
Applications Development
------------------------------------------------------
Time Type:
Full time
------------------------------------------------------
Citi is an equal opportunity and affirmative action employer.
Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View the "EEO is the Law" poster. View the EEO is the Law Supplement.
View the EEO Policy Statement.
View the Pay Transparency Posting.