FAQs
What is the primary focus of the Python PySpark job role?
The primary focus is implementing and automating data ingestion solutions using Hadoop, Sqoop, Hive, Impala, and Spark, along with Linux/Unix shell scripting, SQL, and ETL processes.
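For illustration only, here is a minimal sketch of the kind of PySpark ingestion work the role describes: reading a raw file from HDFS and appending it to a Hive table. The job name, path, schema options, and table name are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession

# Hive support lets Spark write managed Hive tables directly.
spark = (
    SparkSession.builder
    .appName("daily_ingest")  # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Read a raw CSV landing file from HDFS (path is a placeholder).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("hdfs:///landing/orders/2024-01-01/")
)

# Light ETL step: drop fully empty rows, then append into a Hive table.
clean = raw.dropna(how="all")
clean.write.mode("append").saveAsTable("analytics.orders")

spark.stop()
```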
What qualifications are required for this position?
At least six years of relevant experience in Big Data and Hadoop frameworks, hands-on experience with data ingestion tools, and proficiency in Linux/Unix shell scripting are required.
Is knowledge of Python and R mandatory for this job?
Knowledge of Python and R is a plus but not mandatory for this position.
What methodologies does this role require experience in?
Experience in Agile Scrum methodology is required for this role.
Are there any additional tools or technologies I should be familiar with?
Familiarity with ServiceNow, Jenkins, Git, Bitbucket, JIRA, and other DevOps tools is beneficial for this position.
Is experience in debugging and performance tuning of big data pipelines required?
Yes, hands-on experience in debugging, performance tuning, and troubleshooting big data pipelines is required.
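As a hedged sketch of what such tuning work can involve (the table, config value, and grouping logic below are hypothetical examples, not guidance from this posting), common first steps in PySpark include inspecting the physical plan, adjusting shuffle parallelism, and caching reused data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

# Reduce shuffle partitions for a modest data volume (Spark's default is 200).
spark.conf.set("spark.sql.shuffle.partitions", "64")

df = spark.range(1_000_000).withColumnRenamed("id", "order_id")

# Inspect the physical plan to spot expensive shuffles or full scans.
df.groupBy((df.order_id % 10).alias("bucket")).count().explain()

# Cache a DataFrame that several downstream actions reuse.
df.cache()
print(df.count())  # the first action materializes the cache

spark.stop()
```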
What work environment does Virtusa promote?
Virtusa promotes teamwork and professional and personal development, and values collaboration and a dynamic environment that nurtures new ideas.
Does Virtusa have a non-discrimination policy?
Yes, Virtusa has a firm non-discrimination policy: all employment decisions are based on qualifications, merit, and business needs, without regard to race, gender, or other protected categories.