Primary Skills required
- Proficient in Hadoop and its ecosystem (HDFS, Hive, HBase, Pig, ZooKeeper, Spark).
- Experience working with different Hadoop distributions, such as Cloudera, Hortonworks, and MapR.
- Working knowledge of Python and shell scripting (Bash). Experience with Java is a plus.
- Prior working experience with any or all of EC2, S3, EBS, ELB, RDS, DynamoDB, and EMR.
- Installing and configuring Spark, Hive, Kafka, Oozie, YARN, HDFS, and other components using Cloudera Manager or Apache Ambari.
- Hands-on big data experience: installation, operations, troubleshooting, and tuning of key platforms/tools such as CDH, HDP, MapR, EMR, HDI, Databricks, and Dataproc.
- Some experience with NoSQL technologies such as DynamoDB, Redis, MongoDB, and Cassandra.
- Focused on troubleshooting directly with customers, interacting via multi-channel issue/case management. Examples include installation and upgrade issues, configuration and environmental issues, and end-user operational and app/job tuning issues.
- Diagnose, troubleshoot and resolve issues related to our platforms.
- Participate in an on-call rotation.
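The troubleshooting bullets above often boil down to matching known failure signatures in job or daemon logs. A minimal sketch of that idea in Python is shown below; the log lines and the pattern-to-hint mapping are hypothetical examples, not an actual product's rule set.

```python
import re

# Hypothetical mapping of log patterns to common failure categories.
# Real triage rules depend on the platform (YARN, Spark, HDFS, etc.).
FAILURE_PATTERNS = {
    r"java\.lang\.OutOfMemoryError": "memory: raise container/executor memory or reduce partition size",
    r"Connection refused": "network/service: check that the daemon is up and the port is open",
    r"Permission denied": "auth/ACL: verify HDFS permissions and the submitting user",
    r"No space left on device": "disk: clear local dirs or add capacity",
}

def triage(log_lines):
    """Return (pattern, hint) pairs for every known failure signature found."""
    hits = []
    for line in log_lines:
        for pattern, hint in FAILURE_PATTERNS.items():
            if re.search(pattern, line):
                hits.append((pattern, hint))
    return hits

# Illustrative log excerpt (fabricated for the sketch).
sample_log = [
    "ERROR Executor: Exception in task 3.0 in stage 1.0",
    "java.lang.OutOfMemoryError: Java heap space",
]
for _pattern, hint in triage(sample_log):
    print(hint)
```

A real support workflow would feed this kind of classifier from log aggregation rather than a hard-coded list, but the shape (signature, then remediation hint) is the same.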
Other skills required
- Strong Linux/Unix system administrator skills
- Strong Hadoop troubleshooting skills
- Customer service experience / strong customer focus
- Working knowledge of a data integration solution or any ETL tool is a plus.
- Supporting and troubleshooting client issues.
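The ETL knowledge asked for above can be summarized as three stages: extract raw records, transform (clean and cast) them, and load them into a target store. A toy sketch, using only the Python standard library and a fabricated two-row dataset; real ETL tools add scheduling, lineage, and error handling on top of this shape.

```python
import csv
import io
import sqlite3

# Fabricated raw input for the sketch (note the stray whitespace to clean up).
RAW = "id,amount\n1, 10.5 \n2,3.25\n"

def extract(text):
    """Extract: parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: strip whitespace and cast fields to proper types."""
    return [(int(r["id"]), float(r["amount"].strip())) for r in rows]

def load(rows, conn):
    """Load: insert into the target table and return a sanity-check total."""
    conn.execute("CREATE TABLE IF NOT EXISTS payments (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    return conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]

conn = sqlite3.connect(":memory:")
total = load(transform(extract(RAW)), conn)
print(total)  # 13.75
```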
Soft Skills required
- Self-starter who is excited about learning new technologies on a regular basis
- Work on critical, highly complex customer problems that may span multiple services
- Drive customer interactions via phone, chat, email, screen shares, and conference calls.
- Apply advanced troubleshooting techniques to provide unique solutions for our customers' individual needs.
- Excellent oral and written communication skills
- Learn and use groundbreaking technologies
- Write tutorials, how-to videos, and other technical articles for the customer community.