● Responsible for support and operations of several hundred critical data analytics applications, machine learning models, APIs, and microservices built on an open-source big data platform (Hadoop, YARN, Spark, Airflow, Kafka, Aerospike, MariaDB, EDB, HBase, etc.) running on on-premises OpenShift VMs and PCF containers.
● Responsible for big data application operations architecture, observability automation, capacity planning, and cost optimization to continuously improve stability, efficiency, and service level objectives.
● Provide best-in-class user support for the big data analytics and streaming applications running on our Hadoop ecosystem.
● Troubleshoot incidents, facilitate blameless post-mortems, and ensure appropriate remediation.
● Engage with development teams throughout the life cycle to help develop software for reliability and scale, ensuring minimal refactoring or changes.
● Identify application patterns and analytics in support of better service level objectives.
● Design and implement auto-scaling, self-healing, and resiliency patterns.
● Design and implement fully automated software and product upgrades, change management, and release management solutions for continuous integration and delivery.
● Develop and implement migration to public cloud (AWS or GCP).
● Coach and lead teams by sharing in the collective team vision and successfully promoting the why and how to all teams.
● Overall 15+ years of experience, with 4+ years leading technical teams.
● Engineering/Computer Science degree or equivalent experience.
● 5+ years of scripting/automation experience (Bash, Python, or Perl).
● Strong programming experience in one or more of: Java, Python, Scala.