
• Design and develop high-quality, secure, scalable software solutions based on technical requirements specifications and design artifacts, within expected time and budget
• Support and work on cloud data platforms (Databricks/Snowflake), adhering to industry standards.
• Build a governance framework for cloud data storage and usage for enterprise data platforms.
• Collaborate on cross-functional agile teams that include Developers and Product Owners
• Stay abreast of data platform technology trends and industry best practices to hone and maintain your skills
• Participate in architectural discussions, iteration planning, and feature sizing meetings
• Plan, manage, and oversee all aspects of a Production Environment for the Program
• Find opportunities for continuous optimization in a production environment.
• Understand MTTR, SLO, and SLI definitions and apply them to services (a short illustrative sketch follows this list).
• Respond to incidents, improve the platform based on feedback, and measure the reduction of incidents over time.
• Maintain services once they are live by measuring and monitoring availability, latency and overall system health.
• Practice sustainable incident response and blameless postmortems.
• Engage in and improve the whole lifecycle of services—from inception and design, through deployment, operation and refinement.
• Analyze ITSM activities of the platform and provide a feedback loop to development teams on operational gaps or resiliency concerns.
• Support the application CI/CD pipeline for promoting software into higher environments through validation and operational gating, and lead Mastercard in DevOps automation and best practices.
• Knowledge of industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, and Artifactory.
• Engage in development, automation, and business process improvement.
• Increase automation and tooling to reduce manual intervention.
• Competent in Linux and Unix Shell Scripting
• Take a holistic approach to problem solving: during a production event, connect the dots across the various technology stacks that make up the platform to optimize mean time to recover.
• Work with a global team spread across tech hubs in multiple geographies and time zones.
• Ability to share knowledge and explain processes and procedures to others.
• Support application health, performance, and capacity.
• Assist in system design consulting, capacity planning, and launch reviews.
• Collaborate with development and product teams to establish monitoring and alerting strategies.
• 2-4 years of hands-on experience with cloud data platforms
• Hands-on experience in Databricks and AWS, and a deep understanding of their architecture
• Experience managing, developing, and supporting data lakes, lakehouses, and warehouses (on-premise and in the cloud), ETL solutions, and other analytics solutions.
• Experience working with development/engineering teams in a global environment
• In-depth practical experience with native cloud services (Azure and/or AWS).
• Strong understanding of cloud DevOps practices and implementation.
• Experience in cloud data migration for large enterprise data warehouse environments.
• Understanding of cloud data governance, including compute optimization, cost control, user profile management, etc.
• In-depth understanding of cloud operations strategy and user management across various cloud providers and cloud platforms.
• Strong understanding of, and hands-on experience with, cloud security models, encryption strategy, network layers for a hybrid on-prem/cloud model, and other related concepts.
• Experience establishing a new engineering team and bringing it to a steady state
• Understanding of database technologies like Hadoop, Oracle, and SQL Server databases.
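
The MTTR/SLO/SLI responsibility above reads more concretely with a small worked example. Below is a minimal, illustrative Python sketch of an availability SLI and the corresponding error-budget arithmetic, assuming a hypothetical 99.9% availability SLO; the function names and figures are illustrative assumptions, not tied to any specific platform.

    # Minimal sketch: availability SLI and error-budget arithmetic.
    # All values are illustrative assumptions (hypothetical 99.9% SLO).

    def availability_sli(successful_requests: int, total_requests: int) -> float:
        # SLI: fraction of requests served successfully in the measurement period.
        return successful_requests / total_requests

    def error_budget_remaining(sli: float, slo_target: float) -> float:
        # Share of the period's error budget still unspent.
        allowed_failure = 1.0 - slo_target   # e.g. 0.001 for a 99.9% SLO
        actual_failure = 1.0 - sli
        return 1.0 - (actual_failure / allowed_failure)

    if __name__ == "__main__":
        sli = availability_sli(successful_requests=999_250, total_requests=1_000_000)
        print(f"SLI: {sli:.4%}")                                              # 99.9250%
        print(f"Error budget left: {error_budget_remaining(sli, 0.999):.1%}")  # 25.0%

Burning through the error budget faster than expected is the kind of signal that should feed the incident-response and platform-improvement loop described in the responsibilities above.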