What we're looking for
Are you fascinated by data and the thought of building real-time data pipelines that can process massive amounts of data at scale for one of the largest online travel platforms? This is exactly what we do in the Marketplace Dynamics Data Engineering team where our mission is to accelerate data and AI capabilities across Expedia Group.
We are looking for a Software Development Engineer II to join the Marketplace Dynamics Data Engineering team in Bangalore. Our team builds and maintains data and software products that unlock opportunities for personalized customer experiences and marketing performance optimization. Come join us on our journey to create the most intelligent travel platform in the world.
We believe in being different and challenging the status quo. We seek new ideas, different ways of thinking, diverse backgrounds and approaches. Expedia is committed to crafting an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
What you'll do
Work with a team of skilled and experienced Big Data engineers to design and build large-scale, real-time data pipelines in the cloud.
Accountable for individual tasks and assignments as well as your team's overall productivity.
Prototype creative solutions quickly by developing minimum viable products.
Resolve problems and roadblocks as they occur with help from peers; follow through on details and drive issues to closure.
Define, develop, and maintain artifacts such as technical designs and user documentation, and pursue continuous improvement in software and the development process with an agile mindset.
Communicate and work effectively with geographically distributed multi-functional teams.
Who you are
Bachelor's degree or higher in computer science or a related field.
5–7+ years of relevant work experience in Big Data or distributed-computing projects.
3+ years' experience building real-time streaming applications, preferably with Spark and Kafka/Kafka Streams.
Comfortable programming in Scala or Java, with hands-on experience in OOAD and design patterns.
Knowledgeable in Hadoop-ecosystem technologies, particularly Hive and Spark.
Passionate about learning, especially in the areas of microservices, REST APIs, design, and architecture.
Experience working with a NoSQL database (e.g., Cassandra, MongoDB, Elasticsearch).
Experience using cloud services (e.g., AWS).
Passionate about working with data and data technologies.