Publicis Sapient

OPENS AT: Jan 21, 12:30 PM

CLOSES AT: Jan 30, 06:25 PM

DURATION: 3h

Publicis Sapient Data Engineer Hiring Challenge

Online | LIVE | Participation is confidential
Backend Challenge

Role: Data Engineer
Experience: 4 years+
Compensation: Best in industry


ABOUT CHALLENGE

Experienced technologists: if you're ready for next-level impact, we're ready for you – register for the challenge now!


At Publicis Sapient, we believe good things happen when great minds come together. For 30 years, our secret to success has remained just that: by enabling our people to do the work that matters to them, we have unleashed an enduring culture of problem-solving creativity. We are on a mission to transform the world, and you will be instrumental in shaping how we do it.

Participate in our hiring challenge and create lasting impact!

If you are passionate about designing robust solutions that help organizations stay relevant in our evolving digital world – we can't wait for you to join us!

At Publicis Sapient, our Data Engineering teams have always been at the forefront of innovation, leveraging their collective problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients' most complex and challenging problems across diverse industries.

Register below if you have 4+ years of experience in data technologies – Hadoop, Big Data, Hive, Spark, Scala/Python, Kafka, NoSQL & cloud – and can join us.

Eligibility:

  • Candidates with 4+ years of experience can participate in our hiring challenge.
  • The candidate should:
    • have worked on Spark/Flink/Apache Beam
    • have worked on Python/Scala as a coding language
    • have worked on an MPP database (Redshift, Snowflake, BigQuery, etc.)
    • be able to write complex SQL queries (a minimal illustration follows this list)
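
To make the last two eligibility points concrete, here is a minimal, purely illustrative PySpark sketch, assuming PySpark is installed locally: it registers a small in-memory dataset as a table and answers a question with a window-function query. The dataset, table, and column names are invented for the example and have no connection to the actual challenge questions.

```python
# Illustrative only: a window-function query of the kind the eligibility
# list alludes to, run through PySpark. All names here are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-skill-sketch").getOrCreate()

# A tiny stand-in dataset; in practice this would live in an MPP
# warehouse such as Redshift, Snowflake, or BigQuery.
orders = spark.createDataFrame(
    [(1, "alice", 120.0), (2, "alice", 80.0), (3, "bob", 200.0)],
    ["order_id", "customer", "amount"],
)
orders.createOrReplaceTempView("orders")

# Keep each customer's single largest order using ROW_NUMBER().
top_orders = spark.sql("""
    SELECT order_id, customer, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY customer
                                  ORDER BY amount DESC) AS rn
        FROM orders
    ) ranked
    WHERE rn = 1
""")
top_orders.show()
```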

Challenge Format:

  • 10 MCQs
  • 1 Programming Question
  • 1 SQL Question 

OPEN POSITION

Data Engineer
Experience: 4 years+
Compensation: Best in industry
Job Location: Gurgaon/Noida/Bangalore

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the overall health of the solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, and data wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.

Role & Responsibilities:

Your role will focus on the design, development, and delivery of solutions involving:

  • Data Ingestion, Integration and Transformation
  • Data Storage and Computation Frameworks, Performance Optimizations
  • Analytics & Visualizations
  • Infrastructure & Cloud Computing
  • Data Management Platforms
  • Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time (see the streaming sketch after this list)
  • Build functionality for data analytics, search and aggregation
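
As a hedged illustration of the batch & real-time ingestion responsibility above – a sketch under stated assumptions, not a prescribed design – the following PySpark Structured Streaming job subscribes to a Kafka topic and lands the messages as Parquet. The broker address, topic name, and output paths are placeholders, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# A minimal real-time ingestion sketch with Spark Structured Streaming.
# Assumptions: a Kafka broker at localhost:9092, a topic named "events",
# and the spark-sql-kafka-0-10 connector available -- all placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Subscribe to the topic and decode the binary key/value payloads.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Land the stream as Parquet; the checkpoint enables fault-tolerant recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "/tmp/events")
    .option("checkpointLocation", "/tmp/events_ckpt")
    .start()
)
query.awaitTermination()
```

Batch ingestion of the same topic looks almost identical: swap readStream/writeStream for read/write, and the rest of the pipeline stays the same.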

Experience Guidelines: 

Mandatory Experience and Competencies:

  1. Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies
  2. Minimum 1.5 years of experience in Big Data technologies
  3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.
  4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
  5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.

Preferred Experience and Knowledge (Good to Have):

  1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
  2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
  3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
  4. Performance tuning and optimization of data pipelines
  5. CI/CD – infra provisioning on the cloud, auto-build & deployment pipelines, code quality
  6. Working knowledge of data platform services on at least one cloud platform, including IAM and data security
  7. Cloud data specialty and other related Big Data technology certifications

Personal Attributes: 

  • Strong written and verbal communication skills
  • Articulation skills
  • Good team player
  • Self-starter who requires minimal oversight
  • Ability to prioritize and manage multiple tasks
  • Process orientation and the ability to define and set up processes

ABOUT COMPANY

As a digital business transformation partner of choice, we’ve spent nearly three decades utilizing the disruptive power of technology and ingenuity to help digitally enable...


GUIDELINES

  1. Ensure that you are attempting the test using the correct email ID.

  2. You must click Submit after you answer each question.

  3. If you need assistance during the test, clic...


FAQs

Sample Challenge

Can I participate in a sample challenge?

Yes, we recommend that you participate in our sample challenge.

This challenge enables you to understand how to pa...
