Job Details

Senior Data Engineer/Architect - LatAm - São Paulo

Job Description

You’ll be working in our São Paulo office as part of our Digital McKinsey team.

You’ll typically work on projects across all industries and functions and will be fully integrated with the rest of our global firm. You’ll also work with colleagues from across McKinsey & Company to help our clients deliver breakthrough products, experiences, and businesses, on both technology and non-technology topics.

Our office culture is casual, fun, and social, with an emphasis on education and innovation. We have the freedom to try new ideas and experiment, and we are expected to be constantly learning and growing. There is also a strong emphasis on mentoring others in the group, enabling them to grow and learn.

You will apply your passion for finding opportunities in data, using tools and techniques to extract insights from large, complex data sets.
Alongside highly talented business consultants, you will work on client projects focused on various analyses and implementations. The impact of your work is typically realized within the short duration of a study. You’ll also focus on learning new technologies and contribute to knowledge sharing, usually through blogs or conferences.

Despite being part of a large, multinational organization, you will work in a group that operates like a small startup. Our development teams are small and flexible, and they build highly scalable and secure solutions. We help clients construct Big Data solutions, providing strategic direction and designing and driving the architecture and implementation plan.

  • Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or an equivalent subject; Master's degree preferred
  • Experience with ETL on complex data architectures including multiple data sources and EDW
  • Experience authoring and analyzing advanced SQL queries
  • Working experience with relational databases such as PostgreSQL, MSSQL, Oracle, MySQL
  • Proficiency in extracting and linking different sources of semi-structured (JSON, XML), unstructured (log files), and structured datasets in complex environments
  • Ability to communicate data-related findings clearly to audiences of all levels
  • Experience with data modeling, design patterns, and building highly scalable and secure solutions
  • Knowledge of programming languages such as Python, Scala, UNIX shell scripting
  • Cloud computing experience on platforms like AWS, Azure, Google Cloud
  • Experience with Big Data platforms for both batch and streaming data processing, such as Spark and Hadoop (Hive / Impala / Sqoop)
  • Experience with data visualization and BI tools like Tableau, Power BI, QlikSense, Matplotlib, Seaborn
  • Knowledge of data governance, data lineage, data quality, and master data management is a plus
  • Experience architecting and deploying machine learning solutions to production environments, including but not limited to MLflow, Flask, Cloud ML, or SageMaker
  • Knowledge of NoSQL databases like Cassandra, HBase, Redis, MongoDB, DynamoDB is nice to have
  • Knowledge of message queueing, streaming, and data flow products like Spark Streaming, Kafka, Kinesis, RabbitMQ, Storm, NiFi, Flink is nice to have
  • Strong analytical and problem-solving skills paired with the ability to develop creative and efficient solutions; tolerance for dealing with poor-quality data
  • Distinct customer focus and quality mindset
  • Excellent interpersonal, leadership and communication skills
  • Ability to work both independently and in various team settings
  • Ability to work under pressure with a solid sense for setting priorities
  • Ability to manage own learning and contribute to domain knowledge building
  • Strong command of the English language (both verbal and written)
  • Ability to travel