Only available to applicants residing in Argentina
Our client envisions a world where the experience of selling or buying a home is simple and enjoyable for everyone. They offer a comprehensive cloud-based platform that enables residential real estate agents to deliver exceptional service to their seller and buyer clients. Founded in 2012, they are one of the fastest-growing technology companies in a nearly $4 trillion industry and have built a world-class engineering team that operates the only comprehensive platform in the real estate industry. Our client is convinced they can do much more and needs your expertise in building modern cloud services to evolve and create products that improve every step of the real estate agent experience, from first contact with a client to closing the deal.
Our team is responsible for evaluating, accelerating, building, and maintaining a unified, scalable, and cost-effective analytics infrastructure, including a data lake, a data warehouse, and tooling for data job scheduling and orchestration.
As a Data Engineer, you will be responsible for building, optimizing, and maintaining data pipelines using distributed computing in the cloud. The ideal candidate is an experienced data wrangler who can understand and optimize data systems from the ground up. The Data Engineer will support our software developers, analysts, and data scientists on data initiatives and ensure that an optimal data delivery architecture is applied consistently across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Responsibilities:
● Develop and operationalize scalable, reliable data pipelines.
● Assemble large, complex data sets that meet functional and non-functional business requirements.
● Apply database management techniques, including logical data modeling, physical database optimization, and security.
● Continuously improve the integrity of data pipelines by setting up data quality checks to provide a comprehensive data service.
● Work with data and analytics experts to strive for greater functionality in our data systems.
● Provide on-call support and unblock users based on the severity of the issue.
Qualifications:
● 7+ years of experience engineering and operationalizing data pipelines with large, complex datasets.
● 3+ years of experience working with semi-structured data (JSON).
● Extensive experience with advanced SQL.
● 3+ years of experience with Python/PySpark.
● 3+ years of experience transforming data using Databricks.
● 3+ years of experience with AWS.
● 2+ years of experience orchestrating data pipelines using Airflow.
● Experience with Git and CI/CD.
● Nice to have: dbt experience.
● Nice to have: Tableau experience.
What is the interview process like?
1) Screening interview with the IT Scout team.
2) Once the team receives your updated resume, you'll have a short chat with the recruiting team to get to know you better and answer your questions.
3) Right after, you'll get an invite to a technical screening interview. It is run by a partner with night and weekend availability, offers a low-pressure redo opportunity, and pairs you with a seasoned engineer for an objective interview, reducing bias.
4) The main loop of interviews is as follows. Each interview takes about 60 minutes, with 10 minutes reserved at the end so that you can ask questions (we think it's important that you get to know us too).
+ Main Coding interview
+ System design interview
+ Tech deep dive & cultural fit
This is a contractor position paid in USD; it includes work tools (notebook shipment), holidays observed in Argentina, and two weeks of vacation.