Role: Data Architecture
Here at Jefferson Frank, we're dedicated to AWS recruitment; it's all we do.
As the only global recruitment agency dedicated solely to AWS, we've placed more than 1,600 AWS professionals across the world in great jobs with AWS partners, ISVs and end users since June 2018. No matter where you are in your career, or where you're based, we're in your corner making sure you land the right job on the right terms.
At Jefferson Frank we have an incredible opportunity to join one of the world's leading Data & Analytics companies, a long-standing client of ours, as a Data Architect, helping to continue building out their rapidly growing and talented team.
You will have the opportunity to strengthen the evolution of their internal data platforms used for MI and Analytics, providing new capability by leveraging Cloud and Big Data technology.
Within this role you will also work closely with the client's Data Science & Engineering teams to develop robust production solutions for their Data & Analytics-focused projects.
As a result, you will help to develop and shape the next generation of data platforms in use. The successful candidate should have prior experience designing and implementing scalable, reliable and secure big data/cloud data warehouse solutions and data integration/processing pipelines.
Experience of leading teams of Engineers and working closely with senior business stakeholders is vital, as is the ability to communicate effectively and clearly with both technical and non-technical members of the organisation. Essential requirements of the role include:
- Cloud Platform Development (AWS)
- Cloud Based Big Data/Data Warehouse Solutions (Redshift) - including design, development, setup, configuration and monitoring of solutions running on these platforms
- Knowledge and experience of designing a Data Lake on a Cloud Platform (S3)
- A strong understanding of software development in SQL & Python or Scala.
- An understanding of designing and building Data Pipelines with AWS Glue or other ETL tools (a brief illustrative sketch follows this list)
- Experience with Hadoop ecosystems (Spark, Hive/Impala) - including design, development, setup, configuration and monitoring of solutions running on these platforms
- Kinesis or Kafka, for both Real Time Data Pipelines and Stream Analytics - including design, development, setup, configuration and monitoring of solutions running on these platforms
- Experience of working in an Agile team producing frequent deliverables
- Experience of Testing and Automation processes associated with Big Data & Cloud solution development
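To give a flavour of the kind of batch pipeline work described above, here is a minimal, illustrative PySpark sketch: reading raw events from an S3 data lake, aggregating them, and loading the result into Redshift. All bucket names, paths, table names and connection details are hypothetical placeholders, not details of the client's actual platform, and the Redshift JDBC driver is assumed to be on the classpath.

```python
# Illustrative batch job: S3 data lake -> transform -> Redshift.
# Every name and connection string below is a hypothetical placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-to-redshift-example").getOrCreate()

# Read raw event data from the S3 data lake (Parquet assumed here).
events = spark.read.parquet("s3://example-data-lake/raw/events/")

# A simple transformation: daily event counts per customer.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Load the aggregate into Redshift over JDBC.
(daily_counts.write
    .format("jdbc")
    .option("url", "jdbc:redshift://example-cluster:5439/analytics")
    .option("dbtable", "reporting.daily_event_counts")
    .option("user", "etl_user")
    .option("password", "example-password")  # in practice, fetch from a secrets manager
    .mode("append")
    .save())
```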
Any experience with BI & Visualisation tools such as Tableau, Power BI, QlikView or QuickSight would be beneficial but not essential, as would any experience of the R programming language.
The successful candidate will have proven experience both designing and implementing big data and fast data solutions and should be capable of documenting and communicating to a wide range of stakeholders with differing levels of technical knowledge. You will also be used to working as part of an Agile delivery team in a fast-paced development environment with frequent delivery.
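For the "fast data" side, a Spark Structured Streaming job gives a sense of the stream analytics work involved. This is a minimal sketch under assumed conditions: the broker address and topic name are hypothetical, the Kafka message key is assumed to carry an event type, and a real deployment would write to a durable sink with checkpointing rather than to the console.

```python
# Illustrative streaming job: consume events from Kafka and keep a running
# count per event type. Broker and topic names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load())

# Kafka keys arrive as bytes; decode them and count events per type.
counts = (stream
    .select(F.col("key").cast("string").alias("event_type"))
    .groupBy("event_type")
    .count())

query = (counts.writeStream
    .outputMode("complete")
    .format("console")  # a real pipeline would use a durable, checkpointed sink
    .start())
query.awaitTermination()
```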
To find out more, please reach out to Elliott Collins by email - [ Link removed ] - or by phone on 07805492816. We can run through the role in detail and discuss what you're looking for.
If, however, this is not exactly what you're looking for, please feel free to get in touch and we can talk about finding the perfect opportunity for you.
Keywords: AWS (Amazon Web Services), Redshift, SQL, Python, Scala, Hadoop, Hive, Impala, ETL (Extract, Transform, Load)