Data Engineer

Clearance: Applicants must be US citizens and able to obtain at least a {Secret/TS/TS-SCI} clearance. Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.

A qualified candidate is responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The candidate will have extensive experience supporting software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that optimal data delivery architecture remains consistent across ongoing projects. The ideal candidate is self-directed and comfortable supporting the data needs of multiple teams, systems, and products, and will optimize or even re-design the company’s data architecture to support the next generation of products and data initiatives.

Key Responsibilities include:

  • Create data tools for analytics and data science team members that help them build and optimize the product into an innovative industry leader
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/Azure ‘big data’ technologies
  • Work with data and analytics experts to strive for greater functionality in the data systems

Required Qualifications:

  • Hands-on experience with a wide variety of data platforms, including PostgreSQL and Microsoft SQL Server, and with SQL database design
  • Working knowledge of programming languages such as Java, Python, and R
  • Three (3+) years of experience as a data engineer or in a similar role
  • Capable of supporting and working with cross-functional teams in a dynamic environment
  • Technical expertise with data models, data mining, and segmentation techniques
  • Strong understanding of data pipelines; should be able to work with REST, SOAP, FTP, HTTP, and ODBC
  • Working knowledge of message queuing, stream processing, and highly scalable data stores
  • Familiarity with ETL solutions such as Xplenty for extracting, transforming, and loading data into data warehouses


Bonus Qualifications:

  • Bachelor’s degree in Computer Science, IT, or a similar field
  • Data engineering certification (e.g., IBM Certified Data Engineer)

Apply here

If you do not have LinkedIn, please send your resume to for consideration.