Big Data Engineer – Relocation Available


Staff Engineer, Big Data

  • Want to be part of the team that's taking the healthcare industry by storm by engineering systems that have a significant impact on patient care, healthcare spend and population health management?
  • Do you want to be part of our organization in the continued development of a data pipeline using AI for predictive analytics and benchmarking?
  • Do you thrive in an agile environment that values sustainable quality over velocity and encourages team members' direct participation?

If you answered yes to any of these questions, we are looking for you – and we will move you here to work with us! (Relocation package available.)

Our customers range from clinicians in large hospitals to patients managing their own medication regimens at home. It is our mission to improve healthcare for everyone. We do this by removing unnecessary manual processes, allowing caregivers to focus on what matters: the patient. We also help patients taking medications stay adherent to their regimens, keeping them healthier and reducing their overall healthcare spend.

We have an opening for a passionate and hands-on Big Data Architect/Staff Engineer to join our Software Engineering Team located in Cranberry Township, PA (Pittsburgh suburbs). As a platform team member, you will work with a team of engineers on our cloud data platform, which streams data from a variety of healthcare software and hardware systems in real time to create transformational recommendations and benchmarking across our customers. Our solutions help drive improved financial performance, compliance, and better patient outcomes. Each day you will make a positive impact in healthcare while working with the latest technologies.
 
Responsibilities:

  • Define the technology roadmap in support of the product development roadmap
  • Lead the design, architecture, and development of multiple real-time streaming data pipelines spanning multiple product lines and edge devices
  • Ensure proper data governance policies are followed by implementing or validating data lineage, quality checks, classification, etc.
  • Provide technical leadership to agile teams – onshore and offshore: Mentor junior engineers and new team members, and apply technical expertise to challenging programming and design problems
  • Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
  • Have a quality mindset: squash bugs with a passion, and work hard to prevent them in the first place through unit testing, test-driven development, version control, and continuous integration and deployment
  • Lead change, be bold, and innovate to challenge the status quo
  • Conduct design and code reviews
  • Analyze and improve efficiency, scalability, and stability of various system resources
  • Operate within Agile Development environment and apply the methodologies
  • Track technical debt and ensure unintentional technical debt is not created
  • Recommend improvements to the software delivery cycle to help remove waste and impediments for the team
  • Drive, promote, and measure team performance against sprint and project goals
  • Work with the team to continuously improve development practices and processes
  • Troubleshoot complex problems with existing or newly developed software
  • Mentor and coach software engineers

 
Required Skills and Knowledge:

  • Expert knowledge of data architectures, data pipelines, real time processing, streaming, networking, and security
  • Proficient understanding of distributed computing principles
  • Advanced knowledge of Big Data querying tools, such as Pig or Hive
  • Expert understanding of Lambda Architecture, along with its advantages and drawbacks
  • Proficiency with MapReduce and HDFS

 
Basic Qualifications:

  • Bachelor's Degree 
  • 12+ years' experience in software engineering with 2+ years using public cloud
  • 6+ years' experience developing ETL processing flows using MapReduce technologies like Spark and Hadoop
  • 4+ years' experience developing with ingestion and clustering frameworks such as Kafka, Zookeeper, YARN
  • 4+ years' experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
    • 2+ years' experience with Spark Structured Streaming
  • 4+ years' experience with various messaging systems
    • 2+ years' experience with Kafka
  • 1+ years of DevOps experience
  • 1+ years' benchmarking experience
  • Experience with integration of data from multiple data sources and multiple data types

 
Preferred Experience:

  • Master's Degree in Engineering/IT/Computer Science
  • 1+ years' experience with Databricks
  • 3+ years' experience:
    • NoSQL databases, such as HBase, Cassandra, MongoDB
    • Big Data machine learning toolkits, such as Mahout, Spark ML, or H2O
    • Scala or Java as it relates to product development
  • 3+ years' DevOps experience with cloud technologies like AWS, CloudFront, Kubernetes, VPC, RDS, etc.
    • Management of Spark or Hadoop clusters, with all included services
  • Experience with Service-Oriented Architecture (SOA)/microservices

Please note: This client is not accepting candidates submitted by other staffing firms or agencies at this time. Self-employed corp-to-corp candidates are welcome. Thank you.

For immediate response please forward resumes to [email protected]
 
Anna Marcano

IT Recruiter
(Office) 407-392-3135
www.valintry.com

For a list of our current openings please visit Valintry's Jobs Webpage

 


19-00339