Company Info


GSK Solutions Inc
Houston, TX, United States
Email: careers@gsksolutions.com
Phone: 719-423-6606
Web Site: http://www.gsksolutions.com/

Hadoop Developer


Title:

Hadoop Developer

Job ID:

93437

Location:

Athens, GA 

Classification:

I.T. & T.

Salary:

$40.00

Salary Type:

per hour

Posted By:

GSK Solutions, Inc., 5625 Cypress Creek Pkwy, Suite 505, Houston, TX 77069, +1 719-694-2864

Zip Code:

30601

Job Type:

Training, Contract

Posted:

09/26/2019

Start Date:

09/27/2019

Job Function:

Hadoop Developer

Telephone:

+1 719-694-2864

Job Description:

 
 

GSK Solutions, Inc. is an IT services company serving a range of key vertical and horizontal industries across the USA. We consult with our clients to build effective organizations, innovate and grow, reduce costs, manage risk and regulation, and leverage talent. We are an E-Verified company.
 
We have a dedicated development team with an amazing range of skills, deep vertical-industry expertise, and excellence in advanced technologies. Through a wise blend of business analysis and management with the latest technology, GSK Solutions, Inc. designs and develops custom software and web applications. We offer a broad range of technology services that deliver real business results.
 
Our main aim is to gain your trust through our commitment and integrity, extending maximum value to you and striving to exceed your expectations.
 
We are:

  • We are an E-verified Company.
  • 100% success rate for motivated and hard-working candidates.

We provide:

  • Free in-class training / placement.
  • Full-time mentors/trainers available for in-class training.
  • Best-in-class training faculty with real-time experience.
  • Free accommodation.
  • On-project assistance.
  • H1B sponsorship and immediate green card filing once the H1B is approved.
  • Excellent billing rates with percentage or salary.
  • E-Verified status for the OPT STEM extension.
  • Strong and motivated marketing team to place employees on projects.
  • We work only with prime vendors and implementation partners – no layers.
  • Placement guaranteed.

We require:

  • BS/MS in Computer Science, Computer Engineering, MIS, or a similar field.
  • Basic working knowledge of C and C++ (preferred).
  • Strong communication skills.
  • Familiarity with the Software Development Lifecycle (a definite plus, but not required).
Job Description:
  • Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem; ability to design and implement end-to-end solutions.
  • Experience publishing RESTful APIs to enable real-time data consumption, using OpenAPI specifications.
  • Experience with open-source NoSQL technologies such as HBase, DynamoDB, and Cassandra.
  • Familiarity with distributed stream-processing frameworks for fast and big data, such as Apache Spark, Flink, and Kafka Streams.
  • Build utilities, user-defined functions, and frameworks to better enable data-flow patterns.
  • Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to.
  • Experience with business rule management systems such as Drools.
  • Good understanding of the underlying infrastructure for Big Data solutions (clustered/distributed computing, storage, data center networking).
  • Expertise in Big Data technologies in the Hadoop ecosystem: Hive, HDFS, MapReduce, YARN, Kafka, Pig, HBase, Sqoop, Spark, etc.
  • Expertise in SQL and NoSQL database technologies.
  • Experience working in the Hadoop/Big Data field.
  • Working experience with tools like Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.
  • Hands-on programming experience in Java, Scala, Python, or shell scripting, among others.
  • Experience in the end-to-end design and build of near-real-time and batch data pipelines.
  • Strong experience with SQL and data modeling.
  • Experience working in an Agile development process and deep understanding of the phases of the Software Development Life Cycle.
  • Experience using source code and version control systems such as SVN and Git.
  • Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components.
  • Self-starter who works with minimal supervision, with the ability to work in a team of diverse skill sets.
  • Ability to comprehend customer requests and provide the correct solution.
  • Strong analytical mind to help take on complicated problems.
  • Desire to resolve issues and dive into potential issues.
  • Strong programming skills in Java/Scala, Python, shell scripting, and SQL.
  • Strong development skills in Spark, MapReduce, and Hive.
  • Strong skills in developing RESTful APIs.
  • Big Data development (Spark, Scala, Java).
  • Hadoop platform (Solr search & indexing, HBase, HDFS); data streaming (Kafka, Flume).
  • RESTful web services (JAX-RS, Jersey/Spring).
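For candidates new to the MapReduce model referenced above, a minimal sketch in plain Python may help. This is illustrative only: a real Hadoop or Spark job distributes these phases across a cluster, and the function names here are hypothetical, not part of any Hadoop API.

```python
from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) pairs for each record, like a Hadoop Mapper."""
    for line in records:
        for word in line.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    """Group values by key, like the framework's shuffle/sort step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each word, like a Hadoop Reducer."""
    return {word: sum(counts) for word, counts in groups.items()}

# Toy batch input standing in for HDFS records.
records = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

The same map/shuffle/reduce structure underlies the Spark and Hive pipelines described in the role, just expressed through those engines' APIs and run across many nodes.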