Job Details
Hiring for Data Analyst / Data Scientist - Singapore
Others
Singapore
2017-12-04 23:02:10

Responsibilities
 
The team will be responsible for developing and delivering solutions for the Customer Science Platform. The focus is on real-time predictive analytics that derive actionable insights for intervening positively in the customer’s digital journey.
 
The required roles focus on three broad areas: Data Engineering, Data Analytics / Modelling, and Data Visualization. The team will be exposed to the challenges of using statistics in a business setting: incomplete data, biased data, large data sets, low signal-to-noise ratios, and high variance. Each individual needs to be creative and resourceful, and will use traditional statistical methodologies as well as newer techniques such as computational statistics, data mining and big data capabilities, eventually incorporating machine learning / AI techniques.
The role requires:
▪ Perform the full life-cycle of Data Scientist / Analyst activities, from conceptualization and visualization through to operationalization
▪ Focus primarily on applying data science to solve business problems: data mining techniques, statistical analysis, building high-quality prediction systems, and using deep learning techniques
▪ Understand and solve the business problem by translating it into a data model and turning insights into actionable outcomes
▪ Collaborate with cross-functional teams to identify and prioritize actionable, high-impact insights across a variety of customer servicing areas
▪ Research, design, implement and validate models / algorithms to analyse diverse sources of data and achieve targeted outcomes
▪ Carry out customer behaviour analytics and deliver actionable insights in real time, through behaviour segmentation, predictive modelling, lifetime value modelling, churn prevention, statistical simulations and what-if scenarios (a brief illustrative sketch follows this list)
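As an illustration of the behaviour-segmentation work described above, here is a minimal Python sketch using scikit-learn k-means. The feature names and the synthetic data are placeholder assumptions, not the platform's actual schema.

# Minimal sketch of customer behaviour segmentation with k-means.
# Feature names and the synthetic data are illustrative placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical per-customer behavioural features:
# [sessions_per_week, avg_session_minutes, support_contacts_per_month]
X = rng.normal(loc=[5.0, 12.0, 1.0], scale=[2.0, 4.0, 0.8], size=(500, 3))

# Standardise features so no single scale dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# Fit a small number of segments and attach a label to each customer.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
segments = kmeans.fit_predict(X_scaled)

for seg in range(4):
    print(f"segment {seg}: {np.sum(segments == seg)} customers")

In practice the segment labels would feed downstream steps such as churn modelling or targeted interventions.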
 
 
Requirements

You are:
▪ Curious, with a strong appetite for intellectual challenges; able to pick up new methods and techniques quickly and apply them to the problem at hand
▪ Keen on learning, data, scale and agility; you excel at making complex concepts simple and easy to understand for those around you, and you are driven to show the world the power of applied analytics
▪ Passionate about asking and answering questions in large datasets, and able to communicate that passion to product managers and engineers
▪ Attracted to a fast-paced, hypothesis- and test-driven, collaborative and iterative engineering environment
▪ A driven, strategically focused, organized self-starter with strong attention to detail
▪ A good team player with excellent communication skills
You have:
▪ A university degree or higher in applied statistics, data mining, machine learning, computing or a related quantitative discipline, with a strong background in statistical concepts and calculations
▪ Proven ability in structured problem solving, data-driven analysis, real-time analytics, and deriving actionable outcomes within the Financial Services domain
For the role that focuses on Data Engineering:
▪ Deep and practical understanding of implementing high-performance, well-behaved analytics applications, with a focus on data ingestion, feature engineering, validation and deployment (a brief ingestion sketch follows this list)
▪ Relevant experience in the following:
o Excellent Python, R and software development skills (must have)
o Familiarity with Linux-based operating system environments
o Data engineering experience, including real-time data ingestion (e.g. LogStash, Talend, Flume)
o Experience with scripting languages (e.g. Python, R, Julia) for data manipulation, and with statistical computing tools such as Spark Streaming (extraction, cleansing, transformation, smoothing, PMML model execution)
o Experience manipulating structured and unstructured data sources for analysis (e.g. Greenplum, SparkSQL, HBase, S3) using notebook technologies such as Jupyter and Zeppelin
o Working experience with cloud-based and open source technology components
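To illustrate the kind of real-time ingestion and cleansing described above, here is a minimal PySpark Structured Streaming sketch. The Kafka broker, topic name and event schema are illustrative assumptions, not the team's actual setup.

# Minimal sketch of real-time ingestion and cleansing with
# Spark Structured Streaming (PySpark). Broker address, topic name
# and event schema are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("customer-events-ingest").getOrCreate()

# Hypothetical schema for incoming customer events.
event_schema = StructType([
    StructField("customer_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw JSON events from a Kafka topic (placeholder broker/topic names).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "customer-events")
       .load())

# Parse the JSON payload and drop malformed rows as a simple cleansing step.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*")
          .dropna(subset=["customer_id", "event_time"]))

# Write the cleansed stream out; the console sink keeps the sketch self-contained.
query = (events.writeStream
         .outputMode("append")
         .format("console")
         .start())
query.awaitTermination()

In a real pipeline the console sink would be replaced by a store such as HBase or S3, feeding the downstream feature-engineering and modelling steps.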
For the role that focuses on Data Analytics / Modelling:
▪ Deep and practical understanding of implementing high-performance, well-behaved analytics applications, with a focus on data ingestion, feature engineering, model selection, training, validation and deployment
▪ A deep understanding of statistical and predictive modelling concepts, machine-learning approaches, clustering and classification techniques, and recommendation and optimization algorithms (a brief modelling sketch follows this list)
▪ Relevant experience in the following:
o Excellent Python, R and software development skills (must have)
o Familiarity with Linux-based operating system environments
o Experience with scripting languages (e.g. Python, R, Julia) for data manipulation, and with statistical computing tools such as Spark Streaming (extraction, cleansing, transformation, smoothing, PMML model execution)
o Experience working with large datasets through OLAP tools (e.g. Druid)
o Experience manipulating structured and unstructured data sources for analysis (e.g. Greenplum, SparkSQL, HBase, S3) using notebook technologies such as Jupyter and Zeppelin
o Working experience with cloud-based and open source technology components
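To illustrate the modelling side, here is a minimal sketch of training and validating a churn classifier with scikit-learn. The features, labels and model choice are illustrative assumptions, not the team's actual approach.

# Minimal sketch of training and validating a churn-prediction model with
# scikit-learn. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical behavioural features and a churn label loosely tied to them.
n = 2000
X = rng.normal(size=(n, 5))
churn_prob = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 3])))
y = (rng.uniform(size=n) < churn_prob).astype(int)

# Hold out a validation set to estimate out-of-sample performance.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# AUC on the validation set as a simple discrimination metric.
val_scores = model.predict_proba(X_val)[:, 1]
print(f"validation AUC: {roc_auc_score(y_val, val_scores):.3f}")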
For the role that focuses on Data Visualization:
▪ Deep and practical understanding of implementing high-performance, well-behaved analytics applications, with a focus on storytelling and visualization
▪ Experience with breaking down complex issues and showcasing the right details, in the right format and in the right timeframe, through a dashboard (a brief charting sketch follows this list)
▪ Relevant experience in the following:
o Excellent Python, R and software development skills (must have)
o Experience with scripting languages (e.g. Python, R, Julia) for data manipulation, and with statistical computing tools such as Spark Streaming (extraction, cleansing, transformation, smoothing, PMML model execution)
o Experience working with large datasets through OLAP tools (e.g. Druid)
o Experience creating real-time, rich data visualizations that involve large datasets (e.g. Highcharts)
o Experience manipulating structured and unstructured data sources for analysis (e.g. Greenplum, SparkSQL, HBase, S3) using notebook technologies such as Jupyter and Zeppelin
o Working experience with cloud-based and open source technology components
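To illustrate the dashboard-style charting described above, here is a minimal sketch in matplotlib (the listing itself names Highcharts; matplotlib is used only to keep all examples in one language). The data is synthetic and purely illustrative.

# Minimal sketch of a dashboard-style chart: weekly churn rate by customer
# segment. Segment names and churn values are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
weeks = np.arange(1, 13)

fig, ax = plt.subplots(figsize=(8, 4))
for segment in ["new", "regular", "at-risk"]:
    # Synthetic weekly churn rates for each segment.
    churn_rate = np.clip(0.05 + rng.normal(scale=0.01, size=weeks.size), 0, 1)
    ax.plot(weeks, churn_rate, marker="o", label=segment)

ax.set_xlabel("week")
ax.set_ylabel("churn rate")
ax.set_title("Weekly churn rate by customer segment (synthetic data)")
ax.legend()
fig.tight_layout()
plt.show()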