About the Senior Data Engineer role:

The Senior Data Engineer is responsible for building data pipelines and designing data architecture. You will design architectures for data integration and processing that deliver high-quality datasets, and use Big Data processing tools to build pipelines on a modern technology stack. To lay the foundation for a complete data architecture, you will set up data management frameworks and help the organization establish its data governance.

You will work in close collaboration with fellow Data Scientists, Data Analysts, and Product Owners who are passionate about developing our products. We run periodic experiments to improve the product using a range of advanced analytics techniques.

Responsibilities:

  1. Implement and streamline the processes required for data ingestion
  2. Assist data scientists in conducting data exploration, visualization, and feature engineering
  3. Drive the development and communication of data strategy and roadmaps across the technology organization to support the project portfolio and business strategy
  4. Design and create data ingestion pipelines that make internal and external data available for use (a minimal sketch follows this list)
  5. Conduct data collection, collation, structuring, and cleaning, and ensure data quality through statistical control approaches
  6. Develop tools that support data access, integration, modelling, and visualization
  7. Assist in the decision-making process related to the selection of software architecture solutions
  8. Execute strategies that inform data design and architecture, in partnership with enterprise standards
  9. Provide facilitation, analysis, and design tasks required for the development of an enterprise’s data and information architecture, focusing on data as an asset for the enterprise
  10. Proactively and holistically lead activities that create deliverables to guide the direction, development, and delivery of technological responses to targeted business outcomes
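
As a concrete illustration of the ingestion work described in items 1 and 4, here is a minimal PySpark sketch of a batch ingestion job. It is a sketch under assumptions, not a description of Telkomsel's stack: the bucket paths, the event_id and event_time columns, and the JSON source format are all hypothetical.

    # Minimal sketch of a batch ingestion pipeline (hypothetical paths and schema).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-ingestion").getOrCreate()

    # Ingest raw external data from a hypothetical landing zone.
    raw = spark.read.json("s3://example-bucket/raw/events/")

    # Basic cleaning: drop malformed rows, normalise the timestamp, dedupe.
    cleaned = (
        raw.dropna(subset=["event_id", "event_time"])
           .withColumn("event_time", F.to_timestamp("event_time"))
           .dropDuplicates(["event_id"])
    )

    # Publish the curated dataset, partitioned by date, for downstream use.
    (cleaned
        .withColumn("event_date", F.to_date("event_time"))
        .write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/events/"))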

Qualifications:

  1. Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or another quantitative field
  2. Minimum 4 years of experience as a Data Engineer, Data Scientist, or Machine Learning Engineer
  3. Experience using cloud technologies: AWS, GCP, Azure, etc.
  4. Experience collaborating with data-related roles across the organization, e.g., Data Scientists, Data Analysts, and Product Managers, to solve business use cases
  5. Experience designing and developing data engineering systems from the ground up, as well as maintaining and supporting existing systems
  6. Experience collecting, processing, and collating structured and unstructured data, including network element data, in both real-time and batch processes
  7. Proficiency in programming languages and frameworks such as Spark, Spark SQL, R, Python, Java, and Scala
  8. Excellent data query and manipulation skills using SQL
  9. Experience with open-source big data processing systems and infrastructure such as Spark, Hive, Kafka, Flink, NiFi, HBase, Phoenix, etc.
  10. Excellent knowledge of ETL tools and various data processing techniques
  11. Excellent knowledge of data warehousing and big data design concepts
  12. Experience dealing with large, complex data sets and performance tuning
  13. Experience designing data models that support both structured and unstructured data
  14. Experience gathering requirements and formulating business metrics for reporting
  15. Proficiency in DevOps, data warehousing techniques and best practices, and increasingly the application of DataOps principles to data pipelines
  16. Experience creating productionised pipeline controls (e.g., Jenkins, CI/CD)
  17. Experience using data visualization tools such as Tableau and Power BI
  18. Experience using various database platforms such as MySQL, PostgreSQL, Hive, BigQuery, etc.
  19. Advanced problem-solving skills
  20. An experimental mindset, driven by intellectual curiosity to continually find ways to improve one's own work
  21. Knowledge of Python data frameworks such as Kedro, Airflow, and MLflow is a plus (a minimal Airflow sketch follows this list)
  22. Experience with API / RESTful data services is a plus
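
Illustrating item 21, below is a minimal sketch of a scheduled pipeline defined with Airflow (Python), one of the frameworks named above. The DAG id, task names, and schedule are hypothetical, and the style assumes Airflow 2.4+.

    # Minimal sketch of a scheduled pipeline in Airflow (names are hypothetical).
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; real tasks would call ingestion and load logic.
    def extract():
        print("pull raw data from source systems")

    def load():
        print("write cleaned data to the warehouse")

    with DAG(
        dag_id="example_daily_ingestion",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+ argument name
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # extract must finish before load starts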

About the team:

Telkomsel's IT function is one big family with an even bigger passion. With 200+ applications, 7 data centers across the nation, and 450+ personnel, our impact is as significant as our size. We are one big melting pot and a tight-knit community, supporting Telkomsel's market leadership for more than 26 years.

We believe there is no one solution that fits all. Here at Telkomsel, we combine waterfall and agile software development according to our business needs. We strive for operational excellence while staying ahead by exploring cutting-edge technologies like cloud and big data.

That said, the IT function offers opportunities not only within the IT directorate but also in other directorates such as Sales, Marketing, and B2B. With the company generously taking care of our needs, we have the privilege to explore our passions and create value, elevate the core business, and unleash our digital power.