Senior Data Engineer, Virtual Insurance

Job posting updated about 2 months ago
Employer active about 16 hours ago

Job Description

[Job Overview]
We are looking for an experienced Senior Data Engineer to join our engineering team and play a key role in building and scaling our enterprise data platform. You will design, develop, and maintain high-quality data warehouses and data-driven applications that power analytics, reconciliation, and business decision-making across the organization.
This role requires strong expertise in modern data architectures, pipeline engineering, and data quality management. The ideal candidate combines hands-on technical capability with a deep commitment to reliability, scalability, and governance in a regulated environment.

[Responsibilities]
· Data operations: own day-to-day operations of data platforms and pipelines, including capacity, stability, upgrades, deployments, and recovery drills, to sustain high availability and low latency.
· Data collection: design/manage multi-source ingestion (exchanges, internal and external systems), protocol parsing, and robust retry mechanisms.
· Develop rule-based and statistical data quality checks (completeness, uniqueness, time alignment, anomaly detection, error handling).
· Implement automated remediation, reconciliation workflows, and historical backfilling.
· Establish monitoring and alerting frameworks to ensure trusted, production-grade datasets.
· End-to-end pipelines: plan and maintain scalable ETL/ELT, including scheduling, caching, partitioning, modeling, schema evolution, and lineage, to support both batch and real-time streaming.
· Enforce data access controls, encryption, auditing, and classification to comply with internal policies and external regulatory requirements (including PII management).
· Apply Infrastructure-as-Code, data versioning, data tests, and CI/CD to improve predictability and reduce manual risk.
· Contribute to embedded GenAI and LLM-powered data applications for enterprise analytics, reconciliation, and internal productivity use cases.
· Partner with analytics and product teams to operationalize AI-driven data solutions.

Job Requirements

[Requirements]
· Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
· 5+ years of experience in data engineering, data platform architecture, or AI/ML engineering.
· Strong experience with modern cloud data platforms (e.g., Snowflake, Databricks, BigQuery, Redshift).
· Hands-on experience building BI data foundations and supporting GenAI / LLM architectures.
· Proficiency in SQL and workflow orchestration tools (e.g., Airflow), streaming platforms (e.g., Kafka), and pipeline design best practices.
· Solid understanding of data warehouse development lifecycles and dimensional modeling concepts.
· Familiarity with GitLab and CI/CD pipelines.
· Strong debugging, performance tuning, and problem-solving skills.
· Working knowledge of data governance, lineage, privacy, and security frameworks.

5+ years of work experience required
Salary negotiable
Optional or flexible remote work
AIFT
Software
51–200 employees

About Us


Established in 2016, the group’s vision is to “Secure the Future”, a future that will increasingly be shaped by artificial intelligence. AIFT provides services across key markets in Asia and the Middle East. We are continuously expanding our global footprint and actively recruiting international talent to join our growing team.