Role Overview:
We are seeking a highly skilled and detail-oriented Informatica and Teradata Architect to lead the data integration and warehousing architecture efforts as part of a large-scale Data Lakehouse transformation program for a leading banking client. The ideal candidate will possess deep expertise in Informatica (PowerCenter and IDQ) and Teradata, with a strong grasp of enterprise-grade data integration, ETL modernization, and data warehouse optimization. This role requires close collaboration with data architects, engineers, QA teams, and banking domain stakeholders.
Key Responsibilities:
- Lead the architecture and design of Informatica-based ETL frameworks to migrate and modernize existing data pipelines from legacy systems to the new Lakehouse platform.
- Design efficient Teradata data models to support gold-layer consumption and reporting needs.
- Develop strategies for ETL workload optimization, code migration, and performance tuning on both Informatica and Teradata platforms.
- Define standards, best practices, and governance around ETL design, parameterization, error handling, and metadata tracking.
- Work with SNB and the oversight partner to map legacy DWH logic to the new Cloudera-based Bronze/Silver architecture and the Teradata-based Gold layer.
- Provide technical oversight and guidance to developers and data engineers implementing integration and transformation logic.
- Collaborate with testing and validation teams to ensure data accuracy, lineage, and completeness through all layers.
- Support impact analysis and dependency checks across ~50 tables and 150+ ETL packages per source system.
Required Skills and Experience:
- 10+ years of experience in data warehousing, data integration, and enterprise ETL architecture.
- Strong hands-on experience with Informatica PowerCenter, Informatica IDQ, and Teradata (SQL, utilities, and performance optimization).
- In-depth understanding of the ETL lifecycle, data quality frameworks, and integration patterns in a banking context.
- Demonstrated ability to lead ETL modernization efforts and architect scalable, reusable data pipelines.
- Experience migrating ETL logic from on-prem systems to Lakehouse environments (e.g., Cloudera, Spark-based processing).
- Experience tuning large-volume ETL jobs and Teradata queries to meet SLA-driven performance goals.
- Familiarity with data governance principles, error logging, and metadata management.
Preferred Attributes:
- Experience with Cloudera Data Platform (CDP), Spark, Hive, or Iceberg tables.
- Knowledge of DevOps/DataOps for Informatica deployments and integration with CI/CD tools.
- Exposure to data masking, synthetic data generation, and regulatory data controls.
- Good understanding of banking use cases in areas such as risk reporting, compliance, credit, and finance.