12/03/2024

Data Architect

Contract

Job Description

Position: Data Architect (Microsoft Fabric & Azure Databricks)
Location: Atlanta, GA
Duration: 12 months
Experience: 6+ years

If you are interested, send your resume to thomas@innovitusa.com or reach me at +1 408-755-2428.


Must Haves:
• 6+ years of experience in data architecture and engineering.
• 2+ years of hands-on experience with Azure Databricks and Spark.
• Recent experience with the Microsoft Fabric platform.

Position Description and Job Skill Set:
Data Architecture:

• Design end-to-end data architecture leveraging Microsoft Fabric's capabilities.
• Design data flows within the Microsoft Fabric environment.
• Implement OneLake storage strategies.
• Configure Synapse Analytics workspaces.
• Establish Power BI integration patterns.

Integration Design:
• Architect data integration patterns between IES Gateway and the analytics platform using Azure Databricks and Microsoft Fabric.
• Design Delta Lake architecture for IES Gateway data.
• Implement medallion architecture (Bronze/Silver/Gold layers).
• Create real-time data ingestion patterns.
• Establish data quality frameworks.
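For context, the Bronze/Silver/Gold medallion flow named above can be sketched in plain Python; this is a simplified illustration only (the record fields and cleansing rules are assumptions, not from the posting), and on Databricks each layer would be a Delta Lake table rather than an in-memory list:

```python
# Minimal sketch of a medallion (Bronze/Silver/Gold) flow. Bronze holds raw
# records, Silver applies data-quality rules (drop keyless rows, normalize,
# deduplicate), and Gold aggregates for analytics consumers.

def to_silver(bronze_rows):
    """Clean and conform raw Bronze records: drop rows missing an id,
    normalize name casing, and deduplicate on id (last record wins)."""
    silver = {}
    for row in bronze_rows:
        if row.get("id") is None:
            continue  # data-quality rule: reject records without a key
        silver[row["id"]] = {**row, "name": row["name"].strip().title()}
    return list(silver.values())

def to_gold(silver_rows):
    """Aggregate conformed Silver records into a Gold summary table."""
    totals = {}
    for row in silver_rows:
        totals[row["name"]] = totals.get(row["name"], 0) + row["amount"]
    return totals

bronze = [
    {"id": 1, "name": " alice ", "amount": 10},
    {"id": None, "name": "bad", "amount": 0},   # dropped in Silver
    {"id": 1, "name": "Alice", "amount": 15},   # dedup: supersedes first id=1
    {"id": 2, "name": "bob", "amount": 5},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'Alice': 15, 'Bob': 5}
```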

Lakehouse Architecture:
• Implement modern data lakehouse architecture using Delta Lake, ensuring data reliability and performance.
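Delta Lake's reliability story centers on ACID upserts (MERGE). The semantics can be sketched Spark-free as below; the table shape and key name are illustrative, and a real Databricks implementation would use `DeltaTable.merge` instead:

```python
# Simplified sketch of Delta Lake MERGE (upsert) semantics: match target and
# update rows on a key, overwrite matched rows, insert unmatched ones.

def merge_upsert(target, updates, key="id"):
    """Return a new 'table' with updates applied: matched keys are
    overwritten, unmatched keys are inserted."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # update if matched, insert otherwise
    return list(merged.values())

target = [{"id": 1, "status": "open"}, {"id": 2, "status": "open"}]
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
result = merge_upsert(target, updates)
# ids 1, 2, 3 survive, with id 2 updated to "closed"
```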

Data Governance:
• Establish data governance frameworks incorporating Microsoft Purview for data quality, lineage, and compliance.
• Implement row-level security.
• Configure Microsoft Purview policies.
• Establish data masking for sensitive information.
• Design audit logging mechanisms.
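The data-masking item can be illustrated with a small sketch; the field names (ssn) and masking rule are assumptions for illustration, and in a Fabric/Purview deployment dynamic data masking would normally be enforced at the platform level rather than in application code:

```python
# Minimal sketch of masking a sensitive field before exposing a record to
# non-privileged consumers: keep the last four digits, mask the rest.

def mask_ssn(ssn: str) -> str:
    """Mask an SSN, e.g. '123-45-6789' -> 'XXX-XX-6789'."""
    return "XXX-XX-" + ssn[-4:]

def mask_record(record: dict, sensitive=("ssn",)) -> dict:
    """Return a copy of the record with listed sensitive fields masked."""
    masked = dict(record)
    for field in sensitive:
        if field in masked:
            masked[field] = mask_ssn(masked[field])
    return masked

row = {"id": 7, "name": "Alice", "ssn": "123-45-6789"}
print(mask_record(row))  # {'id': 7, 'name': 'Alice', 'ssn': 'XXX-XX-6789'}
```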

Pipeline Development:
• Design scalable data pipelines using Azure Databricks for ETL/ELT processes and real-time data integration.

Performance Optimization:
• Implement performance tuning strategies for large-scale data processing and analytics workloads.
• Optimize Spark configurations.
• Implement partitioning strategies.
• Design caching mechanisms.
• Establish monitoring frameworks.
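One common partitioning strategy is deriving a date-based partition key so large scans can prune by year/month. A plain-Python sketch of the write path's bucketing step is below (the event shape and year/month granularity are illustrative assumptions; on Databricks this would be a partition column on a Delta table):

```python
# Sketch of date-based partitioning: derive a partition key from an event
# timestamp, then group events by key before writing.

from datetime import datetime

def partition_key(event_time: datetime) -> str:
    """Year/month partition path, e.g. 'year=2024/month=03'."""
    return f"year={event_time.year}/month={event_time.month:02d}"

def assign_partitions(events):
    """Group events by partition key (the write path's bucketing step)."""
    parts = {}
    for e in events:
        parts.setdefault(partition_key(e["ts"]), []).append(e)
    return parts

events = [
    {"id": 1, "ts": datetime(2024, 3, 1)},
    {"id": 2, "ts": datetime(2024, 3, 15)},
    {"id": 3, "ts": datetime(2024, 4, 2)},
]
print(sorted(assign_partitions(events)))  # ['year=2024/month=03', 'year=2024/month=04']
```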

Security Framework:
• Design and implement security patterns aligned with federal and state requirements for sensitive data handling.