Dipika Bogati

Data Engineer | AWS | Python | SQL

Hello There!

I’m Dipika Bogati, a Data Engineer with 3+ years of experience in SQL, Python, and AWS. I build scalable data pipelines, integrate cloud services, and enable real-time analytics, and I’m passionate about solving complex data problems with efficient solutions.

Ohio, United States

Experience

Graduate Teaching Assistant

Bowling Green State University
Jan 2025 – Present

  • Developed an interactive learning program in Unity.
  • Ported the program to Meta Quest 3.
  • Integrated Llama 2 for instant in-app responses.

Data Engineer

Verisk, Nepal
Aug 2022 – Aug 2024

  • Built scalable ETL pipelines across Oracle, Aurora, DynamoDB, and Snowflake.
  • Integrated AWS services (S3, Lambda, SQS, RDS, Batch).
  • Optimized big data processing with Python multiprocessing.
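
The multiprocessing optimization mentioned above can be sketched roughly as follows. This is a minimal illustration of the fan-out pattern, not Verisk's actual code; the `transform` function and the record shape are hypothetical:

```python
from multiprocessing import Pool

def transform(record):
    # Hypothetical per-record transform; the real pipeline logic is not shown here.
    return {"id": record["id"], "value": record["value"] * 2}

def run_batch(records, workers=4):
    # Fan a batch of records out across worker processes and collect results in order.
    with Pool(processes=workers) as pool:
        return pool.map(transform, records)

if __name__ == "__main__":
    batch = [{"id": i, "value": i} for i in range(8)]
    print(run_batch(batch))
```

For CPU-bound transforms this sidesteps the GIL by using separate processes; for I/O-bound work a thread pool would usually be the better fit.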

Associate Data Engineer

Verisk, Nepal
Aug 2021 – Jul 2022

  • Contributed to enterprise data hub projects.
  • Wrote Python unit tests to improve code coverage.
  • Supported Snowflake data warehouse operations.

Data Engineer Trainee

Verisk, Nepal
Jan 2021 – Jul 2021

  • Learned ETL pipeline basics with Python.
  • Helped migrate Pentaho jobs to Python scripts.
  • Participated in code reviews & team mentoring.

Full Stack Developer

Curllabs, Nepal
Aug 2020 – Oct 2020

  • Designed a database of Nepal’s popular routes.
  • Created UI with HTML, CSS, Bootstrap, JavaScript.
  • Implemented CRUD operations for admin panel.

Education

Master of Science in Computer Science

Bowling Green State University | Bowling Green, OH

2022 – 2025

  • Specialization: Data Engineering & Cloud Computing
  • Graduate Teaching Assistant

Bachelor of Science in Computer Engineering

Islington College, London Metropolitan University | London, UK

2016 – 2020

  • Graduated with Distinction
  • Projects: web apps, IoT, and database systems

Projects

Big Data Analytics Pipeline (Docker + Hadoop)

Designed and deployed an end-to-end big data analytics pipeline within the Hadoop ecosystem using Docker containerization. This architecture ingests, processes, and analyzes large-scale datasets efficiently, ensuring modularity and scalability.

  • Data ingestion via Apache NiFi and Kafka
  • Processing with Spark Streaming & MLlib
  • Storage & querying with Hive + HDFS
  • Containerized deployment using Docker
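
The containerized deployment could be wired together with a Compose file along these lines. This is an illustrative sketch only: the image names, tags, and ports are assumptions rather than the project's actual configuration, and the Hive/HDFS services are omitted for brevity:

```yaml
# Hypothetical docker-compose sketch of the pipeline's core services.
services:
  zookeeper:
    image: bitnami/zookeeper:latest
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka:latest
    depends_on: [zookeeper]
  nifi:
    image: apache/nifi:latest
    ports: ["8443:8443"]
  spark:
    image: bitnami/spark:latest
    depends_on: [kafka]
```

Running each component as its own container keeps the stack modular: services can be scaled or swapped independently, which matches the modularity and scalability goals described above.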

Serverless Big Data Pipeline (AWS Lambda, SNS, SQS)

Built a fully serverless data pipeline on AWS capable of handling terabytes of data per day. Leveraged managed services for cost efficiency, scalability, and fault tolerance.

  • Ingestion triggered by AWS Lambda functions
  • Message orchestration with SNS & SQS
  • Data processing at scale using Lambda concurrency
  • Designed for near real-time analytics
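
The Lambda-driven ingestion step could look roughly like this. It is a hedged sketch, not the project's actual code: the event shape follows the standard SQS-to-Lambda record format, and the real storage side effects (e.g. writes via boto3) are omitted:

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda entry point: each invocation receives a batch of
    SQS-delivered records; parse each one and report a per-record status.
    A real pipeline would persist results (e.g. via boto3), omitted here."""
    results = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # the SQS message body is a JSON string
        results.append({"id": payload.get("id"), "status": "processed"})
    return {"processed": len(results), "results": results}
```

Because Lambda scales each SQS batch to a separate concurrent invocation, throughput grows with message volume without any servers to manage, which is what enables the near real-time behavior noted above.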

Certifications & Awards

AWS Certified Cloud Practitioner

Amazon Web Services (AWS)
Issued Mar 2023 • Expires Sep 2028

AWS Cloud Quest: Cloud Practitioner – Training Badge

Amazon Web Services (AWS)
Issued Jun 2025

Learning Java

LinkedIn Learning
Issued Aug 2020

🏆 Hackathon Winner (Aug 2025)

Won the Best Demo/Presentation Award for a project that predicted the Global Hunger Index for future years and presented the results in a dynamic website.

🌟 Verisk “Way to Go” Award (May 2023)

Recognized for consistently going above and beyond assigned tasks, delivering impactful solutions, and exceeding performance expectations.