
Weekday AI

Posted 4 days ago

Big Data Developer

Bengaluru, Mumbai · On-site · Full-time

AI Summary

Design, build, and optimize scalable data pipelines and big data processing systems to power business intelligence and analytics for a client.

About this role

Min Experience: 4 years

Location: Mumbai, Bengaluru

Job Type: Full-time

We are looking for a skilled Big Data Developer to design, build, and optimize scalable data pipelines and data processing systems that power business intelligence and advanced analytics. This role focuses on handling large-scale structured and unstructured data, ensuring efficient data flow, storage, and accessibility across cloud and on-premise environments.

You will work closely with data engineers, analysts, and product teams to deliver high-performance data solutions that support critical decision-making. The position requires strong expertise in cloud-based architectures, database management, and distributed data processing, along with the ability to build reliable, secure, and scalable systems. As a key contributor, you will play an important role in shaping the data infrastructure, improving performance, and enabling data-driven innovation across the organization.

Requirements

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and big data processing systems
  • Build and optimize data architectures using AWS services for high availability and performance
  • Work with PostgreSQL for data storage, querying, and performance tuning
  • Process and analyze large datasets efficiently to support analytics and reporting needs
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions
  • Ensure data quality, integrity, and security across all data systems
  • Optimize data workflows for performance, scalability, and cost efficiency
  • Implement ETL/ELT processes for seamless data ingestion and transformation
  • Monitor and troubleshoot data pipelines to ensure reliability and uptime
  • Work with distributed processing frameworks like Apache Spark (good to have)
  • Support hybrid environments by integrating cloud and on-premise data systems
  • Document data architecture, workflows, and best practices for future scalability

What Makes You a Great Fit

  • 4+ years of experience in big data development, data engineering, or related roles
  • Strong hands-on experience with AWS cloud services for data processing and storage
  • Expertise in PostgreSQL including query optimization and database design
  • Experience working with large-scale datasets and distributed systems
  • Familiarity with Apache Spark and big data processing frameworks (preferred)
  • Exposure to on-premise data environments and hybrid architectures is a plus
  • Strong problem-solving and analytical skills with attention to detail
  • Proficiency in building efficient ETL pipelines and data workflows
  • Ability to work in fast-paced environments and manage multiple priorities
  • Good communication skills and ability to collaborate with technical and non-technical teams
  • Proactive mindset with a focus on performance optimization and continuous improvement

Skills

PostgreSQL · Security · AWS · Data Pipelines · Spark · Data Quality · Data Processing · Hybrid · On-Premise · ETL/ELT · Data Architectures