Case Study

Salesforce Integration Platform

Project Overview

Project Type

Real-time data synchronization system

Timeline

Built over 4-5 months (solo project)

My Role

Full-stack developer

Technologies

Next.js, TypeScript, PostgreSQL, Salesforce APIs

The Challenge: A business needed their Salesforce data accessible in custom dashboards without the slow load times of hitting the Salesforce API directly on every page load.

The Problem

What They Were Dealing With:

Slow Performance
  • Pages taking 20-30 seconds to load
  • Multiple API calls before showing data
  • Complex queries especially slow
Manual Data Entry
  • ~20 hours/week manually copying data
  • High risk of human error
  • No automatic sync between systems
Limited Visibility
  • No real-time updates
  • API rate limits made polling impractical
High Costs
  • Paying for third-party batch update services
  • Monthly service fees becoming significant

What I Built

A Sync Layer Between Salesforce and Custom Systems

I built a system that keeps a local PostgreSQL database in sync with Salesforce in near real time, allowing custom dashboards to query local data instead of hitting Salesforce on every page load.

Core Components

1. Real-Time Data Synchronization

The system subscribes to Salesforce's Change Data Capture (CDC) events, so changes made in Salesforce flow to the local database within seconds.

What made this challenging:

This was my first time working with event streaming. I had to learn about eventual consistency, replay IDs, and race conditions where events arrive out of order.
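
The subscription code isn't reproduced here, but the pattern looks roughly like this, assuming jsforce as the streaming client (the case study doesn't name one); the channel name and the three helper functions are placeholders:

```typescript
// Minimal sketch of the CDC subscription loop, assuming the jsforce
// library. The helpers below are hypothetical stand-ins.
import jsforce from "jsforce";

declare function applyChangeToPostgres(payload: unknown): Promise<void>;
declare function loadLastReplayId(channel: string): Promise<number>;
declare function saveReplayId(channel: string, replayId: number): Promise<void>;

const conn = new jsforce.Connection({
  instanceUrl: process.env.SF_INSTANCE_URL,
  accessToken: process.env.SF_ACCESS_TOKEN,
});

async function subscribeToAccountChanges(): Promise<void> {
  const channel = "/data/AccountChangeEvent";

  // Resuming from the last stored replay ID means a restart neither
  // misses events nor processes them twice.
  const lastReplayId = await loadLastReplayId(channel);
  const replayExt = new jsforce.StreamingExtension.Replay(channel, lastReplayId);
  const client = conn.streaming.createClient([replayExt]);

  client.subscribe(channel, async (message: any) => {
    await applyChangeToPostgres(message.payload); // upsert or delete locally
    await saveReplayId(channel, message.event.replayId);
  });
}
```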

2. Bulk Data Operations

For syncing thousands of records, I used Salesforce's Bulk API v2 with batch processing.

What I learned:

Working with rate-limited APIs taught me about bounded concurrency and respecting external API constraints.
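
The job code isn't shown here, but a Bulk API v2 ingest boils down to three REST calls: create a job, upload a CSV, and mark the upload complete. A rough sketch (the instance URL, token handling, API version, and external ID field are placeholders):

```typescript
// Sketch of a Bulk API v2 ingest job: create → upload CSV → close.
// INSTANCE_URL, ACCESS_TOKEN, and the external ID field are assumptions.
const INSTANCE_URL = "https://example.my.salesforce.com";
const ACCESS_TOKEN = process.env.SF_ACCESS_TOKEN!;
const API = `${INSTANCE_URL}/services/data/v58.0`;
const authJson = {
  Authorization: `Bearer ${ACCESS_TOKEN}`,
  "Content-Type": "application/json",
};

async function bulkUpsertAccounts(csv: string): Promise<string> {
  // 1. Create the ingest job.
  const job = (await fetch(`${API}/jobs/ingest`, {
    method: "POST",
    headers: authJson,
    body: JSON.stringify({
      object: "Account",
      operation: "upsert",
      externalIdFieldName: "External_Id__c",
      contentType: "CSV",
    }),
  }).then((r) => r.json())) as { id: string };

  // 2. Upload the CSV payload — thousands of records in one request.
  await fetch(`${API}/jobs/ingest/${job.id}/batches`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}`, "Content-Type": "text/csv" },
    body: csv,
  });

  // 3. Close the job; Salesforce processes it asynchronously.
  await fetch(`${API}/jobs/ingest/${job.id}`, {
    method: "PATCH",
    headers: authJson,
    body: JSON.stringify({ state: "UploadComplete" }),
  });

  return job.id; // poll GET /jobs/ingest/{id} until state is JobComplete
}
```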

3. Optimized Database Queries

Used PostgreSQL CTEs, window functions, and strategic indexing.

Results:

Queries that took 20-30 seconds now return in under 2 seconds.
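
The production queries are more involved, but this invented example shows the general shape: a CTE plus a window function, executed through Drizzle's raw `sql` helper (the `opportunities` table and its columns are made up for illustration):

```typescript
// Illustrative CTE + window-function query via Drizzle's sql helper.
// The schema here is hypothetical.
import { drizzle } from "drizzle-orm/node-postgres";
import { sql } from "drizzle-orm";
import { Pool } from "pg";

const db = drizzle(new Pool({ connectionString: process.env.DATABASE_URL }));

async function latestOpportunityPerAccount() {
  // One local round trip replaces what used to be many Salesforce calls.
  // An index on (account_id, close_date DESC) keeps the window cheap.
  return db.execute(sql`
    WITH ranked AS (
      SELECT account_id, name, amount,
             ROW_NUMBER() OVER (
               PARTITION BY account_id ORDER BY close_date DESC
             ) AS rn
      FROM opportunities
    )
    SELECT account_id, name, amount
    FROM ranked
    WHERE rn = 1
  `);
}
```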

4. Production Monitoring

Added comprehensive logging for every API call, CDC event, and database operation.

Why this mattered:

When issues came up, I could see exactly what happened, when, and why, which made debugging much more manageable.
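
As a rough sketch of what that looks like, here is a structured-logging setup using pino (an illustrative choice; the case study doesn't name the logging library):

```typescript
// Minimal structured-logging sketch; pino is an assumed library choice.
import pino from "pino";

const logger = pino({ level: "info" });

// Tag every CDC event with enough context to reconstruct what happened.
function logCdcEvent(channel: string, replayId: number, changeType: string) {
  logger.info({ channel, replayId, changeType }, "cdc.event.received");
}

// Wrap API calls so duration and failures are always recorded.
async function timed<T>(op: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    logger.info({ op, ms: Date.now() - start }, "api.call.ok");
    return result;
  } catch (err) {
    logger.error({ op, ms: Date.now() - start, err }, "api.call.failed");
    throw err;
  }
}
```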

Challenges I Encountered

Race Conditions with CREATE Events

The problem: the CDC event can fire before the record is fully available via the REST API.

My first attempt: wait a fixed 3 seconds (sometimes not enough, sometimes too long).

Better solution: Retry logic with exponential backoff.
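
A minimal sketch of that retry pattern (attempt counts and delays are illustrative):

```typescript
// Retry with exponential backoff for CREATE events that arrive before
// the record is queryable. Limits and delays are illustrative.
async function fetchWithBackoff<T>(
  fetchRecord: () => Promise<T | null>,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const record = await fetchRecord();
    if (record !== null) return record;
    // 500ms, 1s, 2s, 4s, ... plus jitter to avoid thundering herds.
    const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw new Error(`Record not available after ${maxAttempts} attempts`);
}
```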

Handling API Rate Limits

I learned to use Bulk API v2 (2,000 records in one operation instead of 2,000 separate calls) and to process batches in parallel with bounded concurrency.
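
A hand-rolled sketch of that bounded concurrency (the worker count is arbitrary; a library like p-limit does the same job):

```typescript
// Process items in parallel, but never more than `limit` at a time,
// to stay under Salesforce's concurrent-request ceilings.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  async function run(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: the increment happens synchronously
      results[i] = await worker(items[i]);
    }
  }

  // Start `limit` workers that pull from a shared index.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
  return results;
}

// e.g. await mapWithConcurrency(batches, 4, submitBatch);
```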

Results

Performance
  • Page loads: 20-30s → under 2s
  • Real-time updates within 2-3 seconds
Cost & Time Savings
  • 93% reduction in service costs
  • 20+ hours/week saved

Technology Stack

Frontend
  • Next.js 14, TypeScript
  • TanStack Table, Tailwind CSS
  • Socket.IO Client
Backend & Database
  • Node.js, Socket.IO
  • PostgreSQL, Drizzle ORM
  • Salesforce APIs (REST, Bulk v2, CDC)

Honest Reflection

What Went Well
  • Core functionality has run reliably in production for several months
  • Performance improvements exceeded expectations
  • Comprehensive logging made debugging manageable
What I'd Do Differently
  • Design with error handling from the start (added reactively)
  • More upfront planning on data consistency edge cases
  • Write tests alongside code instead of after
  • Ship smaller MVP faster and iterate
Current Limitations
  • No conflict resolution for simultaneous updates (last write wins)
  • Mostly one-way sync (Salesforce → PostgreSQL)
  • Limited offline handling for extended API outages

Interested in Similar Work?

If you have a project involving system integrations, real-time data sync, or API work, I'd be happy to discuss it. I work on weekend projects (2 months max) for small to medium businesses.