Salesforce Integration Platform
Project Type
Real-time data synchronization system
Timeline
Built over 4-5 months (solo project)
My Role
Full-stack developer
Technologies
Next.js, TypeScript, PostgreSQL, Salesforce APIs
The Challenge: A business needed its Salesforce data accessible in custom dashboards without the slow load times of hitting the Salesforce API directly on every page load.
The Problem
What They Were Dealing With:
- Pages taking 20-30 seconds to load
- Multiple API calls before showing data
- Complex queries especially slow
- ~20 hours/week manually copying data
- High risk of human error
- No automatic sync between systems
- No real-time updates
- API rate limits made polling impractical
- Paying for third-party batch update services
- Monthly service fees becoming significant
What I Built
A Sync Layer Between Salesforce and Custom Systems
I built a system that keeps a local PostgreSQL database in sync with Salesforce in real-time, allowing custom dashboards to query local data instead of hitting Salesforce on every page load.
Core Components
Real-Time Sync via Change Data Capture
Using Salesforce's Change Data Capture (CDC) API, changes flow to the local database within seconds.
What made this challenging:
This was my first time working with event streaming. I had to learn about eventual consistency, replay IDs, and race conditions where events arrive out of order.
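To make the ordering problem concrete, here is a minimal sketch of the write path, assuming a `pg` connection pool, a simplified change-event shape, and hypothetical `accounts` and `sync_state` tables (none of these names are from the actual project). The commit timestamp guards against applying a stale, out-of-order event, and the replay ID is persisted so the subscription can resume where it left off after a restart.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // reads connection config from PG* env vars

// Simplified shape of a CDC message; real payloads carry a full
// ChangeEventHeader plus only the fields that changed.
interface AccountChange {
  replayId: number;
  recordId: string;
  changeType: "CREATE" | "UPDATE" | "DELETE";
  commitTimestamp: number; // epoch millis from the change event header
  fields: Record<string, unknown>;
}

// Upsert that refuses to apply an event older than what is already
// stored -- the guard against out-of-order delivery.
async function applyChange(change: AccountChange): Promise<void> {
  if (change.changeType === "DELETE") {
    await pool.query("DELETE FROM accounts WHERE sf_id = $1", [change.recordId]);
  } else {
    await pool.query(
      `INSERT INTO accounts (sf_id, data, sf_committed_at)
       VALUES ($1, $2, to_timestamp($3::bigint / 1000.0))
       ON CONFLICT (sf_id) DO UPDATE
         SET data = EXCLUDED.data,
             sf_committed_at = EXCLUDED.sf_committed_at
         WHERE accounts.sf_committed_at <= EXCLUDED.sf_committed_at`,
      [change.recordId, JSON.stringify(change.fields), change.commitTimestamp]
    );
  }
  // Persist the replay ID after a successful write so the subscription
  // can resume from the last applied event instead of replaying everything.
  await pool.query(
    `INSERT INTO sync_state (channel, replay_id)
     VALUES ('AccountChangeEvent', $1)
     ON CONFLICT (channel) DO UPDATE
       SET replay_id = GREATEST(sync_state.replay_id, EXCLUDED.replay_id)`,
    [change.replayId]
  );
}
```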
Bulk Sync via Bulk API v2
For syncing thousands of records at once, I used Salesforce's Bulk API v2 with batch processing.
What I learned:
Working with rate-limited APIs taught me about bounded concurrency and respecting external API constraints.
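As an illustration, a bounded-concurrency helper can be as small as the sketch below; the helper name and the limit of four concurrent requests are my own choices, not details from the project.

```typescript
// Run tasks with at most `limit` in flight at once -- enough parallelism
// to keep throughput up without tripping external API rate limits.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: no await between read and increment
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}

// Usage: sync many batches, but keep only 4 requests in flight.
// await mapWithConcurrency(batches, 4, (batch) => syncBatch(batch));
```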
Query Optimization
To keep dashboard queries fast against the local database, I used PostgreSQL CTEs, window functions, and strategic indexing.
Results:
Queries that took 20-30 seconds now return in under 2 seconds.
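To show the shape of such a query, here is a hedged sketch against a hypothetical `opportunities` table (the schema and column names are illustrative, not the project's):

```typescript
import { Pool } from "pg";

const pool = new Pool();

// Hypothetical dashboard query: the latest opportunity per account plus
// each account's total pipeline, computed in one round trip to Postgres
// instead of many Salesforce API calls.
const latestOpportunities = `
  WITH ranked AS (
    SELECT account_id, name, amount, close_date,
           ROW_NUMBER() OVER (
             PARTITION BY account_id ORDER BY close_date DESC
           ) AS rn,
           SUM(amount) OVER (PARTITION BY account_id) AS account_total
    FROM opportunities
  )
  SELECT account_id, name, amount, close_date, account_total
  FROM ranked
  WHERE rn = 1
  ORDER BY account_total DESC
  LIMIT 50
`;

export async function topAccounts() {
  // A composite index on (account_id, close_date DESC) keeps the window
  // scan cheap -- the "strategic indexing" part.
  const { rows } = await pool.query(latestOpportunities);
  return rows;
}
```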
Observability
I added comprehensive logging for every API call, CDC event, and database operation.
Why this mattered:
When issues came up, I could see exactly what happened, when, and why, which made debugging much more manageable.
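A minimal sketch of that kind of instrumentation (the wrapper and its JSON shape are illustrative, not the project's actual logger):

```typescript
type LogContext = Record<string, unknown>;

// One JSON line per event makes logs easy to grep and to ship to any
// log aggregator later.
function log(level: "info" | "error", event: string, ctx: LogContext): void {
  console.log(
    JSON.stringify({ ts: new Date().toISOString(), level, event, ...ctx })
  );
}

// Wrap any external call so its context, duration, and outcome are
// recorded whether it succeeds or throws.
async function withLogging<T>(
  event: string,
  ctx: LogContext,
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    log("info", event, { ...ctx, ms: Date.now() - start, ok: true });
    return result;
  } catch (err) {
    log("error", event, {
      ...ctx,
      ms: Date.now() - start,
      ok: false,
      error: String(err),
    });
    throw err;
  }
}

// Usage: await withLogging("sf.rest.query", { soql }, () => runQuery(soql));
```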
Challenges I Encountered
Race Condition: CDC Events vs. API Availability
The problem: a CDC event can arrive before the record it describes is fully available via the REST API.
My first attempt: wait a fixed 3 seconds (sometimes not enough, sometimes wastefully long).
Better solution: Retry logic with exponential backoff.
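A sketch of what that retry loop can look like; the attempt count and delays are illustrative, and the `fetchRecord` callback is a stand-in for whatever call checks the record:

```typescript
// Retry with exponential backoff plus jitter: starts fast for the common
// case and backs off when Salesforce needs longer to surface the record.
async function fetchWithBackoff<T>(
  fetchRecord: () => Promise<T | null>, // resolves null while not yet visible
  maxAttempts = 6,
  baseDelayMs = 250
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const record = await fetchRecord();
    if (record !== null) return record;
    // 250ms, 500ms, 1s, 2s, 4s, ... with a little jitter to avoid
    // synchronized retries across many events.
    const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw new Error(`Record not visible after ${maxAttempts} attempts`);
}
```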
Bulk Sync Throughput
The second challenge was throughput when syncing thousands of records: I learned to use Bulk API v2 (2,000 records in one operation instead of 2,000 separate calls) and to process batches in parallel with bounded concurrency, using the same pattern sketched above.
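For reference, a hedged sketch of a Bulk API 2.0 query job using the documented `/jobs/query` endpoints; the API version, polling interval, and helper shape are my assumptions, and auth is assumed to be handled elsewhere:

```typescript
// Create a Bulk API 2.0 query job, poll until it completes, then download
// the full result set as CSV in a single call.
async function bulkQuery(
  instanceUrl: string,
  accessToken: string,
  soql: string
): Promise<string> {
  const headers = {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  };

  // 1. Create the query job.
  const createRes = await fetch(`${instanceUrl}/services/data/v59.0/jobs/query`, {
    method: "POST",
    headers,
    body: JSON.stringify({ operation: "query", query: soql }),
  });
  const { id } = (await createRes.json()) as { id: string };

  // 2. Poll until the job finishes.
  while (true) {
    const statusRes = await fetch(
      `${instanceUrl}/services/data/v59.0/jobs/query/${id}`,
      { headers }
    );
    const { state } = (await statusRes.json()) as { state: string };
    if (state === "JobComplete") break;
    if (state === "Failed" || state === "Aborted") {
      throw new Error(`Bulk query job ${id} ended in state ${state}`);
    }
    await new Promise((r) => setTimeout(r, 2000));
  }

  // 3. Download the results (thousands of rows, one request).
  const resultsRes = await fetch(
    `${instanceUrl}/services/data/v59.0/jobs/query/${id}/results`,
    { headers }
  );
  return resultsRes.text();
}
```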
Results
- Page loads: 20-30s → under 2s
- Real-time updates within 2-3 seconds
- 93% reduction in service costs
- 20+ hours/week saved
Technology Stack
Frontend:
- Next.js 14, TypeScript
- TanStack Table, Tailwind CSS
- Socket.IO Client
Backend:
- Node.js, Socket.IO
- PostgreSQL, Drizzle ORM
- Salesforce APIs (REST, Bulk v2, CDC)
Honest Reflection
What went well:
- Core functionality has run reliably in production for several months
- Performance improvements exceeded expectations
- Comprehensive logging made debugging manageable
What I'd do differently:
- Design with error handling from the start (it was added reactively)
- Do more upfront planning on data consistency edge cases
- Write tests alongside the code instead of after
- Ship a smaller MVP faster and iterate
Known limitations:
- No conflict resolution for simultaneous updates (last write wins)
- Mostly one-way sync (Salesforce → PostgreSQL)
- Limited offline handling for extended API outages