From the desk of Sudarshana — 25+ years in the jewellery industry
When you connect multiple jewellery branches to a single database, the obvious approach is to use one set of shared tables for all branches. Branch A writes to the sales table. Branch B reads from the same sales table. The head office queries across all branches from the same tables.
This seems logical. In practice, it creates a serious problem at scale.
The core issue: In a large multi-branch network, a single heavy query from one branch — a full stocktake, a large MIS report, or a month-end reconciliation — can lock tables or consume database resources in a way that slows down every other branch simultaneously. Branch A's end-of-month report makes Branch B's billing counter unresponsive. The problem compounds as you add more branches.
I encountered this directly when working with a large jewellery network. The business had branches across multiple cities. As the network grew, the system became increasingly unpredictable. Billing counters would slow to a crawl at unexpected times. Staff couldn't identify why. Branch managers called head office blaming the internet, the computers, everything except the actual cause. The cause was database contention between branches sharing the same tables.
In a standard shared-table architecture, all branches write to and read from the same tables.
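A minimal sketch of that shared layout, using SQLite for illustration (the table and column names are my assumptions, not any product's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One sales table for the whole network; a branch_id column is the only
# thing separating one branch's rows from another's.
conn.execute("""
    CREATE TABLE sales (
        id        INTEGER PRIMARY KEY,
        branch_id INTEGER NOT NULL,
        item_code TEXT,
        amount    REAL
    )
""")

# Branch A and Branch B both write to the same table...
conn.execute("INSERT INTO sales (branch_id, item_code, amount) VALUES (1, 'RING-22K', 45000)")
conn.execute("INSERT INTO sales (branch_id, item_code, amount) VALUES (2, 'CHAIN-18K', 72000)")

# ...and head office queries across all branches from that same table.
total_rows = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
```

Every branch's workload lands on the same table, the same indexes, and the same underlying pages — which is exactly where the contention comes from.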
This creates several problems. A heavy query from any one branch contends for the same rows and indexes every other branch needs. Database resources — CPU, I/O, buffer cache — are pooled across all branches, so one branch's month-end report can starve another branch's billing counter. And the contention compounds with every branch added to the network.
The solution I implemented keeps a single shared database — which means one backup, one server, one connection string — but separates the operational tables per branch. Instead of one shared sales table, each branch has its own (branch_001_sales, branch_002_sales, and so on).
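In SQL terms, the per-branch layout might look like the following sketch — the branch_NNN_ naming follows the convention mentioned later in this article; the columns are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One shared database, but each branch gets its own operational tables,
# named branch_001_sales, branch_002_sales, and so on.
for branch in (1, 2, 3):
    conn.execute(f"""
        CREATE TABLE branch_{branch:03d}_sales (
            id        INTEGER PRIMARY KEY,
            item_code TEXT,
            amount    REAL
        )
    """)

# Branch 1's billing counter only ever touches branch_001_sales;
# nothing it does can contend with branch_002_sales or branch_003_sales.
conn.execute("INSERT INTO branch_001_sales (item_code, amount) VALUES ('RING-22K', 45000)")

table_names = [row[0] for row in
               conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
```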
Each branch only ever touches its own tables during normal operation. A report at Branch 1 cannot lock or slow down Branch 2. A high-volume day at Branch 3 does not affect anyone else.
The key concern with this architecture is reporting: how do you get a combined view across all branches for head office? The answer is that cross-branch reports are a different class of query — they are scheduled, non-time-critical, and do not happen during peak billing hours.
When head office needs a consolidated report, the system merges the branch tables at query time using a UNION or a view. This is a heavier query, but it runs outside peak hours, and its performance impact falls entirely on the reporting process — not on any branch's billing operation.
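A sketch of that query-time merge, again with illustrative names: each SELECT in the UNION ALL tags its rows with the branch they came from, and the consolidated report groups over the combined view.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for branch in (1, 2):
    conn.execute(f"CREATE TABLE branch_{branch:03d}_sales (item_code TEXT, amount REAL)")
conn.execute("INSERT INTO branch_001_sales VALUES ('RING-22K', 45000)")
conn.execute("INSERT INTO branch_002_sales VALUES ('CHAIN-18K', 72000)")

# Consolidated view built at query time: one SELECT per branch table,
# stitched together with UNION ALL and tagged with its branch number.
conn.execute("""
    CREATE VIEW all_sales AS
    SELECT 1 AS branch_id, item_code, amount FROM branch_001_sales
    UNION ALL
    SELECT 2 AS branch_id, item_code, amount FROM branch_002_sales
""")

# The head-office report runs against the view; its cost falls on the
# reporting process, not on any branch's billing tables individually.
report = conn.execute(
    "SELECT branch_id, SUM(amount) FROM all_sales GROUP BY branch_id"
).fetchall()
```

In a real deployment the view would be regenerated whenever a branch is added, so reporting code never needs to know how many branch tables exist.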
This is not on the critical path. A 30-second consolidated report is perfectly acceptable. A 3-second delay at the billing counter is not.
When a new branch joins the network, the system creates a fresh set of tables for that branch using a template. The branch starts with clean, empty tables. It does not inherit any data bloat from existing branches. Setup takes minutes rather than hours of data migration.
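A hypothetical provisioning helper shows the idea — the list of operational tables and their columns are assumptions for illustration, but the shape matches what the article describes: stamp out fresh, empty tables from a template and the branch is ready.

```python
import sqlite3

# Template of operational tables every branch gets (illustrative schema).
OPERATIONAL_TABLES = {
    "sales":     "(id INTEGER PRIMARY KEY, item_code TEXT, amount REAL)",
    "stock":     "(id INTEGER PRIMARY KEY, item_code TEXT, qty REAL)",
    "transfers": "(id INTEGER PRIMARY KEY, item_code TEXT, to_branch INTEGER)",
    "receipts":  "(id INTEGER PRIMARY KEY, item_code TEXT, from_branch INTEGER)",
}

def provision_branch(conn, branch_no):
    """Create a clean, empty set of operational tables for a new branch."""
    for name, cols in OPERATIONAL_TABLES.items():
        conn.execute(f"CREATE TABLE branch_{branch_no:03d}_{name} {cols}")

conn = sqlite3.connect(":memory:")
provision_branch(conn, 4)   # new branch starts empty; no data migration

new_tables = [row[0] for row in
              conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
```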
Shared data — customer accounts, product catalogue, gold rates, user accounts — lives in shared tables that are read-heavy and write-light. These are the tables all branches reference but rarely lock. Customer records are updated infrequently; they are read constantly. This access pattern is ideal for shared tables.
Operational data — sales, stock movements, cash transactions, scheme payments — is branch-specific. It belongs in branch-isolated tables.
When stock moves from Branch 1 to Branch 2, the system writes a dispatch record to branch_001_transfers and a receipt record to branch_002_receipts. Each side of the transaction is local to its branch. The in-transit register queries both, but again, this is a low-frequency operation that does not affect billing.
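The transfer flow above can be sketched as follows. Table and column details are assumptions following the branch_NNN_ convention; the point is that each side of the movement is a single local write, and only the in-transit register reads both tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branch_001_transfers (item_code TEXT, to_branch INTEGER)")
conn.execute("CREATE TABLE branch_002_receipts  (item_code TEXT, from_branch INTEGER)")

# Branch 1 dispatches: one local write to its own transfers table.
conn.execute("INSERT INTO branch_001_transfers VALUES ('BANGLE-22K', 2)")

def in_transit(conn):
    # In-transit register: dispatched from Branch 1 but not yet
    # receipted at Branch 2. This is the only query touching both sides.
    return conn.execute("""
        SELECT t.item_code FROM branch_001_transfers t
        WHERE NOT EXISTS (SELECT 1 FROM branch_002_receipts r
                          WHERE r.item_code = t.item_code)
    """).fetchall()

pending = in_transit(conn)   # the bangle shows while the stock travels

# Branch 2 receives: another single local write, to its own table.
conn.execute("INSERT INTO branch_002_receipts VALUES ('BANGLE-22K', 1)")
cleared = in_transit(conn)   # register is empty once the receipt is booked
```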
Billing counter performance became consistent regardless of what any other branch was doing. Heavy reports at one branch had zero impact on billing at any other branch.
System reliability improved dramatically. The unpredictable slowdowns that staff blamed on internet connections or hardware disappeared entirely.
Data isolation made troubleshooting straightforward. A data issue at Branch 3 could be diagnosed and resolved without touching any other branch's data.
Adding new branches became a clean, predictable process. Each new branch started with its own fresh tables and did not inherit any performance baggage from existing branches.
The network has operated without major performance incidents since implementation. This architecture is now the foundation of how Jwellex handles multi-branch deployments.
Most jewellery software — particularly systems adapted from generic retail software or built without deep multi-branch experience — uses shared table architecture because it is simpler to build. The performance problems only appear at scale, which means by the time the issue surfaces, the business is already dependent on the system and migration is difficult.
Building branch-isolated tables from the start requires more architectural thought and slightly more complex query logic for consolidated reports. But the operational benefits at scale are not marginal — they are the difference between a network that works reliably and one that becomes progressively harder to manage as it grows.
This architecture is built into Jwellex from the ground up. It is why Jwellex multi-branch networks remain fast and reliable as they scale — and why the system has never experienced the branch-contention problems that plague shared-table architectures.