A client needed live reporting dashboards over a WordPress site running MemberPress with several custom fields. The off-the-shelf MemberPress API could not return the custom-field data, and the WordPress meta-table layout made direct queries slow and brittle. The fix was a custom Node.js API plus a separate PostgreSQL database, built so that the reporting layer is fully isolated from the WordPress production database.
This post describes the engineering. It is also a worked example of the pattern I use today inside larger pipelines: keep your reporting/analytics surface separate from the system of record.
The constraint
The business needed a single dashboard view across membership data: members per state, members worldwide, types of members, growth trends. The data lived in WordPress, and WordPress stored most of it in the wp_usermeta table.
The wp_usermeta table is a key-value store. For one user, you might see something like:
| user_id | meta_key | meta_value |
|---|---|---|
| 56 | role | member |
| 56 | occupation | engineer |
| 56 | customfields | a:6:{s:7:"general";s:2:"on";s:11:"engineering" ...} |
The third row is the problem. MemberPress (and many other WordPress plugins) writes its custom-fields blob as a PHP-serialised array in a single meta_value field. You cannot index inside it. You cannot query it cleanly. You can deserialise it in PHP at read time, but doing so on every report request is expensive.
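To make the format concrete, here is a minimal TypeScript sketch of what deserialising that blob involves on the Node.js side. It handles only the flat string-array shape shown above; a production build should lean on a maintained PHP-unserialize package and deal with nested arrays, integers, booleans, and multi-byte strings.

```typescript
// Minimal parser for the flat PHP-serialised shape shown above:
//   a:N:{s:len:"key";s:len:"value"; ...}
// Illustrative only -- a real build should use a maintained
// PHP-unserialize package.
function parsePhpStringArray(blob: string): Record<string, string> {
  const result: Record<string, string> = {};
  // Match every s:<byte-length>:" token in order.
  const token = /s:(\d+):"/g;
  const parts: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = token.exec(blob)) !== null) {
    const len = Number(m[1]);
    // PHP lengths are byte counts; this sketch assumes ASCII-safe values.
    const start = m.index + m[0].length;
    parts.push(blob.slice(start, start + len));
    token.lastIndex = start + len + 2; // skip the closing quote and semicolon
  }
  // Tokens alternate key, value, key, value, ...
  for (let i = 0; i + 1 < parts.length; i += 2) {
    result[parts[i]] = parts[i + 1];
  }
  return result;
}
```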
The MemberPress REST API also did not expose the custom-field values as queryable fields, so the business could not build a dashboard against the API alone.
I am not going to argue with the WordPress engineers' design choice - the key/value approach gives plugins a stable extension point. But it is not a query surface, and it is not a reporting surface. Building a dashboard over it requires a layer.
The shape of the answer
Three components:
- A custom Node.js API server I built for the client. It owns the reporting queries.
- A read-only path back into WordPress via the WordPress REST API. The reporting layer never writes to WordPress.
- A separate PostgreSQL database that holds the de-serialised, column-shaped reporting data, partitioned for fast lookups.
Operationally:
- WordPress + MemberPress + MySQL stays exactly as it was. No plugin mods. No schema mods.
- The Node.js service reads through the REST API, joins what it needs in memory, deserialises the custom-fields blob into proper columns, and writes the result into PostgreSQL (a sketch of the read step follows this list).
- Dashboards query Postgres, not WordPress. The MySQL production database is not hit by reporting traffic.
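Here is roughly what the read step looks like, assuming Node 18+ (global fetch) and an application password for auth. The endpoint and pagination conventions are standard WP REST API; the MemberPress-specific reads are extra calls of the same shape.

```typescript
// Read-only pull of WordPress users, page by page, via the core REST API.
// per_page/page and the X-WP-TotalPages header are standard WP REST API
// pagination; the auth mechanism is deployment-specific.
async function fetchAllUsers(baseUrl: string, authHeader: string): Promise<unknown[]> {
  const users: unknown[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(
      `${baseUrl}/wp-json/wp/v2/users?per_page=100&page=${page}&context=edit`,
      { headers: { Authorization: authHeader } }
    );
    if (!res.ok) throw new Error(`WP REST API returned ${res.status}`);
    users.push(...((await res.json()) as unknown[]));
    const totalPages = Number(res.headers.get("X-WP-TotalPages") ?? "1");
    if (page >= totalPages) break; // the header tells us when the pull is done
  }
  return users;
}
```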
If WordPress goes down, the reporting dashboards stay up (against the last warehouse refresh). If the warehouse is removed, WordPress is unchanged. The two systems cannot corrupt each other.
Technical detail (the join + deserialise step)
The interesting part is the in-memory join. The Node.js service constructs a temporary table that holds, per user, the columns the dashboard needs:
- Identity columns from wp_users
- Standard meta from wp_usermeta (role, occupation, etc.)
- Membership columns from the MemberPress tables
- Post columns where relevant
- Per-custom-field columns generated by deserialising the PHP-serialised blob and using each key as a column name and each value as the row value
For a serialised blob like a:3:{s:4:"city";s:9:"Cape Town";s:5:"sized";s:6:"medium";s:6:"member";s:3:"yes";}, the join produces three real columns (city, sized, member) on the row for that user.
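In code, the flatten step looks roughly like this, reusing a parse function like the sketch earlier. The input shape is an assumption for illustration, not the literal API response:

```typescript
// Flatten one joined user into a column-shaped row: fixed identity
// columns plus one dynamic column per key found in the blob.
function toWarehouseRow(user: { id: number; email: string; customfields: string }) {
  const custom = parsePhpStringArray(user.customfields); // sketch above
  return {
    wp_user_id: user.id,
    email: user.email,
    ...custom, // e.g. city: "Cape Town", sized: "medium", member: "yes"
  };
}
```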
I added two synthetic id columns per row to enforce uniqueness on the warehouse side without depending on a schema rule, so re-imports stay idempotent.
Once the temp table is built in memory, the Node.js service writes it to a PostgreSQL table with columns for every field the dashboard needs. Now the dashboard can query indexed columns (SELECT sized, count(*) FROM members WHERE city = 'Cape Town' GROUP BY sized), which is fast and predictable.
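A sketch of that write with the idempotency guard. Because uniqueness is enforced by the service rather than a schema rule, the sketch clears any prior row for the synthetic key before inserting, inside one transaction; every name here is hypothetical.

```typescript
import { Pool } from "pg";

const warehouse = new Pool({ connectionString: process.env.WAREHOUSE_DSN });

// Idempotent write without a schema-level unique constraint: delete any
// prior row for the same synthetic key, then insert the fresh one.
async function writeMember(row: {
  wp_user_id: number; email: string;
  city?: string; sized?: string; member?: string;
}): Promise<void> {
  const importKey = `wp:${row.wp_user_id}`; // synthetic id derived from the source
  const client = await warehouse.connect();
  try {
    await client.query("BEGIN");
    await client.query("DELETE FROM members WHERE import_key = $1", [importKey]);
    await client.query(
      `INSERT INTO members (import_key, wp_user_id, email, city, sized, member)
       VALUES ($1, $2, $3, $4, $5, $6)`,
      [importKey, row.wp_user_id, row.email,
       row.city ?? null, row.sized ?? null, row.member ?? null]
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```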
Why PostgreSQL specifically
Two properties I wanted that MySQL does not deliver as cleanly:
- Native partitioning. I split the warehouse by year and by month. A dashboard widget asking for "members joined in March 2024" hits a single partition, not the whole table (see the DDL sketch after this list). As the data grows, this is the difference between "fast forever" and "fast for the first year."
- Read isolation. The reporting workload (heavy aggregate queries) does not touch the production MySQL that WordPress depends on. WordPress page loads stay snappy regardless of what the dashboards are doing.
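The partitioning DDL looks roughly like this. Monthly partitions are shown; the same pattern splits by year, and the column names are hypothetical.

```typescript
import { Pool } from "pg";

const warehouse = new Pool({ connectionString: process.env.WAREHOUSE_DSN });

// One-time warehouse DDL. Monthly range partitions mean a
// "members joined in March 2024" widget scans a single partition
// instead of the whole table.
async function migrate(): Promise<void> {
  await warehouse.query(`
    CREATE TABLE members (
      import_key  text NOT NULL,
      wp_user_id  integer NOT NULL,
      email       text,
      city        text,
      sized       text,
      member      text,
      joined_at   date NOT NULL
    ) PARTITION BY RANGE (joined_at);

    CREATE TABLE members_2024_03 PARTITION OF members
      FOR VALUES FROM ('2024-03-01') TO ('2024-04-01');
    -- one partition per month, created up front or by the refresh job
  `);
}
```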
What this is in the pattern catalogue
I keep this post live because it is one of the cleanest worked examples of a pattern that shows up across most of my work:
- Source system stays a source system. No additional load, no schema changes, no plugin internals touched.
- A second database, with a schema you control, holds the question-answering structure.
- A custom service mediates between them, on a read-only path.
This is the same engineering shape you will find in the WordPress-to-business-systems API integration work and in the third-party API integration work I document elsewhere on this site. It is also the shape that sits underneath the SEO AI Toolbox case study - different problem class, same engineering discipline.
When to choose this over a WordPress plugin
A custom plugin would have been faster to build for the simple case. I did not pick that path because:
- The reporting needs were going to grow. Plugins die when their data needs outgrow what fits in the WordPress database without slowing it down.
- The client did not want to be tied to a single WordPress install. If they ever migrate the front end, the reporting warehouse stays.
- The dashboard team needed a stable, read-only surface they could trust. Plugins have a habit of breaking during WordPress core upgrades.
The Node + Postgres path costs more to set up. It pays back the day the business needs to do anything serious with the data.
What I would change in 2026
The 2023 build is still in production. If I rebuilt it today I would:
- Use the WordPress REST API behind a typed TypeScript client instead of raw fetch (sketched after this list)
- Push the in-memory join into a queue-driven worker so the warehouse refresh runs without blocking the API's event loop
- Add a light read-replica in front of Postgres for very high-traffic dashboard scenarios
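The typed-client upgrade is small. A sketch, where everything beyond the WP core endpoint shape is an assumption about this particular install:

```typescript
// A thin typed wrapper over the WP REST API -- the 2026 version of
// "raw fetch". The WpUser fields are illustrative.
interface WpUser {
  id: number;
  email: string;
  meta?: Record<string, unknown>;
}

class WpClient {
  constructor(private baseUrl: string, private authHeader: string) {}

  private async get<T>(path: string): Promise<T> {
    const res = await fetch(`${this.baseUrl}/wp-json${path}`, {
      headers: { Authorization: this.authHeader },
    });
    if (!res.ok) throw new Error(`WP API ${res.status} for ${path}`);
    return (await res.json()) as T;
  }

  usersPage(page: number): Promise<WpUser[]> {
    return this.get<WpUser[]>(`/wp/v2/users?per_page=100&page=${page}&context=edit`);
  }
}
```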
None of those are corrections to the original design. They are scale upgrades for when the dashboard traffic grows.
If you have a WordPress install whose API does not give you what you need
Three engagement shapes I see most often:
- Reporting layer. This post. WordPress as source, custom service + Postgres as the analytics surface.
- Bidirectional integration. WordPress as one of several systems. Custom service synchronises specific fields between WordPress and a CRM, ERP, or accounting system on a defined cadence.
- Headless surface. WordPress as a content/admin source, with the public site rendered by Next.js (the kind of build that scores 99–100 on PageSpeed and SEO; see the Next.js stack post for the underlying decision).
Which shape fits is decided in the consultation. Get in touch if your WordPress install needs more than a plugin can give it. If it is part of a larger AI/automation engagement, see /services/ai-app-development for the broader scope.