Engineering

Carbon credit approval workflows in SuiteCRM: a technical guide.

How to architect a methodology-gated project approval pipeline in SuiteCRM — intake, reviewer routing, offset volume computation, and an audit trail that holds up in a verification review.

Apr 23, 2026 · 13 min read

TL;DR — the architecture in four moves:

  • Model each methodology family (forestry, cookstoves, renewables, agriculture, waste) as its own SuiteCRM module — first-class schemas, not conditional fields on a generic Project record.
  • Share a single approval state machine across modules — intake, routing, review, developer response, approval gate, credited — with methodology-specific transitions.
  • Wire SuiteCRM ↔ ClickUp bi-directionally: the CRM is authoritative on methodology-regulated fields (status, reviewer, offset volume); ClickUp owns PM-side fields (due date, task notes).
  • Compute offset volumes from methodology formulas at the approval gate; write every change to an append-only audit trail with actor, reason, and before/after state.

Carbon credit registries run on a deceptively simple loop: a project developer submits, a reviewer evaluates against a methodology, an approval decision is recorded, credits are issued. Everyone familiar with the voluntary carbon market knows it is never that clean in practice.

Methodologies differ sharply — forestry projects ask different questions of a reviewer than cookstove projects or grid-renewable projects. Reviewer pools are specialised. Project-by-project context has to survive years of back-and-forth. And the audit trail is not optional — it is the product.

This post is the architecture we implemented for a nonprofit operating one of the world's largest voluntary carbon standards. It is a technical walkthrough — what we modelled as modules, where the automations fire, how we integrated with ClickUp, and how the system stayed auditable end-to-end. If you are running or building a registry, or evaluating what a CRM has to look like to carry ESG-grade approval workflows, this is the blueprint.

Building a registry or ESG operations platform? We specialise in methodology-driven approval workflows on self-hosted SuiteCRM. Book a 30-minute architecture review and we'll tell you where the friction is.

Why a generic project-management tool isn't enough

The registry's initial stack was reasonable on paper: ClickUp for internal project management, email for reviewer handoffs, Google Sheets for offset volume computation. Each tool did its job. The problem was that a single project spanned all three, and no tool owned the project.

The practical failures looked like:

  • A reviewer asking, “What was the methodology applied to this project again?” — because that field lived in a ClickUp custom field they didn't have access to.
  • A project manager reconstructing the approval history for a verification audit by scrolling through email threads six months old.
  • Offset volume numbers in the final report not matching the working spreadsheet, because the spreadsheet had been updated after the report was filed.

None of these are catastrophic in isolation. All of them, together, mean the registry is spending staff time as a human glue layer between systems. That's the cost we were hired to remove.

The design goal: CRM as system of record, PM tool stays

The first decision was strategic, not technical. Project managers liked ClickUp. Forcing them off it would have killed the rollout. So the design goal became:

SuiteCRM owns the project record. ClickUp runs the work. They stay in sync, and the CRM is authoritative.

This shapes everything downstream. The integration architecture is bi-directional sync, not data-migration. Reviewers and compliance staff live in the CRM. PMs live in ClickUp. Both see a coherent picture.

Modelling methodologies as first-class modules

A methodology in the voluntary carbon market is not just a tag — it is a protocol with required fields, validation rules, reviewer qualifications, and offset-volume formulas. Treating it as a field on a generic "Project" record flattens all that structure into guesswork.

Instead, we modelled each methodology family as its own SuiteCRM module:

  • mod_forestry_methodology — afforestation, reforestation, improved forest management
  • mod_cookstove_methodology — efficient-stove and clean-fuel adoption
  • mod_renewable_methodology — grid-connected renewables, off-grid deployments
  • mod_agriculture_methodology — soil carbon, rice management, livestock
  • mod_waste_methodology — landfill gas, wastewater treatment

Each module carried the fields the methodology actually demands. Forestry needs plot coordinates, species, stand age. Cookstoves need baseline fuel, adoption rate, leakage factors. Bundling them all into one generic schema would either miss fields or bloat the common record with irrelevant ones.
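To make the contrast concrete, here is a minimal sketch of two of those family schemas as Python dataclasses. The field names are hypothetical (the real modules live as SuiteCRM vardefs); the point is that each family declares its own required fields rather than sharing one generic Project record.

```python
from dataclasses import dataclass

# Illustrative sketches of two methodology-family schemas. Field names
# are hypothetical examples, not the production SuiteCRM field list.

@dataclass
class ForestryMethodologyRecord:
    project_id: str
    plot_coordinates: list   # list of (lat, lon) tuples per plot
    species: list            # tree species present in the stand
    stand_age_years: int

@dataclass
class CookstoveMethodologyRecord:
    project_id: str
    baseline_fuel: str       # e.g. "wood", "charcoal"
    adoption_rate: float     # fraction of distributed stoves in use
    leakage_factor: float    # methodology-defined leakage discount
```

A validation rule that only makes sense for forestry (say, minimum stand age) attaches to the forestry schema alone, instead of becoming a conditional branch on a shared table.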

Why modules, not dropdown-driven records

The alternative — a single Project module with a methodology dropdown and a thousand conditional fields — is tempting. We've seen other registries try it. It fails slowly. Every new methodology forces a schema change on every existing record. Reporting becomes an exercise in filtering. Permissions per methodology become impossible because they all sit in the same table.

Modules cost more upfront. They pay off the first time you need to add a methodology without destabilising the ones already in production.

The approval state machine

Each methodology module shared a common approval state machine, but with methodology-specific transitions:

  1. Intake — developer submits, basic eligibility check runs (geography, methodology applicability, document completeness).
  2. Routing — the system assigns a reviewer based on methodology family and current reviewer load. Methodology-family expertise is a required match.
  3. Methodology review — reviewer works through methodology-specific validation. Additionality tests, baseline scenario checks, leakage assessment. Reviewer notes and requested clarifications are logged against the project record.
  4. Developer response — if clarifications were requested, the project returns to the developer. Clarification loops are capped to prevent indefinite cycling.
  5. Approval gate — final compliance review. Offset volume estimate is computed here from methodology-specific formulas and the validated project parameters.
  6. Credited — approval recorded, volumes published, project listed in the public registry.

Each transition wrote an audit log entry — actor, timestamp, reason, before/after state. No exceptions, no "system user" mystery entries.
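The shape of that state machine, with the audit write coupled to every transition, can be sketched as follows. The transition map and audit-entry fields are illustrative, not SuiteCRM's workflow API; the invariant is that an illegal transition is rejected and a legal one always logs actor, reason, and before/after state.

```python
from datetime import datetime, timezone

# Minimal sketch of the shared approval state machine. State names follow
# the six stages above; the allowed-transition map is illustrative.
TRANSITIONS = {
    "intake": {"routing"},
    "routing": {"methodology_review"},
    "methodology_review": {"developer_response", "approval_gate"},
    "developer_response": {"methodology_review"},
    "approval_gate": {"credited", "methodology_review"},
    "credited": set(),
}

def transition(project, new_state, actor, reason, audit_log):
    old_state = project["state"]
    if new_state not in TRANSITIONS[old_state]:
        raise ValueError(f"illegal transition {old_state} -> {new_state}")
    project["state"] = new_state
    # every transition writes an audit entry -- no silent state changes
    audit_log.append({
        "actor": actor,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "before": old_state,
        "after": new_state,
        "reason": reason,
    })
```

Methodology-specific behaviour (extra validation before `approval_gate`, capped clarification loops) layers on top of this shared skeleton per module.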

Automated reviewer routing

Manual routing was the largest single source of delay in the old process. Project leads were routing by email because they knew, informally, who had capacity. That knowledge evaporated when a project lead was on leave.

We encoded the routing rules explicitly:

  • Methodology-family match — a reviewer must be qualified for the methodology family. Reviewer qualifications are stored as a related module.
  • Active load — the system tracks each reviewer's in-flight projects by methodology. Projects route to the least-loaded qualified reviewer.
  • Conflict of interest — if a reviewer has a prior engagement with the project developer, they are excluded.
  • Geography — some methodologies require regional familiarity; those are routed accordingly.

Routing happened via a SuiteCRM scheduled job plus workflow hooks. When a project entered Routing, the system picked the reviewer, assigned the record, and pushed a corresponding task into ClickUp for the reviewer's PM tracking. The reviewer saw a new card in ClickUp; the compliance team saw a routing decision logged in the CRM. Same event, two audiences, one audit trail.
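Those four rules reduce to a small pure function, sketched below. The reviewer record fields are hypothetical; in production this logic runs inside the SuiteCRM scheduled job against related modules, but the selection itself is just filter-then-min.

```python
# Hedged sketch of the routing rules. Reviewer/project field names are
# illustrative, not the production schema.
def pick_reviewer(project, reviewers):
    candidates = [
        r for r in reviewers
        if project["methodology_family"] in r["qualifications"]    # expertise match
        and project["developer_id"] not in r["prior_engagements"]  # conflict of interest
        and (project.get("region") is None
             or project["region"] in r["regions"])                 # geography, if required
    ]
    if not candidates:
        raise LookupError("no qualified reviewer available")
    # least-loaded qualified reviewer wins
    return min(candidates, key=lambda r: r["active_load"])
```

Keeping the rules in one function also means the routing decision that gets audit-logged is reproducible: given the same project and reviewer pool, the system picks the same reviewer.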

The SuiteCRM ↔ ClickUp bridge

The integration is where most registry implementations get sloppy. The trap is to treat it as a one-way push — CRM writes, ClickUp reads — which works until a PM updates a due date in ClickUp and the CRM quietly drifts.

We built it bi-directionally, using the event-driven pattern we use across all integrations:

  • CRM → ClickUp. SuiteCRM workflow hooks fire on status transitions, creating or updating the corresponding ClickUp task via the ClickUp API. Task title, description, due date, assignee, custom fields — all synced.
  • ClickUp → CRM. ClickUp webhooks fire on task status changes. A self-hosted n8n flow normalises the payload and posts it to SuiteCRM's v8 REST API. The CRM record updates, and the update is tagged with source: clickup in the audit log.
  • Conflict rules. The CRM wins on any field that is methodology-regulated (status, reviewer assignment, offset volume). ClickUp wins on PM-side fields (due date, internal notes, attachments). Documented, not implicit.
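"Documented, not implicit" can be as literal as a lookup table in the sync layer. A sketch, with illustrative field names:

```python
# Sketch of the documented conflict rules: on a sync collision, each
# field resolves to its owning system. Field lists are illustrative.
CRM_OWNED = {"status", "reviewer", "offset_volume"}    # methodology-regulated
CLICKUP_OWNED = {"due_date", "notes", "attachments"}   # PM-side

def resolve(field, crm_value, clickup_value):
    if field in CRM_OWNED:
        return crm_value
    if field in CLICKUP_OWNED:
        return clickup_value
    # a field with no declared owner is a design gap, not a coin flip
    raise KeyError(f"no documented owner for field {field!r}")
```

Raising on an unowned field is deliberate: every new synced field forces an explicit ownership decision instead of silently drifting.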

The idempotency work is where most of the engineering time goes. Every sync message carries a correlation ID; duplicate webhooks are detected and dropped; out-of-order updates are reconciled against the last-known state. Without that, you get sync storms the first time a reviewer updates a task twice in rapid succession.
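The dedup-and-ordering gate in front of the CRM update endpoint looks roughly like this. Message fields (`correlation_id`, `record_id`, `seq`) are illustrative names for the correlation ID and per-record sequence described above.

```python
# Sketch of the idempotency layer: duplicate correlation IDs are
# dropped, and stale out-of-order updates are rejected by comparing a
# per-record sequence against last-known state. Names are illustrative.
class SyncGate:
    def __init__(self):
        self.seen_ids = set()
        self.last_seq = {}  # record_id -> highest sequence applied

    def should_apply(self, msg):
        if msg["correlation_id"] in self.seen_ids:
            return False  # duplicate webhook: drop
        self.seen_ids.add(msg["correlation_id"])
        if msg["seq"] <= self.last_seq.get(msg["record_id"], -1):
            return False  # out-of-order: already superseded
        self.last_seq[msg["record_id"]] = msg["seq"]
        return True
```

In production the seen-ID set and sequence map live in durable storage with expiry, not process memory, but the decision logic is the same.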

Auto-computed offset volumes

The calculation from methodology formulas to an estimated offset volume was previously done in spreadsheets, then typed into reports. We moved the formulas into the CRM.

Each methodology module carried:

  • The input fields the formula depends on (baseline, project scenario, leakage, uncertainty)
  • The formula itself, implemented in a methodology-specific computation service
  • A versioning field — when a methodology is revised, historical projects stay computed under the methodology version they were approved under

The formula ran at two points. First, at intake, as a sanity check — was the project in the right order of magnitude? Second, at the approval gate, as the authoritative estimate that fed into the registry listing.
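The version pinning is the load-bearing part, and a small registry keyed on (family, version) captures it. The cookstove formula below is a simplified placeholder, not a real methodology equation: credited volume as (baseline minus project emissions), scaled by adoption and discounted for leakage.

```python
# Illustrative versioned formula registry. The formula body is a
# simplified placeholder, not an actual carbon methodology equation.
FORMULAS = {
    ("cookstove", "v1"): lambda p: (p["baseline_tco2e"] - p["project_tco2e"])
                                   * p["adoption_rate"]
                                   * (1 - p["leakage_factor"]),
}

def offset_volume(project):
    # historical projects stay pinned to the methodology version they
    # were approved under -- a v2 revision adds a new key, never edits v1
    key = (project["methodology_family"], project["methodology_version"])
    return FORMULAS[key](project)
```

Revising a methodology means registering `("cookstove", "v2")` alongside v1; approved projects keep computing under their pinned version.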

Reports pulled from the CRM directly. No spreadsheet copy-paste. If the methodology version got revised, the report could regenerate against the revised formula (for projects not yet approved) or stay locked (for approved projects).

Per-methodology dashboards

Once the data was structured, the dashboards wrote themselves. Each methodology family got its own dashboard:

  • Projects in each approval stage, with age-in-stage
  • Reviewer load distribution across the family
  • Estimated credited volume in pipeline by stage
  • Average time-in-stage, trended over 12 months
  • Projects stalled in clarification loops beyond a threshold

For the registry's leadership, a cross-methodology dashboard aggregated the same metrics. For the first time, the registry could see whether the bottleneck was in intake, reviewer capacity, or developer response time — and target the fix accordingly.

The audit trail

ESG-grade audit trails are not "who logged in when." They are: for every material decision in a project's lifecycle, who made it, when, why, and what the before-and-after state was. Five years after a project is approved, the registry must be able to reconstruct the decision.

We implemented this as a cross-cutting audit module. Every write to a methodology record — via UI, via API, via workflow — triggered an audit event containing:

  • Actor (user or system)
  • Timestamp (UTC, immutable)
  • Changed field(s)
  • Before value
  • After value
  • Reason or triggering event
  • Correlation ID linking it to the triggering workflow or request

The audit store was append-only at the database level. Nothing could overwrite it. The audit UI let compliance staff reconstruct a project's full history at any point in time.
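The contract the audit module enforces can be sketched as a class with one write path and read-only reconstruction. At the database level the guarantee came from an append-only table; the Python here only illustrates the interface, with illustrative field names.

```python
# Sketch of the append-only audit contract: writes append, nothing
# overwrites, and reads return copies so callers cannot mutate history.
class AuditTrail:
    def __init__(self):
        self._events = []

    def record(self, actor, field, before, after, reason, correlation_id, ts):
        self._events.append({
            "actor": actor, "timestamp": ts, "field": field,
            "before": before, "after": after,
            "reason": reason, "correlation_id": correlation_id,
        })

    def history(self, correlation_id=None):
        # reconstruct a record's full history, optionally filtered by
        # the workflow/request that triggered the writes
        return [dict(e) for e in self._events
                if correlation_id is None or e["correlation_id"] == correlation_id]
```

The correlation ID is what ties a single audit event back to the sync message or workflow run that caused it, which is what makes a five-year-old decision reconstructable.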

What this architecture costs you

This is not a lightweight implementation. Modelling methodologies as modules, building the approval state machine, wiring a bi-directional ClickUp bridge with conflict rules, implementing an append-only audit trail — each of those is weeks of work. For a registry, the question is not whether to build it but in what order.

Our sequence, if we were starting over:

  1. One methodology, end-to-end. Pick the most active methodology. Build intake → routing → review → approval for that one. Prove the pattern.
  2. Audit trail early. Don't retrofit auditing. Every write from day one goes through the audit pipeline.
  3. Second methodology to validate the module pattern. The first methodology proves the workflow. The second proves the module abstraction holds.
  4. ClickUp bridge once the CRM side is stable. Integrations against a moving CRM schema are painful. Stabilise the CRM first.
  5. Offset volume computation last. The formula-per-methodology work is independent of the rest and easiest to add once the approval state machine exists.

Where SaaS carbon platforms fall short

There are carbon-specific SaaS platforms in the market. For small project developers, some of them work fine. For a registry — the organisation that operates the standard — they fall short in three specific ways:

  • Methodology modelling. SaaS platforms model methodologies as metadata, not as first-class schemas. You can't add methodology-specific fields, validations, or formulas without a product change request.
  • Audit integrity. SaaS audit logs are usually tied to the platform's own user model. If the platform goes down or the vendor exits the market, the audit trail goes with it. Registries need durable, exportable audit records they control.
  • Integration control. Registries integrate with verifier systems, national registries, public ledgers, internal PM tools. SaaS platforms expose a handful of webhooks and ask you to fit your operations around them. Self-hosted SuiteCRM flips that — your integrations, your control.

Final thought

A carbon credit registry is not a CRM use case in the sales-pipeline sense. It is a methodology-gated approval machine where every decision is a public claim. The system that runs it has to carry methodology structure, reviewer workflow, auditable history, and volume computation as first-class concepts — not as custom fields bolted onto a sales CRM.

SuiteCRM worked because we could model all of that explicitly. The open-source foundation meant the registry owns the schema, owns the data, owns the integration surface. The audit trail is theirs, not a vendor's. Five years from now, if they want to change the methodology structure, they can — without a product change request to a SaaS vendor.

That's the shape of ESG infrastructure worth building on.

Frequently asked questions

How is methodology-gated approval different from generic project approval?

A generic project approval treats every project the same — one form, one review path, one approver. Methodology-gated approval assigns each project a methodology at intake, and the methodology determines the required fields, reviewer qualifications, validation rules, and offset-volume formula. A forestry project and a cookstove project never share the same review path.

Why not use a SaaS carbon credit platform?

SaaS platforms model methodologies as metadata, not as first-class schemas — you cannot add methodology-specific validations or formulas without a vendor change request. Their audit trails are tied to the platform's user model; if the vendor exits or pivots, the audit record goes with them. For a registry that operates the standard, those constraints are material. Self-hosted SuiteCRM flips the control.

Can this pattern work for a small registry?

Yes — the pattern scales down cleanly. Start with one methodology family, prove the approval state machine end-to-end, then add methodologies as new modules. You do not need to build all methodology families on day one. The module-per-methodology approach lets you add methodologies later without schema changes to existing records.

What's the minimum methodology modelling you'd start with?

Per methodology: the input fields the formula depends on, the formula itself, a versioning field so historical projects stay computed under the methodology version they were approved under, and the approval-path customisations (reviewer qualifications, validation rules). Typically 15-25 fields per methodology family plus a short computation module — versus the hundreds of conditional fields a generic Project module would need.

How do you keep the audit trail durable across a platform change?

The audit log is an append-only table in the CRM's database, not a vendor-side service. Every write captures actor, timestamp, before/after state, and correlation ID. If the registry ever migrates off SuiteCRM, the audit records are a plain SQL export — portable, verifiable, and independent of the CRM platform.

Running a registry, an ESG operator, or a verifier? We've built this pattern for the voluntary carbon market and adapt it to adjacent ESG workflows. See the full use case or book a 30-minute architecture review.