Database Triggers: Types, Examples, and Best Practices

Modern data systems demand automation, integrity, and seamless reaction to changes. In relational databases—where rules matter as much as relationships—triggers have emerged as a powerful mechanism for driving logic automatically at the moment data changes. Trigger code can enforce policies, audit activity, maintain calculated states, and even orchestrate complex downstream behavior. But the power of triggers is matched by the risk of hidden side effects and performance surprises.
In this deep dive, we’ll explore what database triggers are, how they work, the various types you’ll encounter, concrete examples with real SQL, and sensible best practices for when (and when not) to use them.
What is a database trigger?
At its core, a trigger is a special kind of stored procedure that runs automatically in response to specific events on a table or view. Unlike normal procedures that run when explicitly invoked, triggers fire implicitly when the database detects certain operations—for example, an insert, update, or delete.
Think of triggers as event-driven listeners inside your database. Instead of a developer or application having to remember to run logic after updating a balance or logging audit data, the trigger “listens” for the action and runs the logic for you.
Triggers can be used for:
- enforcing business rules that cross multiple tables
- maintaining denormalized or aggregated data
- capturing audit history of changes
- rejecting or modifying data changes
- synchronizing tables
Unlike application code, triggers execute as part of the database transaction. That means they see the full context of the change and can participate in rollbacks—a powerful feature for maintaining data correctness.
But with that power comes responsibility: poorly written triggers can hide logic, degrade performance, cause cascading side effects, and make debugging difficult. The rest of this article will help you understand both how to use triggers effectively and how to avoid common pitfalls.
How triggers fit in the database execution model
To appreciate triggers, it’s helpful to understand where they sit in the flow of SQL execution.
When a database receives a data modification—say, an INSERT—it performs a series of internal steps:
- Parse and validate the SQL
- Fire BEFORE triggers (if defined)
- Acquire locks and apply the change to the base data
- Fire AFTER triggers (if defined)
- Commit or roll back the transaction
This sequencing means that triggers can either intervene before the change is applied (e.g., validate or modify the incoming data) or react after the change (e.g., log it or update other tables). Because triggers run within the same transaction context as the originating DML (data manipulation language) statement, they participate in rollbacks if needed.
This transactional nature makes them ideal for enforcing internal integrity rules that need to be guaranteed regardless of application behavior.
Types of triggers
Triggers are typically categorized by when they fire (timing), what event causes them to fire, and sometimes by scope (per row vs per statement). While implementations vary slightly across database engines (PostgreSQL, MySQL, SQL Server, Oracle), the fundamental concepts are similar.
Timing: BEFORE vs. AFTER
- BEFORE triggers execute before the actual data change. They are often used to validate or modify the incoming data before it’s written.
- AFTER triggers execute after the data change. These are useful for logging, auditing, cascading updates, or enforcing rules that depend on the final state.
Some databases also support INSTEAD OF triggers, especially on views, which allow you to intercept the action and replace it entirely with custom logic.
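As a sketch in PostgreSQL (the table, view, and function names here are hypothetical, not from the examples later in this article), an INSTEAD OF trigger might redirect inserts on a view to its underlying base table:

```sql
-- Hypothetical base table and a view that hides archived accounts
CREATE TABLE accounts (
    id SERIAL PRIMARY KEY,
    name TEXT,
    archived BOOLEAN DEFAULT FALSE
);

CREATE VIEW active_accounts AS
SELECT id, name FROM accounts WHERE NOT archived;

CREATE OR REPLACE FUNCTION insert_active_account()
RETURNS TRIGGER AS $$
BEGIN
    -- Redirect the insert on the view to the base table
    INSERT INTO accounts (name) VALUES (NEW.name);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_insert_active_account
INSTEAD OF INSERT ON active_accounts
FOR EACH ROW
EXECUTE FUNCTION insert_active_account();
```

Note that in PostgreSQL, INSTEAD OF triggers must be row-level and can only be defined on views.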
Event: INSERT, UPDATE, DELETE
Triggers fire in response to specific DML events:
- INSERT — when a new row is inserted
- UPDATE — when an existing row is modified
- DELETE — when a row is removed
In many systems, you can define one trigger that reacts to multiple events for the same table.
Granularity: Row vs. statement
- Row-level triggers fire once per affected row. For example, an update affecting 10 rows causes the trigger to run 10 times.
- Statement-level triggers fire once per DML statement, regardless of how many rows are affected.
This distinction can have major performance implications. Most business logic works at the row level, but auditing and logging may be better suited to statement-level triggers.
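To make the distinction concrete, here is a statement-level trigger sketch in PostgreSQL (the `invoices` and `dml_log` tables are hypothetical, introduced only for this example):

```sql
-- Hypothetical tables for illustration
CREATE TABLE invoices (
    id INT PRIMARY KEY,
    total DECIMAL(10,2)
);

CREATE TABLE dml_log (
    event_note TEXT,
    logged_at  TIMESTAMP DEFAULT NOW()
);

CREATE OR REPLACE FUNCTION log_invoice_update()
RETURNS TRIGGER AS $$
BEGIN
    -- Runs once per UPDATE statement, no matter how many rows it touches
    INSERT INTO dml_log (event_note) VALUES ('invoices updated');
    RETURN NULL;  -- return value is ignored for statement-level triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_log_invoice_update
AFTER UPDATE ON invoices
FOR EACH STATEMENT
EXECUTE FUNCTION log_invoice_update();
```

An UPDATE touching 10,000 invoice rows writes a single log row here, where a row-level version would write 10,000.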
Real trigger examples
To ground this discussion, we’ll look at real SQL examples in common engines. Although SQL syntax differs a bit by dialect, the concepts are universal.
Automatic timestamps
One of the most common use cases is adding or maintaining timestamp columns like created_at and updated_at.
Suppose we have a users table:
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255),
    email VARCHAR(255),
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);
We want:
- created_at to be set once when the row is first inserted
- updated_at to be refreshed every time the row changes
Here’s a trigger for PostgreSQL:
CREATE OR REPLACE FUNCTION set_timestamps()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        NEW.created_at := NOW();
        NEW.updated_at := NOW();
    ELSIF TG_OP = 'UPDATE' THEN
        NEW.updated_at := NOW();
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_set_timestamps
BEFORE INSERT OR UPDATE ON users
FOR EACH ROW
EXECUTE FUNCTION set_timestamps();
This BEFORE trigger ensures timestamps are always populated correctly, no matter which application client inserted or updated the data.
Auditing changes
Triggers can build a detailed audit history. Suppose we have an orders table and want to preserve its change history.
CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT,
    amount DECIMAL(10,2),
    status VARCHAR(50),
    updated_at TIMESTAMP DEFAULT NOW()
);
We create an audit table:
CREATE TABLE orders_audit (
    order_id INT,
    old_status VARCHAR(50),
    new_status VARCHAR(50),
    changed_at TIMESTAMP,
    changed_by TEXT
);
Here’s an AFTER UPDATE trigger that captures status changes:
CREATE OR REPLACE FUNCTION audit_order_status()
RETURNS TRIGGER AS $$
BEGIN
    IF OLD.status IS DISTINCT FROM NEW.status THEN
        INSERT INTO orders_audit (
            order_id, old_status, new_status, changed_at, changed_by
        ) VALUES (
            OLD.id, OLD.status, NEW.status, NOW(), SESSION_USER
        );
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_audit_order_status
AFTER UPDATE ON orders
FOR EACH ROW
EXECUTE FUNCTION audit_order_status();
Now, every time an order’s status changes, a row is recorded in orders_audit. This provides a reliable trail of changes for compliance or debugging.
Enforcing business rules
Triggers can enforce complex business constraints that aren’t easily captured with simple constraints.
Suppose we have a table inventory:
CREATE TABLE inventory (
    product_id INT PRIMARY KEY,
    quantity_on_hand INT,
    reorder_level INT
);
We want to prevent any operation that sets quantity_on_hand below zero.
Here’s a trigger to enforce that:
CREATE OR REPLACE FUNCTION check_inventory_not_negative()
RETURNS TRIGGER AS $$
BEGIN
    IF NEW.quantity_on_hand < 0 THEN
        RAISE EXCEPTION 'Quantity on hand cannot be negative';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_check_inventory
BEFORE INSERT OR UPDATE ON inventory
FOR EACH ROW
EXECUTE FUNCTION check_inventory_not_negative();
Now, any insert or update that would result in a negative quantity fails with an exception, and the whole transaction rolls back.
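For instance, assuming a hypothetical product row holding fewer than 500 units, this update would be rejected by the trigger:

```sql
-- Attempt to remove more stock than exists (product_id 42 is illustrative)
UPDATE inventory
SET quantity_on_hand = quantity_on_hand - 500
WHERE product_id = 42;
-- ERROR: Quantity on hand cannot be negative
```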
Synchronous cascading updates
Triggers can manage cascading updates and denormalized counters.
Imagine a blogging platform:
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title VARCHAR(255),
    body TEXT,
    comment_count INT DEFAULT 0
);

CREATE TABLE comments (
    id SERIAL PRIMARY KEY,
    post_id INT REFERENCES posts(id),
    comment TEXT
);
When a new comment is inserted, we want to update the parent post’s comment_count.
Here’s a simple trigger:
CREATE OR REPLACE FUNCTION increment_comment_count()
RETURNS TRIGGER AS $$
BEGIN
    UPDATE posts
    SET comment_count = comment_count + 1
    WHERE id = NEW.post_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_increment_comment
AFTER INSERT ON comments
FOR EACH ROW
EXECUTE FUNCTION increment_comment_count();
This trigger keeps the comment_count in sync. But mass-inserting comments can cause contention and slow performance—something we’ll address in the best practices section.
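One gap worth noting: without a matching DELETE trigger, the counter drifts upward when comments are removed. A companion trigger (a sketch, not part of the original schema above) keeps it accurate:

```sql
-- Keep comment_count accurate when comments are deleted
CREATE OR REPLACE FUNCTION decrement_comment_count()
RETURNS TRIGGER AS $$
BEGIN
    UPDATE posts
    SET comment_count = comment_count - 1
    WHERE id = OLD.post_id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_decrement_comment
AFTER DELETE ON comments
FOR EACH ROW
EXECUTE FUNCTION decrement_comment_count();
```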
When triggers shine (and when they don’t)
Triggers are a tool—and like all tools, they have contexts where they’re the right choice and others where they introduce subtle complexity.
Good use cases
Triggers shine for:
- Data integrity that must be enforced on the server—no exceptions
- Cross-table rules that cannot be expressed with declarative constraints
- Audit and history tracking that is transactionally consistent
- Defaults and derived fields that depend on complex logic
- Security logging for compliance use cases
In these scenarios, relying on triggers ensures that logic is enforced consistently, regardless of which application or client interacts with the database.
Cases to avoid triggers
Triggers can introduce hidden logic that surprises developers and applications. Avoid triggers when:
- The same logic can be handled in the application layer more transparently
- Performance matters at scale and triggers introduce row-by-row overhead
- Debugging and observability matter—triggers can make it harder to trace state changes
- Business rules are better expressed declaratively via constraints or schema design
For example, if you simply need to ensure a field is never null, a NOT NULL constraint is clearer than a trigger. Similarly, some aggregate or computed values are better handled through materialized views or periodic batch jobs.
Performance and pitfalls
A trigger’s convenience can mask performance traps.
Hidden execution cost
Triggers execute inside the transaction. A trigger that performs a heavy update or selects from large tables for every row will degrade performance quickly, especially in batch operations.
Example: A row-level trigger that hits another large table on every insert can turn a fast bulk insert into a slow, row-by-row affair.
Cascading effects
Triggers that modify other tables can themselves fire additional triggers. This can lead to cascading trigger execution that is hard to reason about.
For example, inserting a row may trigger an update on another table, which triggers yet another update. Deep chains like this can cause unexpected side effects and make debugging extremely difficult.
Ordering and determinism
When multiple triggers are defined on the same table and same event, execution order is not always guaranteed (this varies by database engine). If two triggers both update related state, the outcome can become nondeterministic unless the platform supports explicit ordering.
Visibility and debugging
In many teams, new developers and even DBAs are unaware of triggers unless they inspect the schema. A table that “mysteriously” updates other tables on insert can be confusing without clear documentation.
Best practices for using triggers
Here are principles to make triggers reliable and maintainable:
Keep triggers simple and focused
A trigger should do one thing well. Don’t combine audit logging, computed defaults, and cross-table updates in a single trigger. If logic grows complex, consider moving it to explicit stored procedures or application logic.
Document triggers thoroughly
Triggers are often “invisible” unless you look at schema definitions. Use comments, wiki pages, or inline documentation to explain:
- why the trigger exists
- what business rule it enforces
- which cases it covers and doesn’t cover
This clarity helps future maintainers avoid unintended side effects.
Favor statement-level over row-level when appropriate
If you only need to log that a change happened—and not every row—statement-level triggers reduce overhead. Conversely, row-level triggers should be used only when per-row context matters.
Be cautious with cross-table updates
Triggers that update other tables can serialize operations and cause locking contention. Whenever possible, consider:
- deferring complex updates to asynchronous jobs
- expressing logic as batch processes
For example, instead of updating a counter in a trigger for every row, you might accumulate increments and apply them in scheduled jobs.
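As a sketch using the earlier posts/comments schema, a scheduled job could rebuild the counters in one set-based statement instead of touching the posts table once per comment:

```sql
-- Periodic batch job: recompute all counters in a single statement
UPDATE posts p
SET comment_count = c.cnt
FROM (
    SELECT post_id, COUNT(*) AS cnt
    FROM comments
    GROUP BY post_id
) c
WHERE p.id = c.post_id;
```

The trade-off is that the counter is eventually consistent rather than transactionally exact, which is often acceptable for display-oriented aggregates.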
Test triggers like production code
Treat triggers as first-class code:
- write automated tests covering edge cases
- simulate large batch operations
- ensure rollback paths behave as expected
This testing discipline prevents surprises when logic executes under load.
Monitor performance impacts
Because triggers execute with every relevant DML statement, it’s important to profile their cost. Tools like EXPLAIN and query plans can reveal trigger overhead, especially when combined with other workloads.
Trigger alternatives
Triggers are not the only way to enforce rules or derive values. Before committing to triggers, consider:
Declarative constraints
SQL constraints (CHECK, UNIQUE, foreign keys) are efficient and self-documenting. Whenever possible, express simple rules declaratively.
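For example, the earlier inventory rule (no negative stock) could be expressed as a declarative CHECK constraint instead of a trigger:

```sql
-- Declarative equivalent of the check_inventory_not_negative trigger
ALTER TABLE inventory
ADD CONSTRAINT quantity_non_negative
CHECK (quantity_on_hand >= 0);
```

The constraint is enforced by the engine itself, is visible in the table definition, and cannot be accidentally bypassed by another code path.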
Application logic
For business logic that belongs to the domain layer, keeping logic in application code can improve clarity and observability.
Stored procedures
Explicit stored procedures still centralize logic at the database level, but avoid the implicit execution hidden in triggers.
Event-driven architectures
Modern systems use change data capture (CDC) and event streams to react to changes without database triggers. Platforms like Kafka, Debezium, and data integration tools allow asynchronous processing of change events with high scalability.
When and how to audit trigger logic
Given that triggers can be complex and deeply embedded, teams should consider formal audit processes.
Version control
Treat triggers as code. Store trigger definitions in version control alongside application schema migrations. Avoid ad-hoc creation directly on production.
Code review
Include triggers in standard code review processes.
Logging and observability
Implement structured logging or metrics inside triggers where possible. For example, logging how often a trigger fires and how long it takes can guide optimization.
Advanced trigger patterns
For teams building complex data systems, a few advanced patterns are worth knowing.
Soft deletes and archiving
Instead of physically deleting rows, some systems use triggers to move rows to archival tables.
CREATE OR REPLACE FUNCTION archive_on_delete()
RETURNS TRIGGER AS $$
BEGIN
    -- archived_orders must have the same column layout as orders
    INSERT INTO archived_orders
    SELECT OLD.*;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trigger_archive_delete
BEFORE DELETE ON orders
FOR EACH ROW
EXECUTE FUNCTION archive_on_delete();
This pattern preserves history without cluttering the primary table.
Cross-schema synchronization
In multi-database or multi-schema systems, triggers can push changes to synchronizing tables or even external systems (via connector infrastructure).
This pattern is common in distributed systems, though it should be implemented cautiously to avoid coupling and latency issues.
Conditional triggers
You don’t always need to run logic on every change. Conditional logic inside triggers can reduce unnecessary work:
IF OLD.status IS DISTINCT FROM NEW.status THEN
    -- business logic here
END IF;
This small check can dramatically cut unnecessary overhead.
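In PostgreSQL, the same condition can also live in the trigger definition itself via a WHEN clause, so the trigger function is never even invoked for irrelevant changes. Here is the earlier audit trigger rewritten that way (shown as an alternative definition, not an addition):

```sql
-- The WHEN clause filters rows before the function is called
CREATE TRIGGER trigger_audit_order_status
AFTER UPDATE ON orders
FOR EACH ROW
WHEN (OLD.status IS DISTINCT FROM NEW.status)
EXECUTE FUNCTION audit_order_status();
```

Moving the condition into the trigger definition both documents the intent in the schema and avoids the function-call overhead for untouched statuses.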
Closing thoughts
Database triggers are a powerful feature that, when used thoughtfully, help enforce integrity, automate repetitive work, and centralize critical business logic inside the data layer. Their implicit nature increases consistency but also raises the bar on discipline, testing, and documentation.
Before implementing triggers, ask whether the logic could be solved with constraints, application logic, or scheduled jobs. When you do choose triggers, keep them focused, well-documented, and performance-aware.
And as data systems continue evolving—with real-time pipelines, event streaming, and cloud-native databases—triggers will remain one of many tools in your data architect's toolkit. Use them wisely and they'll save you countless bugs; misuse them and they'll create maintenance headaches.
Take your database insights further with Domo
Whether you’re enforcing data quality, tracking changes across systems, or driving real-time dashboards from transactional systems, Domo gives you the visibility and automation your data teams need. With native connectors, real-time data pipelines, and integrated governance tools, Domo helps you operationalize data without wrestling with brittle, hard-to-monitor triggers alone.
Ready to unlock the full value of your data? Learn how Domo empowers analytics and operational workflows across your organization.




