Salesforce Record Automation Benchmarking

With all the Salesforce Flow enhancements over the last few releases, I’ve been wondering about the relative performance of the various record automation options: Flows, Process Builders, Workflows, and Apex Triggers. Same-record field updates were an area of particular interest after reading this from Salesforce:

Of the ~150 billion actions that were executed by Workflow, Process Builder, and Flow in April this year — “actions” being things effected outside the runtime, such as record updates, email alerts, outbound messages, invocable actions — we believe that around 100 billion of those actions were same-record field updates.

Architect’s Guide to Building Record-Triggered Automation on Salesforce Using Clicks and Code

Roughly 2 of every 3 declarative actions are doing same-record field updates!

This post covers same-record field update benchmarking and before-delete benchmarking, including the methodology, results, and conclusions for each.

Disclaimer: These results are from a personal Developer Edition org running on the Winter ’21 release. These timings will vary according to your edition, system load at the time, and other factors. Treat these as relative performance timings, not absolute timings.

Want to run the benchmark? See my Salesforce Benchmark Package.

Same-Record Field Update Benchmarking

Since same-record field updates are the most common action, let’s explore the five options available today and compare their relative performance.

Solution’s Timing Averages After 10 Runs

Solution         | Average Time
-----------------|-------------
Apex Trigger     | 52.8 ms
Before-Save Flow | 62.7 ms
Workflow         | 87.1 ms
After-Save Flow  | 354.1 ms
Process Builder  | 396.2 ms

Methodology

For each solution, a custom object was created with an Auto Number name field and a custom text field. 200 records are inserted into each solution’s custom object using Apex run from Execute Anonymous in VS Code, and each solution sets the custom text field to the same value. Limits.getCpuTime() is invoked immediately before and after the insert statement, and the start time in milliseconds is subtracted from the end time to determine the overall insertion time. These results are then recorded in a Benchmark Result custom object.

System.debug was avoided because enabling debug logging usually slows everything down, as Dan Appleman and Robert Watson shared in their Dark Art of CPU Benchmarking presentation. Parsing each debug log after each benchmark run is also tedious.

The timings shown include the general overhead of inserting 200 records. As a result, the timings shouldn’t be interpreted as “each operation takes X time”; each one is X plus the insertion overhead Y.
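
To make the harness concrete, here’s a minimal sketch of the kind of Execute Anonymous script described above. The object and field API names (Benchmark_Record__c, Benchmark_Result__c, and their fields) are illustrative assumptions, not necessarily the exact names used in my benchmark package.

```apex
// Minimal sketch of the Execute Anonymous benchmark harness.
// Benchmark_Record__c and Benchmark_Result__c are assumed, illustrative API names.
List<Benchmark_Record__c> records = new List<Benchmark_Record__c>();
for (Integer i = 0; i < 200; i++) {
    records.add(new Benchmark_Record__c());
}

Integer startCpuTime = Limits.getCpuTime();
insert records; // the record automation under test fires during this DML
Integer endCpuTime = Limits.getCpuTime();

// Record the timing so runs can be compared later.
insert new Benchmark_Result__c(
    Solution__c = 'Before-Save Flow',
    Duration_Milliseconds__c = endCpuTime - startCpuTime
);
```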

Before-Save Flow

A before-save flow was created without any criteria; it has one Assignment element that sets the custom text field to a default value.

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 55
2     | 58
3     | 59
4     | 71
5     | 70
6     | 59
7     | 70
8     | 58
9     | 69
10    | 58

Average: 62.7 milliseconds

Process Builder

A Process Builder was created that runs on record insert with no criteria and updates the custom text field to a default value.

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 370
2     | 443
3     | 374
4     | 431
5     | 452
6     | 413
7     | 379
8     | 400
9     | 361
10    | 339

Average: 396.2 milliseconds

Apex Trigger

An Apex trigger with the before insert event was created that sets the custom text field to the default value for each inserted record.
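
For reference, the trigger looked roughly like the sketch below; the object and field API names are placeholders rather than the exact ones I used.

```apex
// Illustrative sketch; Benchmark_Record__c and Text_Field__c are placeholder API names.
trigger BenchmarkRecordTrigger on Benchmark_Record__c (before insert) {
    for (Benchmark_Record__c record : Trigger.new) {
        // In a before-insert trigger, field assignments are saved with the record,
        // so no additional DML statement is needed.
        record.Text_Field__c = 'Default Value';
    }
}
```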

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 50
2     | 59
3     | 46
4     | 44
5     | 66
6     | 55
7     | 47
8     | 47
9     | 54
10    | 60

Average: 52.8 milliseconds

After-Save Flow

Since after-save flows can also be used, I was curious about those too. An after-save flow was created on record insert with a single Update Records element that sets the custom text field to the default value.

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 351
2     | 360
3     | 365
4     | 338
5     | 325
6     | 453
7     | 306
8     | 377
9     | 306
10    | 360

Average: 354.1 milliseconds

Workflow

An on-insert workflow rule was created with a field update action that sets the custom text field to the default value.

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 86
2     | 77
3     | 74
4     | 92
5     | 101
6     | 115
7     | 77
8     | 81
9     | 80
10    | 88

Average: 87.1 milliseconds

Same-Record Field Update Conclusions

Before-save flows and Apex triggers are the fastest solutions and should be used first. Which one you should use depends on your team’s makeup and skill set. If you mostly have declarative expertise, use before-save flows. If you mostly have Apex expertise, use Apex triggers. If you have a mixture, go with before-save flows. Of course, if you have a case where a declarative solution isn’t available, use an Apex trigger. For example, if you need custom validation that a validation rule can’t express, an Apex trigger is necessary because before-save flows don’t currently support that.

While workflow rules also run almost as fast, they should be avoided because they’re no longer being enhanced, and Salesforce recommends Flows and Apex triggers as the no-code and “pro-code” record automation options, which I agree with. See Salesforce’s record automation recommendations.

After-save flows and Process Builders require an additional record update to modify the same record, which causes the order of execution to run again; that’s why they’re so much slower.

The relative timings came out as I expected, with Apex triggers the fastest, followed by before-save flows, workflows, after-save flows, and finally Process Builders. The declarative relative timings coincide with Dan Appleman’s declarative benchmarking in The Return of The Dark Art of Benchmarking – This Time It’s Declarative!, even though the absolute timings differ, which is expected.

Before Delete Benchmarking

In Winter ’21, flows can now run automation when a record is deleted. This was one of my most-desired features for almost 10 years. Common deletion use cases are:

  • Continuing the cascade delete for the deleted record’s child records that have a lookup to it.
  • Creating a history / audit record for it in another custom object.
  • Sending a notification to a person or group when certain records are deleted.

Solution’s Timing Averages After 10 Runs

Solution                   | Average Time
---------------------------|-------------
Before Delete Apex Trigger | 94.7 ms
Before Delete Flow         | 152.3 ms

Methodology

The history / audit record use case is simulated. Whenever a record is deleted, an audit record in a separate object is created to record that event.

For each solution, a custom object was created to hold the records. 200 records are inserted into each solution’s custom object using Apex run from Execute Anonymous in VS Code, and then those 200 records are deleted. Each solution creates 200 audit records in the same “Audit” custom object. Limits.getCpuTime() is invoked immediately before and after the delete statement, and the start time in milliseconds is subtracted from the end time to determine the overall deletion time. These results are then recorded in a Benchmark Result custom object.

As with the field update benchmarking, System.debug was avoided to keep debug logging from slowing things down, and the timings shown include the general overhead of deleting 200 records, so each one is X plus overhead Y rather than the operation alone.
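
The delete harness has the same shape as the insert harness shown earlier; here’s a hedged sketch, again with placeholder object and field API names.

```apex
// Sketch of the before delete benchmark run from Execute Anonymous.
// Deletion_Benchmark_Record__c and Benchmark_Result__c are assumed API names.
List<Deletion_Benchmark_Record__c> records = new List<Deletion_Benchmark_Record__c>();
for (Integer i = 0; i < 200; i++) {
    records.add(new Deletion_Benchmark_Record__c());
}
insert records;

Integer startCpuTime = Limits.getCpuTime();
delete records; // the before delete automation fires during this DML
Integer endCpuTime = Limits.getCpuTime();

insert new Benchmark_Result__c(
    Solution__c = 'Before Delete Apex Trigger',
    Duration_Milliseconds__c = endCpuTime - startCpuTime
);
```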

Before Delete Flow

This before-delete flow has a Create Records element that creates the audit record.

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 138
2     | 151
3     | 182
4     | 137
5     | 136
6     | 149
7     | 157
8     | 153
9     | 165
10    | 155

Average: 152.3 milliseconds

Before Delete Apex Trigger

An Apex trigger with only the before delete event was created on the object; it builds an audit record for each deleted record and inserts them all at the end.
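
A trigger in that spirit might look like the following sketch; the audit object and its fields are assumptions for illustration only.

```apex
// Illustrative sketch; Deletion_Benchmark_Record__c, Audit__c, and their fields are assumed names.
trigger DeletionBenchmarkTrigger on Deletion_Benchmark_Record__c (before delete) {
    List<Audit__c> auditRecords = new List<Audit__c>();
    for (Deletion_Benchmark_Record__c record : Trigger.old) {
        auditRecords.add(new Audit__c(
            Deleted_Record_Id__c = record.Id,
            Deleted_On__c = System.now()
        ));
    }
    // Single bulk insert at the end rather than one insert per deleted record.
    insert auditRecords;
}
```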

10 Run Timings

Run # | Time in Milliseconds
------|---------------------
1     | 113
2     | 105
3     | 88
4     | 81
5     | 93
6     | 92
7     | 101
8     | 88
9     | 94
10    | 92

Average: 94.7 milliseconds

Before Delete Conclusions

Either option runs really fast; roughly 152 ms vs. 95 ms is not that big a difference in absolute terms. If possible, use the before-delete flow option first and the Apex trigger option second. This certainly depends on your team’s skill set, as mentioned in the field update conclusions above.

Next Steps & Further Research

  • Create another benchmarking blog post comparing a single before-save flow with one decision and an assignment for each outcome, depending on the criteria met, versus multiple before-save flows, each with one criterion and one assignment. I’ve actually already done this but didn’t want this post to be too long. What do you think the results were? Let me know in the comments below.
  • Open source the benchmark and code so others can run and extend them.

Resources

  • Dark Art of CPU Benchmarking – Dan Appleman and Robert Watson presented their methodology, findings, and results in this 2016 presentation. While the timings are obsolete, their guidance and insights remain. I had the pleasure of working with Robert at NimbleUser and he’s a great guy. I attended one of Dan’s Dreamforce 2011 or 2012 sessions and that was also great. I read his original Advanced Apex book but still need to read the latest 4th edition, where he discusses benchmarking further.

Let me know in the comments below if this was helpful and what you think.

12 thoughts on “Salesforce Record Automation Benchmarking”

  1. Great post, Luke. Very interesting to see the relative differences between the options. Time to chuck process builders wherever possible!

  2. Hi Luke,
    Thank you for a great article!

    As for your question – since Salesforce cannot guarantee the flow order of execution, it’s a bit hard to guess which one fires first.
    If the flow (from the multiple flows) that matched the criteria ran first, I’m guessing that it will be faster than the single one. Otherwise, I think the single flow wins, but I’m not sure about that.

    So, who wins? I’m curious 🙂

    1. Gidi,

      The single flow actually wins in the very specific scenario I benchmarked which may be different from what you’re thinking. Stay tuned for an upcoming blog post detailing that. It’ll likely be a while though because I want to share out the code first but we’ll see.

  3. Very informative.
    It’s nice to know that Flow doesn’t add a lot of overhead compared to Apex.
    Presumably most of the time is database transaction (to/from storage) rather than data processing (in CPU)?

    1. Frank,

      It’s hard to say since my timing isn’t that granular. However, Salesforce has since updated the debug logs to specify how long each element / node takes in a Flow, so it may be possible to get more granular timings. Generally speaking, anything that requires going over the network or connecting to the database is a “slow” operation.
