2. Queue subsystem

The default behaviour of an MTA is to spool received messages to disk, from where they are picked up and delivered by the queuing subsystem. In contrast to an “in-line” deliver() to the next hop during reception, this allows the MTA to accept messages even if the next hop is unavailable, or if a message has multiple recipients with different next-hop destinations.

2.1. Activity diagram

This figure illustrates the typical states and movements inside the queue. It is described in the following sections.

Queue activity diagram

2.2. Queue states

The queuing system can be seen as having three overall message states:


Hold queue

Messages that are on permanent hold. This is the simplest, and typically least used, state. Messages that are on hold are not even recognised by the queue process; they are simply ignored (until moved into the active queue). It’s not really a queue, but rather a collection of messages. It can be used to, for example, implement a spam/virus quarantine. Messages can be put in the hold queue directly during the reception of a message, using the end-of-DATA script queue() function’s “hold” option.


Defer queue

Messages that are not on hold, and are scheduled for delivery in the future. The default behaviour is for every message that fails delivery (with a non-permanent error) to be put in the defer queue, scheduled to be re-tried sometime in the future. It’s a queue in the sense that messages are ordered by their scheduled delivery time.


Active queue

Messages that are not on hold, and are scheduled for delivery now (or in the past). Messages queued for delivery using queue() end up directly in the active queue (unless the “delay” option is used). Messages are also moved automatically from the defer queue into the active queue at their scheduled time of delivery. The active queue is by far the most advanced part of the queuing subsystem, and contains many sub-states. This stems from the fact that concurrency and rate can be limited based on many dynamic parameters, essentially creating a virtually unlimited number of sub-queues. This will be described in great detail later.

2.2.1. Modifying the queue

As described in the previous section, messages are automatically moved between the active and defer queues in accordance with the normal behaviour of an MTA (try, defer, retry, defer, etc.). Messages can also be forcibly moved between queues by modifying them. For example, changing the status from enum HOLD to DELIVER will “release” the message from the hold queue. Changing the next_retry timestamp from a point in the future to now (or sometime in the past) will move the message from the defer queue to the active queue. You can update email metadata using the integrated package’s REST API, the web administration, or by querying the data store directly. This changes both the information on disk and in memory (the queue process’s data structures). When a message is forcibly moved, its in-memory representation is discarded, and it is re-injected via the “update” actor described in the activity diagram.
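
As an illustration, the way the status and next_retry attributes determine the overall queue state can be sketched as follows (a simplified model with assumed field names, not the actual data store schema):

```python
from dataclasses import dataclass

@dataclass
class Message:
    status: str        # "HOLD" or "DELIVER" (as described above)
    next_retry: float  # unix timestamp of the scheduled delivery attempt

def queue_state(msg: Message, now: float) -> str:
    """Classify a message into one of the three overall queue states."""
    if msg.status == "HOLD":
        return "hold"    # ignored by the queue process until released
    if msg.next_retry > now:
        return "defer"   # scheduled for delivery in the future
    return "active"      # scheduled for delivery now (or in the past)
```

In this model, “releasing” a message from hold corresponds to setting status to DELIVER, and moving a message from defer to active corresponds to setting next_retry to now or earlier.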

2.2.2. Active queue states

When a message enters the active queue, it goes through five stages.

Pre-delivery script

The pre-delivery script is executed (if it exists). The script allows the administrator to implement per-attempt logic, such as dynamic routing.


Resolution

All necessary domain name information is resolved. If that succeeds, the message is put in the active queue. If not, it skips directly to the post-delivery script.

Active queue

The message remains in the active queue until permitted by the active queue’s pickup policies (which control concurrency, rate, etc.). The pickup policies are what define the virtually unlimited number of sub-queues, and will be described later in great detail.


Delivery

A delivery attempt (SMTP or LMTP) is made.

Post-delivery script

Regardless of the outcome of the delivery attempt, the post-delivery script is executed (if it exists). It can be used by the administrator to override the default logic or implement logging.
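
These five stages can be sketched as a simple pipeline (the function arguments below are illustrative stand-ins, not Halon’s API):

```python
def process(message, pre, resolve, pickup, deliver, post):
    """Run a message through the five active-queue stages described above."""
    pre(message)                   # 1. pre-delivery script (per-attempt logic)
    if resolve(message):           # 2. resolve domain name information
        pickup(message)            # 3. wait until the pickup policies allow it
        result = deliver(message)  # 4. SMTP or LMTP delivery attempt
    else:
        result = "resolve-failed"  # resolution failure skips delivery
    post(message, result)          # 5. post-delivery script (override/logging)
    return result
```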

2.3. Active queue pickup

The active queue pickup subsystem determines when a message should be picked up, usually based on concurrency or rate limitations on properties such as local IP or destination, or on suspensions.

By splitting the system’s available concurrency, virtual sub-queues can be created. This is useful for separating email of different classes, so that one class of email that’s stuck or moving slowly doesn’t block others. Those sub-queues can be created based on any properties: pre-defined ones like recipient domain, or custom fields that can be populated from script. For example, consider the case where the system’s total concurrency is set to 20 000, and the system has two local IP addresses that can be used as source IPs when sending email. With a pickup policy limiting the concurrency to 10 000 connections per local IP, we can be certain that even if traffic from one of the IPs jams up, traffic from the other IP will be unaffected. We can extend this concept to, for example, customers in a multi-tenant system (making sure that abuse from one customer doesn’t jam the queue for others) or recipient domains/MXs (so that one slow destination doesn’t jam the queue for email to other destinations). Finally, we can create combinations of those.
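
A minimal sketch of this isolation, assuming a plain per-IP counter (illustrative, not Halon’s implementation):

```python
PER_IP_LIMIT = 10_000  # half of a total concurrency of 20 000, as above

in_flight: dict[str, int] = {}  # concurrent deliveries per local IP

def try_pickup(localip: str) -> bool:
    """Pick up a message for delivery unless this IP's limit is reached."""
    if in_flight.get(localip, 0) >= PER_IP_LIMIT:
        return False  # this IP's sub-queue is full; other IPs are unaffected
    in_flight[localip] = in_flight.get(localip, 0) + 1
    return True

def delivery_done(localip: str) -> None:
    """Release one slot when a delivery attempt finishes."""
    in_flight[localip] -= 1
```

Even with one IP completely jammed (10 000 deliveries stuck in flight), pickups for the other IP proceed normally.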

2.3.1. Queue pickup policies

Concurrency and rate limits are counted against what we call counters, which can have one or multiple fields that define a unique entry. In order to offer a very high degree of flexibility, the counters are defined by the administrator. The available fields are transportid, localip, remoteip, remotemx, recipientdomain and jobid. The localip field is a list, so that an email can be queued with multiple alternatives for source IP. All other fields have exactly one value.

Grouping based on wild-card or regular expression matching is available for “rolling up” queued email with different values into the same entry. Thresholds for concurrency and rate are added based on conditions, with a default fall-back. When the threshold is exceeded for an entry, the entry is added to the active queue’s pickup suspension list, which prevents email matching those properties from being picked up and delivered. Once the threshold is no longer exceeded, the entry is removed from the suspension list.
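
The counter and suspension-list lifecycle described above can be sketched as follows (an illustrative model, not the actual implementation):

```python
counters: dict[tuple, int] = {}   # concurrency counter per unique entry
suspensions: set[tuple] = set()   # the pickup suspension list
THRESHOLD = 5                     # example concurrency threshold

def on_pickup(entry: tuple) -> None:
    """Count a pickup; suspend the entry once the threshold is exceeded."""
    counters[entry] = counters.get(entry, 0) + 1
    if counters[entry] >= THRESHOLD:
        suspensions.add(entry)      # stop further pickups for this entry

def on_delivery_done(entry: tuple) -> None:
    """Count a finished delivery; resume once below the threshold."""
    counters[entry] -= 1
    if counters[entry] < THRESHOLD:
        suspensions.discard(entry)  # threshold no longer exceeded

def may_pickup(entry: tuple) -> bool:
    return entry not in suspensions
```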

The queue pickup policy configuration is in YAML format on disk, validated using JSON schemas (included in our Visual Studio Code plugin). In addition to the configuration file on disk, policy conditions can be added on the fly via the Protocol Buffer API’s PolicyConditionAddRequest function, the integrated package’s REST API and web administration, as well as from the pre- and post-delivery scripts.

Policy counter thresholds

The concurrency threshold limits the number of emails in the delivery state. The rate threshold limits the number of emails X passing through the delivery state over a given time interval Y, specified in seconds, as X/Y. If the interval is omitted, 1 second is assumed.
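
As an illustration, the X/Y notation and a sliding-window interpretation of the rate threshold could be modeled like this (the windowing is an assumption; the actual accounting may differ):

```python
import collections

def parse_rate(spec) -> tuple[int, int]:
    """Parse "X/Y" (or a bare X, meaning X per 1 second) into (X, Y)."""
    if isinstance(spec, int):
        return spec, 1
    x, _, y = spec.partition("/")
    return int(x), int(y) if y else 1

class RateCounter:
    def __init__(self, spec):
        self.limit, self.interval = parse_rate(spec)
        self.events = collections.deque()  # delivery timestamps in the window

    def allow(self, now: float) -> bool:
        """True if another email may pass through the delivery state."""
        while self.events and self.events[0] <= now - self.interval:
            self.events.popleft()          # expire events outside the window
        if len(self.events) >= self.limit:
            return False                   # rate exceeded: suspend pickup
        self.events.append(now)
        return True
```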

The very simplistic example from above (with two local IPs) can be described using the following YAML pickup policy configuration:

- fields:
  - localip
  concurrency: 10000

Each time an email is picked up from the active queue, the “localip” concurrency counter entry for that email’s source IP is incremented. When the delivery attempt is done, the same counter entry is decremented. If 10 000 emails for the same source IP are being delivered at the same time, the threshold will be exceeded, and the suspension list will be populated with an entry indicating that email with that source IP should not be picked up.

Policy conditions

Different thresholds can be set depending on the field values using conditions. Conditions are evaluated first-to-last, with the first matching threshold winning. Consequently, if a more general condition is placed above a more specific one, the latter might never match (because the former always wins). The example below limits the concurrency based on a combination of source IP and destination domain, with an override for the domain “halon.io”:

- fields:
  - localip
  - recipientdomain
  if:
  - recipientdomain: halon.io
    concurrency: 2
  concurrency: 5
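
The first-to-last evaluation can be sketched as follows (the condition representation is illustrative, not the actual schema):

```python
def threshold_for(conditions, default, email):
    """Return the first matching condition's threshold, else the fall-back.

    conditions: list of (match, threshold) pairs, evaluated first-to-last,
    so more specific conditions must be placed before more general ones.
    """
    for match, threshold in conditions:
        if all(email.get(field) == value for field, value in match.items()):
            return threshold
    return default

# The "halon.io" override from the example, with a default of 5.
CONDITIONS = [({"recipientdomain": "halon.io"}, {"concurrency": 2})]
DEFAULT = {"concurrency": 5}
```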

The above policy’s threshold will be exceeded if two emails are being delivered to the recipient domain “halon.io” from the same source IP.

Policy counter groups

Counters can be aggregated based on wild-card or regular expression matching, so that different field values count against the same entry. Groups are given IDs, and conditions are matched against the grouped entry by prefixing the ID with “#”. The example below has two counters, with multiple fields per counter. One limits both rate and concurrency based on destination MX (with a rollup for Google G Suite) in combination with source IP. The other also limits the concurrency per source IP, but by destination IP instead of MX, and only enforces a threshold for email to recipient domains with a Microsoft Outlook MX.

- fields:
  - localip
  - remotemx:
      gsuite:
      - '*.google.com'
      - '*.googlemail.com'
      - '*.smtp.goog'
  if:
  - remotemx: '#gsuite'
    concurrency: 10
    rate: 50
  concurrency: 5
  rate: 10
- fields:
  - localip
  - remotemx:
      o365:
      - '*.protection.outlook.com'
  - remoteip
  if:
  - remotemx: '#o365'
    concurrency: 10
    rate: 30
2.3.2. Queue pickup suspension

As described in the previous section, the pickup policy subsystem implements a suspension list to enforce concurrency and rate limits. This suspension list can be used directly to temporarily pause and resume traffic in the active queue. This shouldn’t be used for permanent suspension or archiving, as it occupies in-memory space. For more permanent suspension, like a quarantine for spam, use the hold queue.

The suspension configuration is likewise in YAML format on disk, validated using JSON schemas (included in our Visual Studio Code plugin). In addition to the configuration file on disk, suspensions can be added on the fly via the Protocol Buffer API, the integrated package’s REST API and web administration, as well as from the pre- and post-delivery scripts.

The example below suspends all email on the “customer1” transport to the destination “gmail.com”:

- transportid: customer1
  recipientdomain: gmail.com

Excluding IPs from pool

Queued email can have multiple designated source IPs, which can be used to load balance between a pool of source IPs. The list of source IPs (and matching HELO hostnames) can be configured per transport, or overridden by the “sourceip” argument to the pre-delivery script’s Try() function. Queuing email with multiple source IPs has the benefit of being able to dynamically suspend specific source IPs, while still allowing queued email to be sent using the other IPs. Consider the following example:

- localip:
  recipientdomain: gmail.com

Since localip is a list, email queued with both “” and another IP will still be sent to “gmail.com” from that other source IP.