Legacy Event Processor
Table of Contents
- Introduction
- Configuration
- Configuration Examples
- Event Processor Logs
- Starting and Stopping the Event Processors
Introduction
The legacy event processor only delivers events that it has been configured to log. If you haven’t configured event logging yet, then we recommend using the Event Notification System page as your starting point. It will direct you back to this page after the prerequisites are met.
Configuration
Definition
The event processor is configured in the JSON file found at /var/hvmail/control/event_processor.json.
Below is the definition for this configuration JSON document. The root of the document should be an object that defines top-level keys such as event_destinations.
You must define a place for events to be delivered: (a) at least one destination must be defined in event_destinations, or (b) a logfile for writing events to must be defined in logfile.filename.
If the event_destinations list contains at least one destination, the last one must match all events (matches must be set to { "all": true }).
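For example, a minimal sketch satisfying requirement (a) is shown below: a single HTTP destination whose matches hash is set to { "all": true }. The URL is a placeholder, and configuration_mode is set to "json" to match the examples later in this document; complete, ready-to-adapt examples appear in the Configuration Examples section.
{
  "configuration_mode": "json",
  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "http_post",
        "url": "http://example.com/event_receiver"
      }
    }
  ]
}
The supported top-level keys are described below.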
event_destinations
array of hashes. An array of event destination hashes. The first event destination that matches the incoming event has the event delivered to it; subsequent event destinations are not used.
logfile
hash, optional. Set the keys in this hash to enable the optional File Delivery Method for events.
http_keep_alive
integer, optional. Enable HTTP Keep-Alive for HTTP endpoints. Set this value to the maximum number of connections. This causes the event processor to re-use the same connection for multiple events, increasing throughput.
http_user_agent
string, optional. The User-Agent header to send to HTTP endpoints. When not specified, a default value is used.
configuration_mode
string, optional. Set this key to "json", as shown in the configuration examples in this document.
concurrency
integer, optional. Set this to the number of concurrent event processors that should execute simultaneously. Use this to increase the throughput of the event processor. By default, concurrency is set to 1 for a single event processor. If this value is set to 0, no event processors will run.
query_limit
integer, optional. The maximum number of events that can be delivered by a single execution of an event processor. By default, there is no limit; setting this value to 0 also means no limit. Do not use this option unless GreenArrow technical support has instructed you to. (It may be needed when a large backlog of events has accumulated, where attempting to process them all in a single run can degrade performance or put significant pressure on available memory.)
db_conn_cache_size
integer, optional. The maximum number of database connections that can be active simultaneously. This setting is only used for event destinations of type custom_sql, and it only matters when destinations use different database connection settings. If a new database connection is required and the maximum has already been reached, the least recently used connection is closed.
db_conn_cache_max_idle
integer, optional. The maximum length of time, in seconds, that a database connection is allowed to be idle. After this length of time, if no further events have been delivered to it, the connection is closed.
commit_batch_size
integer, optional. The maximum number of events that the event processor may attempt to deliver between commits to its database. This setting comes with a performance/reliability tradeoff. Regardless of this setting’s value, the event processor waits a maximum of 1 second between commits.
use_json_for_http_post
boolean, optional, default: false. Determines how events are encoded for HTTP endpoints. When this setting is true, events are posted to the endpoint JSON-encoded; when it is false, events are posted as HTTP form fields. If you have a combination of endpoints that expect form fields and JSON-encoded data, you can set the global value here and override it on individual event destinations. Regardless of this setting’s value, the endpoint must return a 2xx success HTTP status code, such as 200.
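As a sketch of how these optional keys fit together, the fragment below combines a few of them with a single HTTP destination. The numeric values are illustrative placeholders, not recommendations; choose values that suit your environment and endpoints.
{
  "configuration_mode": "json",
  "concurrency": 2,
  "http_keep_alive": 10,
  "commit_batch_size": 100,
  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "http_post",
        "url": "http://example.com/event_receiver?source=ga"
      }
    }
  ]
}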
Verification
To verify your configuration file, run the following command.
# hvmail_event_processor --check-syntax
No errors found in configuration.
This verifies that your configuration file appears to be valid for running the event processor. It does not confirm that any connection was successfully made, or that your connection settings are correct.
Here’s an example of a syntax check on a configuration file that did not declare its event_destinations array.
# hvmail_event_processor --check-syntax
There was a problem with the configuration file /var/hvmail/control/event_processor.json:
configuration must define an 'event_destinations' array
You may also run the event processor in a mode that processes only the events for a single email address. This provides a good way to test your configuration without bringing the event processor service up, leaving all other events in your queue.
hvmail_event_processor --process-by-email "[email protected]"
Reloading
The configuration file is automatically reloaded every 10 seconds. If an error is found in the configuration during a reload, events are not delivered.
Configuration Examples
HTTP Post Example
Here’s an example configuration that posts all events to an HTTP URL:
{
"configuration_mode": "json",
"event_destinations": [
{
"matches": { "all": true },
"destination": {
"type": "http_post",
"url": "http://example.com/event_receiver?source=ga"
}
}
]
}
MySQL Example
Here’s an example configuration that sends all events, including all columns that are present by default as of 2020-04-08, to a MySQL database:
{
"configuration_mode": "json",
"event_destinations": [
{
"matches": { "all": true },
"destination": {
"type": "custom_sql",
"db_dsn": "DBI:mysql:database=greenarrow;host=127.0.0.1",
"db_username": "greenarrow",
"db_password": "secretpassword",
"sql": "INSERT IGNORE INTO events ( id, event_type, event_time, email, listid, list_name, list_label, sendid, bounce_type, bounce_code, bounce_text, click_url, click_tracking_id, studio_rl_seq, studio_rl_recipid, studio_campaign_id, studio_autoresponder_id, studio_is_unique, studio_mailing_list_id, studio_subscriber_id, studio_ip, studio_rl_seq_id, studio_rl_distinct_id, engine_ip, user_agent, json_before, json_after, timestamp, channel, status, is_retry, msguid, sender, mtaid, injected_time, message, outmtaid, outmtaid_ip, outmtaid_hostname, sendsliceid, throttleid, mx_hostname, mx_ip, synchronous, is_privacy_open ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )",
"bind_list": [ "id", "event_type", "event_time", "email", "listid", "list_name", "list_label", "sendid", "bounce_type", "bounce_code", "bounce_text", "click_url", "click_tracking_id", "studio_rl_seq", "studio_rl_recipid", "studio_campaign_id", "studio_autoresponder_id", "studio_is_unique", "studio_mailing_list_id", "studio_subscriber_id", "studio_ip", "studio_rl_seq_id", "studio_rl_distinct_id", "engine_ip", "user_agent", "json_before", "json_after", "timestamp", "channel", "status", "is_retry", "msguid", "sender", "mtaid", "injected_time", "message", "outmtaid", "outmtaid_ip", "outmtaid_hostname", "sendsliceid", "throttleid", "mx_hostname", "mx_ip", "synchronous", "is_privacy_open" ]
}
}
]
}
PostgreSQL Example
Here’s an example configuration that sends all events, including all columns that are present by default as of 2020-04-08, to a PostgreSQL database:
{
"configuration_mode": "json",
"event_destinations": [
{
"matches": { "all": true },
"destination": {
"type": "custom_sql",
"db_dsn": "dbi:Pg:dbname=greenarrow;host=127.0.0.1",
"db_username": "greenarrow",
"db_password": "secretpassword",
"sql": "INSERT INTO events ( id, event_type, event_time, email, listid, list_name, list_label, sendid, bounce_type, bounce_code, bounce_text, click_url, click_tracking_id, studio_rl_seq, studio_rl_recipid, studio_campaign_id, studio_autoresponder_id, studio_is_unique, studio_mailing_list_id, studio_subscriber_id, studio_ip, studio_rl_seq_id, studio_rl_distinct_id, engine_ip, user_agent, json_before, json_after, timestamp, channel, status, is_retry, msguid, sender, mtaid, injected_time, message, outmtaid, outmtaid_ip, outmtaid_hostname, sendsliceid, throttleid, mx_hostname, mx_ip, synchronous, is_privacy_open ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ON CONFLICT DO NOTHING",
"bind_list": [ "id", "event_type", "event_time", "email", "listid", "list_name", "list_label", "sendid", "bounce_type", "bounce_code", "bounce_text", "click_url", "click_tracking_id", "studio_rl_seq", "studio_rl_recipid", "studio_campaign_id", "studio_autoresponder_id", "studio_is_unique", "studio_mailing_list_id", "studio_subscriber_id", "studio_ip", "studio_rl_seq_id", "studio_rl_distinct_id", "engine_ip", "user_agent", "json_before", "json_after", "timestamp", "channel", "status", "is_retry", "msguid", "sender", "mtaid", "injected_time", "message", "outmtaid", "outmtaid_ip", "outmtaid_hostname", "sendsliceid", "throttleid", "mx_hostname", "mx_ip", "synchronous", "is_privacy_open" ]
}
}
]
}
Microsoft SQL Server Example
Here’s an example configuration that sends all events, including all columns that are present by default as of 2020-04-08, to a Microsoft SQL Server database:
{
"configuration_mode": "json",
"event_destinations": [
{
"matches": { "all": true },
"destination": {
"type": "custom_sql",
"db_dsn": "dbi:ODBC:DRIVER={ms-sql};Server=10.0.0.1;port=1433;database=greenarrow",
"db_username": "greenarrow",
"db_password": "secretpassword",
"sql": "INSERT INTO events ( id, event_type, event_time, email, listid, list_name, list_label, sendid, bounce_type, bounce_code, bounce_text, click_url, click_tracking_id, studio_rl_seq, studio_rl_recipid, studio_campaign_id, studio_autoresponder_id, studio_is_unique, studio_mailing_list_id, studio_subscriber_id, studio_ip, studio_rl_seq_id, studio_rl_distinct_id, engine_ip, user_agent, json_before, json_after, timestamp, channel, status, is_retry, msguid, sender, mtaid, injected_time, message, outmtaid, outmtaid_ip, outmtaid_hostname, sendsliceid, throttleid, mx_hostname, mx_ip, synchronous, is_privacy_open ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )",
"bind_list": [ "id", "event_type", "event_time", "email", "listid", "list_name", "list_label", "sendid", "bounce_type", "bounce_code", "bounce_text", "click_url", "click_tracking_id", "studio_rl_seq", "studio_rl_recipid", "studio_campaign_id", "studio_autoresponder_id", "studio_is_unique", "studio_mailing_list_id", "studio_subscriber_id", "studio_ip", "studio_rl_seq_id", "studio_rl_distinct_id", "engine_ip", "user_agent", "json_before", "json_after", "timestamp", "channel", "status", "is_retry", "msguid", "sender", "mtaid", "injected_time", "message", "outmtaid", "outmtaid_ip", "outmtaid_hostname", "sendsliceid", "throttleid", "mx_hostname", "mx_ip", "synchronous", "is_privacy_open" ]
}
}
]
}
HTTP Post and PostgreSQL Example
Here’s an example configuration that sends the id, event_type, and event_time values for studio_open events to a database table, and everything else to an HTTP URL:
{
"configuration_mode": "json",
"event_destinations": [
{
"matches": { "event_type": [ "studio_open" ] },
"destination": {
"type": "custom_sql",
"db_dsn": "dbi:Pg:dbname=greenarrow;host=127.0.0.1",
"db_username": "greenarrow",
"db_password": "secretpassword",
"sql": "INSERT INTO events ( id, event_type, time_int ) VALUES ( ?, ?, ? )",
"bind_list": [ "id", "event_type", "event_time" ]
}
},
{
"matches": { "all": true },
"destination": {
"type": "http_post",
"url": "http://example.com/event_receiver?source=ga"
}
}
]
}
Logfile Example
Here’s an example configuration that writes events to the /var/log/greenarrow-events.log
logfile:
{
"logfile": {
"filename": "/var/log/greenarrow-events.log",
"filename_append_date": false
}
}
Do Nothing Example
This is the default configuration, which leaves all events in queue:
{
"event_destinations": [
{
"matches": {
"all": true
},
"destination": {
"type": "leave_in_queue"
}
}
]
}
Event Processor Logs
The event processor logs are kept in /var/hvmail/log/event-processor and /var/hvmail/log/event-processor2.
Use these commands to diagnose why an event is not being delivered.
For a streaming view of the log as it happens:
tail -F /var/hvmail/log/event-processor*/current | tai64nlocal
To see a particular time range of events:
logdir_select_time --start "2015-11-24 19:00" --end "2015-11-25 00:00" --dir /var/hvmail/log/event-processor | tai64nlocal
Starting and Stopping the Event Processors
To check the running state of the two event processor services:
hvmail_init status | grep hvmail-event-processor
To start the event processors:
svc -u /service/hvmail-event-processor*
To stop the event processors:
svc -d /service/hvmail-event-processor*