Firehose is an event delivery mechanism, available as a premium feature for high-volume HasOffers clients, that pushes tracking and adjustment events to external event consumers. This feature requires an Amazon Web Services (AWS) account, as well as a developer available to implement consuming messages from either Amazon's Simple Queue Service (SQS) or an Amazon Kinesis stream.

This document covers setting up your event receiver, the structure of the overall message you receive, the structure of the event data packaged in each message, the types of events, and handling event deduplication. It also includes references to the full list of Firehose fields and a basic example consumer.

If you are interested in adding Firehose to your HasOffers account, contact your account manager.

Using SQS

Setting Your SQS Queue Up

To receive a message from TUNE's AWS account to yours, you must give TUNE's account privileges on your SQS queue.

  1. In your AWS console, click on the Permissions tab, then add an additional permission.
  2. Set the Principal to: arn:aws:iam::875314598127:user/ho-firehose-prod
  3. From the Actions dropdown list, check SendMessage. Check only SendMessage, as we don't need to perform any other actions on your queue.
  4. Supply your Queue URL for TUNE.
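The permission added through the console in the steps above corresponds to a queue access policy along these lines (the region, account ID, and queue name in the Resource ARN are placeholders you would replace with your own values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::875314598127:user/ho-firehose-prod"
      },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:REGION:YOUR_ACCOUNT_ID:YOUR_QUEUE_NAME"
    }
  ]
}
```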

For documentation from Amazon on setting your queue up, see Amazon's SQS documentation.

Once you've completed the above steps, supply the queue URL to TUNE so we can deliver messages to you. For more information on queue URLs, see Amazon's SQS documentation.

Message Structure

Firehose message bodies are JSON encapsulations of tracking and adjustment events. Each queue message is a JSON object containing various control fields and a list of the events as individual JSON objects. (The message itself is also known as an "envelope".)

The message body is gzipped and then base64-encoded before delivery. Your consumer must base64-decode the body (or pass the encoded body to Amazon's boto library), then gunzip the result to get the JSON. The envelope takes this form:
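The decode step can be sketched in a few lines of Python using only the standard library. The helper below round-trips a synthetic envelope so it runs standalone; in a real consumer, `raw_body` would be the message body received from SQS:

```python
import base64
import gzip
import json

def decode_envelope(raw_body):
    """Decode a Firehose SQS message body: base64-decode, gunzip, parse JSON."""
    compressed = base64.b64decode(raw_body)
    payload = gzip.decompress(compressed)
    return json.loads(payload.decode("utf-8"))

# Round-trip demonstration with a synthetic envelope.
envelope = {"dispatch_class": "event",
            "data": [{"event_id": "abc", "action": "click"}]}
body = base64.b64encode(
    gzip.compress(json.dumps(envelope).encode("utf-8"))).decode("ascii")
print(decode_envelope(body)["data"][0]["action"])  # click
```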

"data":OBJECT }
  • dispatch_id: ID of the message envelope, such as "de305d54-75b4-431b-adb2-eb6b9e546014". Do not use this for event deduplication. See Handling Message Deduplication below.
  • dispatch_timestamp: time the event batch was created, such as "14395106601".
  • is_test: an internal control field. Ignore this field.
  • network_id: your network ID.
  • target_class: always "stats". Future versions may extend this field to other values.
  • dispatch_class: always "event". Future versions may extend this field to other values.
  • data: JSON object containing list of one or more tracking or adjustment events, as detailed in the following section.

Event Structure

Each message's data field contains a list of one or more event objects. Each event object contains two identifying fields—event_id and action—and a number of additional fields pertaining to that event. The data field in the envelope takes this form:

  • event_id: unique ID for that event, such as "de305d54-75b4-431b-adb2-eb6b9e546014". Use this for deduplication.
  • action: type of action for that event: "impression", "click", or "conversion".
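Since every event carries these two identifying fields, they are usually enough to route events in a consumer. A minimal dispatch sketch (the counting is a stand-in for your real per-event processing):

```python
def handle_events(envelope):
    """Route each event in the envelope's data list by its action field."""
    counts = {"impression": 0, "click": 0, "conversion": 0}
    for event in envelope["data"]:
        action = event["action"]  # "impression", "click", or "conversion"
        counts[action] += 1       # a real consumer would process the event here
    return counts

envelope = {"data": [{"event_id": "a", "action": "click"},
                     {"event_id": "b", "action": "conversion"}]}
print(handle_events(envelope))  # {'impression': 0, 'click': 1, 'conversion': 1}
```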

For all potential fields, see our Firehose fields document. Note that there are two tabs at the bottom of the page: the first tab covers tracking events (impressions, clicks, and conversions); the second tab covers adjustment events.

Click here for the Firehose fields document

Event Types

There are two types of events: standard events (a.k.a. tracking events) and adjustment events.

Standard Events

Standard events cover the raw tracking events from our ad servers: impressions, clicks, and conversions. These come from your raw, unadjusted stats. You can see these events in the application through the event tracer and stat reports.

JSON objects for standard events contain all applicable fields for the event. Fields that do not apply to that particular event are excluded. See Event Structure above for a link to all potential fields.

Adjustment Events

Adjustment events cover the differences applied to your stats, such as rejecting a conversion.

JSON objects for adjustment events contain fields for the adjustment amounts, along with other information that affects stats aggregation where applicable, such as affiliate sub IDs and advertiser sub IDs. See Event Structure above for a link to all potential fields.

For example, if you reject a conversion that has a payout of $10 and revenue of $20, the adjustment event data includes the following fields:
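A rough sketch of what such an adjustment object could look like follows. The field names for the amounts and the adjust_action value are assumptions here, not confirmed by this document; check the Firehose fields document for the authoritative field list. Other identifying fields would appear alongside these:

```json
{
  "event_id": "de305d54-75b4-431b-adb2-eb6b9e546014",
  "action": "conversion",
  "adjust_action": "...",
  "payout": -10.00,
  "revenue": -20.00
}
```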


Adjustment events for impressions and clicks are handled in a similar fashion, with the action set to "impression" or "click".

Determining Event Type

To determine whether an event is a standard event or an adjustment event, check for the presence of an adjust_action field. If the field is present, the event is an adjustment; if not, the event is standard.

From there, use the action field to determine if the event is for an impression, click, or conversion.
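The two checks above can be sketched as a small helper:

```python
def classify(event):
    """Return (event_type, action) for a Firehose event object."""
    event_type = "adjustment" if "adjust_action" in event else "standard"
    return event_type, event["action"]

print(classify({"event_id": "a", "action": "click"}))
# ('standard', 'click')
print(classify({"event_id": "b", "action": "conversion", "adjust_action": "x"}))
# ('adjustment', 'conversion')
```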

Disregard the adjust_action Adjustment Field

The adjust_action field covers the internal action in HasOffers, based on various conditions and database record interactions. For your purposes, it's only useful for determining if the event type is adjustment or standard.

Handling Event Deduplication

Each tracking event has an alphanumeric identifier, event_id. Use this value for deduplication purposes. Do not use the dispatch_id in the message envelope for deduplication.

The value of event_id is guaranteed to be unique only within a given network. If developing code to work with multiple networks, you must validate against network_id in conjunction with event_id.
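A minimal multi-network deduplication sketch follows, keyed on the (network_id, event_id) pair as described above. It assumes network_id has already been attached to each event (it arrives in the envelope on the SQS path and inside the event itself on the Kinesis path); an in-memory set is used for illustration, where a production consumer would use durable storage:

```python
seen = set()

def is_duplicate(event):
    """Deduplicate on (network_id, event_id); event_id is unique only per network."""
    key = (event["network_id"], event["event_id"])
    if key in seen:
        return True
    seen.add(key)
    return False

events = [
    {"network_id": 1, "event_id": "a", "action": "click"},
    {"network_id": 2, "event_id": "a", "action": "click"},  # same event_id, different network
    {"network_id": 1, "event_id": "a", "action": "click"},  # true duplicate
]
print([is_duplicate(e) for e in events])  # [False, False, True]
```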

Basic Example Consumer

You need a mechanism to consume messages once they start flowing into your SQS queue. See our example message consumer, written in Python and linked to below, for a basic sense of structure.

Our example consumer reads one event at a time from a message and passes that event to a method called write. If you use this example to create a prototype, place your event-processing code in the write method. That code depends on your systems and storage mechanisms, which are outside the purview of HasOffers.

Disclaimer: This code is meant only as an example. Depending on your architecture, it may not be the most efficient way to consume events from the queue.

Click here for the sample SQS consumer code

Using Kinesis

Kinesis Streams is Amazon's streaming data service. It can be used to capture and analyze terabytes of data from many sources. More information is available in Amazon's Kinesis documentation.

Setting Your Kinesis Stream Up

First, you need to provide your Kinesis Stream Name and AWS region to your HasOffers account manager or sales engineer, so Firehose is allowed to put records into your Kinesis stream. You also need to request the External ID from us for the next step.

Once we have that information and you have the External ID, you can set Firehose up on your end. Create an IAM role for our AWS account. Here is a document from AWS on how to set up a cross-account IAM policy. Your role will differ slightly from the document, as you will be granting a specific managed policy.

The AWS account_id you need to grant access to is: 875314598127. Our specific production user ARN is: arn:aws:iam::875314598127:user/ho-firehose-prod. Include the External ID provided by us.

You will need to grant the following permissions in this IAM role.


Actions                Resource               Purpose
DescribeStream         Amazon Kinesis stream  Before attempting to write records, the producer should check that the stream exists and is active
PutRecord, PutRecords  Amazon Kinesis stream  Write records to the stream
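Expressed as an IAM policy document, these permissions would look roughly like the following (the region, account ID, and stream name in the Resource ARN are placeholders for your own values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:DescribeStream",
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "arn:aws:kinesis:REGION:YOUR_ACCOUNT_ID:stream/YOUR_STREAM_NAME"
    }
  ]
}
```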

When these steps are completed, the role's permissions should look something like:

  "Version": "2016-10-17",
  "Statement": [
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": ""
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "abc-123"

Message Structure

Each message delivered via Kinesis is a single, direct event, rather than the event/envelope structure used by our SQS sender. This means the messages from Kinesis take on the structure of an event, but include the network's ID (as network_id) as part of the data sent.

Deduplication is still done via the event_id field, as with events delivered via SQS. Events are passed as unnamed JSON objects, as shown in the examples below:

Example Click Message


Example Conversion Message

Have a Question? Please contact technical support.