
Feature Output Assignment

As part of the Causal stack, you’ll run an impression server, which handles all the low-latency communication between your servers and Causal. The impression server provides the tooling needed to determine the correct Feature outputs to show a user, run experiments, and collect the data required to evaluate those experiments. It combines the feature definitions in your FDL with the configuration specified in the Causal Web Tools to determine the feature outputs to render for the user, based on the state of the feature and any active experiments running on it.

When a Causal client requests a feature from the impression server, it proceeds through six phases to determine the correct feature outputs to render:

  1. Validation
  2. Plugins Loaded
  3. QA Overrides
  4. Audience Membership Calculated
  5. Experiment Evaluation
  6. Rollouts Determined
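
Conceptually, these phases form a pipeline: each phase can refine the feature outputs or stop evaluation entirely. A minimal sketch of that shape in TypeScript (the types and names are illustrative, not Causal’s actual implementation):

```typescript
// A hypothetical sketch of the six phases as a pipeline; the types and
// names are illustrative, not Causal's actual implementation.
type Attrs = Record<string, unknown>;

interface Phase {
  name: string;
  // Returns updated feature outputs, or null to stop evaluation
  // (e.g. a missing required argument, or an ineligible user).
  run: (context: Attrs, outputs: Attrs) => Attrs | null;
}

function evaluateFeature(phases: Phase[], context: Attrs): Attrs | null {
  let outputs: Attrs = {};
  for (const phase of phases) {
    const next = phase.run(context, outputs);
    if (next === null) return null; // no feature outputs rendered
    outputs = next;
  }
  return outputs;
}
```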

Validation

During the validation phase, we evaluate the arguments passed in when requesting the feature and make sure all required arguments are present (if a required argument is missing, we’ll return an error to the client). From there, if the feature arguments match a prior impression for that session, we’ll return the same feature outputs (see Memoization).
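
As a rough sketch, assuming a simple in-memory cache keyed on the session and arguments (the cache shape and all names here are assumptions, not Causal’s implementation):

```typescript
// Hypothetical validation + memoization; the cache shape and names are
// assumptions, not Causal's implementation.
type Args = Record<string, unknown>;
type Outputs = Record<string, unknown>;

// Prior impressions for the session, keyed by feature and arguments.
const impressions = new Map<string, Outputs>();

function validateAndMemoize(
  sessionId: string,
  feature: string,
  required: string[],
  args: Args,
  compute: () => Outputs,
): Outputs {
  // Error out if any required argument is missing.
  for (const name of required) {
    if (!(name in args)) throw new Error(`missing required argument: ${name}`);
  }
  // If the arguments match a prior impression for this session, return
  // the same outputs. (A real cache would serialize the arguments in an
  // order-independent way.)
  const key = `${sessionId}:${feature}:${JSON.stringify(args)}`;
  const cached = impressions.get(key);
  if (cached !== undefined) return cached;
  const outputs = compute();
  impressions.set(key, outputs);
  return outputs;
}
```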

Plugins Loaded

Next, we’ll construct the context for the feature by running any applicable plugins specified in the FDL. The feature arguments, along with any data returned by the plugins, constitute the feature context. The feature context is used to determine audience membership and whether the user is eligible for the feature.
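
Conceptually, the context is just the feature arguments merged with each plugin’s results. A sketch, with a hypothetical plugin interface:

```typescript
// Hypothetical plugin interface; the real FDL plugin contract may differ.
interface Plugin {
  name: string;
  load: (args: Record<string, unknown>) => Record<string, unknown>;
}

// The feature context = the feature arguments plus each plugin's data.
function buildFeatureContext(
  args: Record<string, unknown>,
  plugins: Plugin[],
): Record<string, unknown> {
  let context = { ...args };
  for (const plugin of plugins) {
    context = { ...context, ...plugin.load(args) };
  }
  return context;
}
```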

QA Overrides

After that, any QA overrides set in the Causal UI for the current user are evaluated and applied to the feature outputs. QA overrides always take precedence over any other output logic. At this point, any feature-attributes set by the QA overrides will be returned to the client; subsequent phases only apply to feature-attributes that the QA overrides have not set.
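
One way to picture this is that overrides are applied first and the attributes they set are locked, so later phases skip them. A sketch with illustrative names:

```typescript
// Hypothetical QA-override application; names are illustrative.
function applyQaOverrides(
  outputs: Record<string, unknown>,
  overrides: Record<string, unknown>,
): { outputs: Record<string, unknown>; locked: Set<string> } {
  // Overrides take precedence over any other output logic, and the
  // attributes they set are locked so later phases leave them alone.
  return {
    outputs: { ...outputs, ...overrides },
    locked: new Set(Object.keys(overrides)),
  };
}
```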

Audience Membership Calculated

Once we have the feature context defined, we calculate Audience membership based on it. Audience membership in Causal is session-based, and only considers events that have happened within the current session, not prior sessions.

After updating the user’s audience membership for the session, we check whether they are still eligible for the feature, since a feature may be limited to certain audiences. If the user is ineligible based on their calculated audiences, we’ll return nothing. If they are eligible, we proceed to the experiment evaluation phase.
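
A sketch of this eligibility gate, assuming audiences are modeled as predicates over the session-scoped feature context (the interface is hypothetical):

```typescript
// Hypothetical audience model: a named predicate over the feature context.
interface Audience {
  name: string;
  matches: (context: Record<string, unknown>) => boolean;
}

function eligibleForFeature(
  context: Record<string, unknown>,
  audiences: Audience[],
  allowedAudiences: Set<string> | null, // null => feature open to everyone
): boolean {
  // Session-scoped membership: which audiences match this context.
  const membership = new Set(
    audiences.filter((a) => a.matches(context)).map((a) => a.name),
  );
  if (allowedAudiences === null) return true;
  // Ineligible users get no feature outputs (the server returns nothing).
  return [...allowedAudiences].some((name) => membership.has(name));
}
```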

Experiment Evaluation

First, we check whether any experiments are active for the feature. If none are, we render the feature as-is. Otherwise, we proceed with evaluating the experiments and assigning the user to a variant.

When experiments are active, we loop through each one and evaluate whether it should modify the user’s feature outputs. Experiments are evaluated in ascending datetime order, so if two experiments modify the same feature-attribute, the experiment launched first wins.

We first check whether the user has already been assigned a variant for the experiment. If so, we return the feature-attribute values set by that variant; all other feature-attributes set in the feature context during the earlier phases are left unchanged.

For users not previously assigned to a variant in the experiment, we evaluate the user’s audiences and check whether they are eligible for the experiment. If not, we return the feature unmodified. If they are eligible, we calculate the user’s variant assignment by hashing the split_key and the experiment ID, and assign the user to that variant.

Finally, we update the feature outputs with the outputs set by the assigned experiment variant.
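
Putting the loop together, here is a hedged sketch. The hashing scheme shown, a hash of the split_key and experiment ID mapped onto variant weights, is an assumption consistent with the description above, not Causal’s exact algorithm; because the hash is deterministic, a returning user lands in the same variant.

```typescript
import { createHash } from "node:crypto";

// Hypothetical experiment model; the hashing details are an assumption.
interface Variant {
  name: string;
  weight: number; // fraction of traffic; weights sum to 1
  outputs: Record<string, unknown>;
}

interface Experiment {
  id: string;
  launchedAt: Date;
  variants: Variant[];
  isEligible: (context: Record<string, unknown>) => boolean;
}

// Deterministically map (splitKey, experimentId) to a point in [0, 1);
// a returning user always lands on the same point, hence the same variant.
function bucket(splitKey: string, experimentId: string): number {
  const digest = createHash("sha256")
    .update(`${splitKey}:${experimentId}`)
    .digest();
  return digest.readUInt32BE(0) / 0x1_0000_0000;
}

function assignVariant(splitKey: string, exp: Experiment): Variant {
  const point = bucket(splitKey, exp.id);
  let cumulative = 0;
  for (const variant of exp.variants) {
    cumulative += variant.weight;
    if (point < cumulative) return variant;
  }
  return exp.variants[exp.variants.length - 1];
}

function applyExperiments(
  outputs: Record<string, unknown>,
  context: Record<string, unknown>,
  splitKey: string,
  experiments: Experiment[],
  locked: Set<string>, // attributes already set by QA overrides
): Record<string, unknown> {
  // Ascending launch order: the experiment launched first wins a conflict.
  const ordered = [...experiments].sort(
    (a, b) => a.launchedAt.getTime() - b.launchedAt.getTime(),
  );
  const result = { ...outputs };
  for (const exp of ordered) {
    if (!exp.isEligible(context)) continue; // ineligible: leave feature as-is
    const variant = assignVariant(splitKey, exp);
    for (const [attr, value] of Object.entries(variant.outputs)) {
      if (!locked.has(attr)) {
        result[attr] = value;
        locked.add(attr); // the first experiment to set an attribute wins
      }
    }
  }
  return result;
}
```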

Rollouts Determined

Once all experiments that include the feature have been processed, we execute any active rollouts (rollouts that are ramping to a percentage of traffic over a time period). Only users who were not assigned to an experiment variant in the prior phase are considered for a rollout. For any feature-attribute values with an active rollout, we split users based on the split_key, assigning some the new value and the rest the old value.
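
The ramp split can be sketched the same way: hash the split_key against the rollout and compare against the current ramp percentage (again an assumption consistent with the description, not Causal’s exact scheme):

```typescript
import { createHash } from "node:crypto";

// Hypothetical rollout ramp; the hashing scheme is an assumption.
interface Rollout {
  id: string;
  attribute: string;
  oldValue: unknown;
  newValue: unknown;
  rampFraction: number; // current fraction of traffic on the new value, 0..1
}

function applyRollout(
  outputs: Record<string, unknown>,
  splitKey: string,
  rollout: Rollout,
  assignedToExperiment: boolean,
): Record<string, unknown> {
  // Users already assigned to an experiment variant skip the rollout.
  if (assignedToExperiment) return outputs;
  const digest = createHash("sha256")
    .update(`${splitKey}:${rollout.id}`)
    .digest();
  const point = digest.readUInt32BE(0) / 0x1_0000_0000;
  return {
    ...outputs,
    [rollout.attribute]:
      point < rollout.rampFraction ? rollout.newValue : rollout.oldValue,
  };
}
```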

info

Running an experiment while a rollout ramp is active can make your experiment metrics noisier. The experiment stats collapse all control users into a single group, but during the rollout ramp some control users will have received the “old” value while others received the “new” one. The statistics are still valid, and you can evaluate your test relative to the control.