Script 1123: AdGroup CPA Outlier

Purpose

The script identifies and tags AdGroups whose Cost Per Acquisition (CPA) is abnormally high relative to their campaign peers, using a 30-day lookback window that excludes the most recent day.

To Elaborate

The Python script detects and tags AdGroups within advertising campaigns that exhibit unusually high Cost Per Acquisition (CPA). It analyzes data over a 30-day period, excluding the most recent day to account for conversion lag, and calculates performance metrics such as CPA, Return on Ad Spend (ROAS), conversion rate, and average cost per click for each AdGroup. CPA anomalies are then identified with the Interquartile Range (IQR) method, which sets an upper bound above which values are treated as rare events. An AdGroup is flagged as an outlier only if its CPA exceeds that bound within its campaign and its spend is above the campaign median, which filters out low-spend noise. The result is a short list of underperforming AdGroups that may require optimization or further investigation.
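
As a minimal, hypothetical sketch of that core idea (illustrative column names and numbers only; the production script below works on Marin report columns and aggregates a 30-day window first):

import pandas as pd

# Hypothetical AdGroup rollup for a single campaign (toy numbers for illustration).
df = pd.DataFrame({
    'group': ['A', 'B', 'C', 'D', 'E'],
    'cost':  [500.0, 450.0, 600.0, 520.0, 700.0],
    'conv':  [25,    22,    28,    24,    9],
})
df['cpa'] = df['cost'] / df['conv']

# IQR fence on CPA: anything above Q3 + thresh * IQR counts as abnormally expensive.
thresh = 0.7  # mirrors the script's ANOMALY_IQR_THRESHOLD; 1.5 is the classic, stricter fence
q1, q3 = df['cpa'].quantile(0.25), df['cpa'].quantile(0.75)
upper_bound = q3 + thresh * (q3 - q1)

# Only keep groups that also spend more than the campaign median, so low-spend noise is ignored.
is_outlier = (df['cpa'] > upper_bound) & (df['cost'] > df['cost'].median())
print(df[is_outlier])  # group E: high CPA on meaningful spend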

Walking Through the Code

  1. Data Preparation
    • The script begins by defining a lookback period of 30 days, excluding the most recent day to account for conversion lag.
    • It filters the input data to include only the necessary columns and aggregates performance metrics such as publication cost, conversions, revenue, and clicks by AdGroup within each campaign.
    • Rows without cost or conversions are removed to ensure meaningful analysis.
  2. Anomaly Detection Functions
    • The script defines functions to detect anomalies using the IQR method. The get_feature_anomalies function identifies outliers based on a specified threshold, while is_anomaly_iqr calculates the IQR and determines upper and lower bounds for detecting anomalies.
    • The find_peer_anomaly function applies these anomaly detection methods to each campaign, identifying AdGroups with CPA significantly higher than the campaign average.
  3. Identifying CPA Anomalies
    • For each campaign, the script calls find_peer_anomaly on its AdGroups, flagging those whose CPA falls above the IQR upper bound and whose spend exceeds the campaign median (the threshold sketch after this list shows how the IQR multiplier shapes that bound).
    • It also computes the campaign's average CPA, which is used in the descriptive tag explaining that a flagged AdGroup's CPA is much higher than the campaign average.
  4. Output Preparation
    • The script compiles the identified anomalies into a DataFrame. If no anomalies are found, it prepares an empty output DataFrame with the necessary columns.
    • Finally, it outputs the tagged AdGroups, providing insights into which AdGroups may require further attention due to their high CPA performance.
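
The IQR multiplier (ANOMALY_IQR_THRESHOLD) is the main tuning knob. A small, hypothetical comparison of the script's 0.7 against the classic Tukey value of 1.5 shows how a lower multiplier widens the net:

import pandas as pd

# Hypothetical CPAs of AdGroups in one campaign.
cpa = pd.Series([20.0, 22.0, 24.0, 26.0, 28.0, 33.0, 40.0])
q1, q3 = cpa.quantile(0.25), cpa.quantile(0.75)
iqr = q3 - q1

for thresh in (1.5, 0.7):  # classic fence vs. the script's ANOMALY_IQR_THRESHOLD
    upper_bound = q3 + thresh * iqr
    flagged = cpa[cpa > upper_bound].tolist()
    print(f"thresh={thresh}: upper bound {upper_bound:.2f}, flagged CPAs {flagged}")

# thresh=1.5 flags nothing (upper bound 41.75), while thresh=0.7 flags the 40.00 CPA (upper bound 35.75).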

Vitals

  • Script ID : 1123
  • Client ID / Customer ID: 1306913045 / 60268001
  • Action Type: Bulk Upload
  • Item Changed: AdGroup
  • Output Columns: Account, Campaign, Group, AUTOMATION - Outlier
  • Linked Datasource: M1 Report
  • Reference Datasource: None
  • Owner: dwaidhas@marinsoftware.com
  • Created by dwaidhas@marinsoftware.com on 2024-05-22 16:09
  • Last Updated by dwaidhas@marinsoftware.com on 2024-05-22 16:26
> See it in Action

Python Code

#
# Tag AdGroup if CPA performance is abnormally high within Campaign
#
#
# Author: Dana Waidhas 
# Date: 2024-05-22

RPT_COL_GROUP = 'Group'
RPT_COL_DATE = 'Date'
RPT_COL_ACCOUNT = 'Account'
RPT_COL_CAMPAIGN = 'Campaign'
RPT_COL_CAMPAIGN_ID = 'Campaign ID'
RPT_COL_GROUP_ID = 'Group ID'
RPT_COL_PUB_COST = 'Pub. Cost $'
RPT_COL_COST_PER_CONV = 'Cost/Conv. $'
RPT_COL_ROAS = 'ROAS'
RPT_COL_CONV_RATE = 'Conv. Rate %'
RPT_COL_AVG_CPC = 'Avg. CPC $'
RPT_COL_CLICKS = 'Clicks'
RPT_COL_CONV = 'Conv.'
RPT_COL_REVENUE = 'Revenue $'
RPT_COL_IMPR = 'Impr.'
BULK_COL_ACCOUNT = 'Account'
BULK_COL_CAMPAIGN = 'Campaign'
BULK_COL_AUTOMATION_OUTLIER = 'AUTOMATION - Outlier'

outputDf[BULK_COL_AUTOMATION_OUTLIER] = np.nan

################## Configurable Param ##################

# An IQR multiplier of 1.5 looks for rare events with less than a 3% chance of occurring; lower values include more events
ANOMALY_IQR_THRESHOLD = 0.7
LOOKBACK_DAYS = 30
CONVERSION_LAG_DAYS = 1

########################################################



## Data Prep

print(inputDf[RPT_COL_DATE].min(), inputDf[RPT_COL_DATE].max())

# 30-day lookback without most recent CONVERSION_LAG_DAYS days due to conversion lag
start_date = pd.to_datetime(datetime.date.today() - datetime.timedelta(days=CONVERSION_LAG_DAYS+LOOKBACK_DAYS))
end_date = pd.to_datetime(datetime.date.today() - datetime.timedelta(days=CONVERSION_LAG_DAYS))

df_reduced = inputDf[ (inputDf[RPT_COL_DATE] >= start_date) & (inputDf[RPT_COL_DATE] <= end_date) ]

if (df_reduced.shape[0] > 0):
    print("reduced dates\\n", min(df_reduced[RPT_COL_DATE]), max(df_reduced[RPT_COL_DATE]))
else:
    print("no more input to process")

# reduce to needed columns
df_reduced = df_reduced[[RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP, RPT_COL_DATE, RPT_COL_PUB_COST, RPT_COL_CONV, RPT_COL_REVENUE, RPT_COL_CLICKS]].copy()

# specify the columns to sum
cols_to_sum = [RPT_COL_PUB_COST, RPT_COL_CONV, RPT_COL_REVENUE, RPT_COL_CLICKS]

# apply sum operation only to the specified columns
df_group_perf = df_reduced.groupby([RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP])[cols_to_sum].sum()


# remove rows without cost or conversions
df_group_perf = df_group_perf[(df_group_perf[RPT_COL_CONV] > 0) & (df_group_perf[RPT_COL_PUB_COST] > 0)]

# index by campaign
df_group_perf = df_group_perf.reset_index().set_index([RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN]).sort_index()

# calculate features
df_group_perf[RPT_COL_COST_PER_CONV] = (df_group_perf[RPT_COL_PUB_COST] / df_group_perf[RPT_COL_CONV])
df_group_perf[RPT_COL_ROAS] = df_group_perf[RPT_COL_REVENUE] / df_group_perf[RPT_COL_PUB_COST]
df_group_perf[RPT_COL_CONV_RATE] = df_group_perf[RPT_COL_CONV] / df_group_perf[RPT_COL_CLICKS]
df_group_perf[RPT_COL_AVG_CPC] = (df_group_perf[RPT_COL_PUB_COST] / df_group_perf[RPT_COL_CLICKS])

## Define Anomaly Functions

# Finds anomalies using a certain function (e.g. sigma rule, iqr etc.)
# data: DataFrame
#     Dataset with features
# func: func
#     Function to use to find anomalies
# features: list
#     Feature list
# thresh: float
#     Threshold value (e.g. 2/3 * sigma, 2/3 * iqr)
# Returns: tuple
#     (anomalies_summary DataFrame, outliers_over Series, outliers_under Series)
def get_feature_anomalies(data, func, features=None, thresh=1.5):

    if features:
        features_to_check = features
    else:
        features_to_check = data.columns 
        
    outliers_over = pd.Series(data=[False] * data.shape[0], index=data[features_to_check].index, name='is_outlier')
    outliers_under = pd.Series(data=[False] * data.shape[0], index=data[features_to_check].index, name='is_outlier')

    anomalies_summary = {}
    for feature in features_to_check:
        anomalies_mask_over, anomalies_mask_under, upper_bound, lower_bound = func(data, feature, thresh=thresh)
        anomalies_mask_combined = pd.concat([anomalies_mask_over, anomalies_mask_under], axis=1).any(axis=1)
        anomalies_summary[feature] = [upper_bound, lower_bound, sum(anomalies_mask_combined), 100*sum(anomalies_mask_combined)/len(anomalies_mask_combined)]
        outliers_over[anomalies_mask_over[anomalies_mask_over].index] = True
        outliers_under[anomalies_mask_under[anomalies_mask_under].index] = True
        
#         print("anomalies_mask_combined: ", anomalies_mask_combined)
#         print("Outliers: ", outliers)
        
    anomalies_summary = pd.DataFrame(anomalies_summary).T
    anomalies_summary.columns=['upper_bound', 'lower_bound', 'anomalies_count', 'anomalies_percentage']
    
    anomalies_ratio = round(anomalies_summary['anomalies_percentage'].sum(), 2)
#     print(f'Total Outliers Ratio: {anomalies_ratio} %')
    
    return anomalies_summary, outliers_over, outliers_under

# Finds outliers/anomalies using IQR
# data: DataFrame
# col: str
# thresh: float
#     Number of IQRs to apply
# Returns: tuple
#     (over-mask Series, under-mask Series, upper_bound, lower_bound)
def is_anomaly_iqr(data, col, thresh):

    IQR = data[col].quantile(0.75) - data[col].quantile(0.25)
    upper_bound = data[col].quantile(0.75) + (thresh * IQR)
    lower_bound = data[col].quantile(0.25) - (thresh * IQR)
#     print("IQR calc: ", col, IQR, upper_bound, lower_bound)
#     anomalies_mask = pd.concat([data[col] > upper_bound, data[col] < lower_bound], axis=1).any(axis=1)
    anomalies_mask_over = data[col] > upper_bound
    anomalies_mask_under = data[col] < lower_bound
#     print("Anomalies mask: ", (anomalies_mask_over, anomalies_mask_under))
    
    return anomalies_mask_over, anomalies_mask_under, upper_bound, lower_bound

def find_peer_anomaly(df_slice, features, iqr_threshold=1.5, outliers_desired=(True, True)):
    
    (want_outliers_over, want_outliers_under) = outliers_desired
   
    if (df_slice.shape[0] < 3):
        return
    
    idx = df_slice.index.unique()
    
    df_slice.reset_index(inplace=True)
    
    anomalies_summary_iqr, outlier_over_iqr, outlier_under_iqr = get_feature_anomalies( \
                df_slice, \
                func=is_anomaly_iqr, \
                features=features, \
                thresh=iqr_threshold)
    
    median_cost = df_slice[RPT_COL_PUB_COST].median()
    
#     print(f"over: {outlier_over_iqr}")
#     print("under: {outlier_under_iqr}")
    
    # include over/under outliers as desired
    is_outlier_iqr = np.logical_or(
                        np.logical_and(want_outliers_over, outlier_over_iqr),
                        np.logical_and(want_outliers_under, outlier_under_iqr)
    )
    
#     print("is_outlier\\n", is_outlier_iqr)
    
    # ignore anomalies from low-spend AdGroups (keep only those with spend above the campaign median)
    is_outlier_iqr = np.logical_and(is_outlier_iqr, df_slice[RPT_COL_PUB_COST] > median_cost)
    
    if sum(is_outlier_iqr) > 0:
        print(">>> ANOMALY", idx)
        print(anomalies_summary_iqr)
        cols = [RPT_COL_GROUP, RPT_COL_PUB_COST, RPT_COL_CONV, RPT_COL_REVENUE] + features
        print(df_slice.loc[is_outlier_iqr, cols])
        
    return is_outlier_iqr

## Find CPA Anomalies

print("df_group_perf shape:", df_group_perf.shape)
print("df_group_perf", tableize(df_group_perf.head()))
df_anomalies = pd.DataFrame()

# annotate via Marin Dimensions
def rowFunc(row):
    return 'CPA ${:,.2f} is much higher than campaign avg ${:,.2f}'.format(
        row[RPT_COL_COST_PER_CONV], \
        row[RPT_COL_COST_PER_CONV + '_avg']
    )

for campaign_idx in df_group_perf.index.unique():
    df_campaign = df_group_perf.loc[[campaign_idx]].copy()
    df_campaign[RPT_COL_COST_PER_CONV + '_avg'] = df_campaign[RPT_COL_COST_PER_CONV].mean()  # campaign average CPA, used in the tag message

    df_campaign[BULK_COL_AUTOMATION_OUTLIER] = np.nan
    outliers = find_peer_anomaly(df_campaign, [RPT_COL_COST_PER_CONV], iqr_threshold=ANOMALY_IQR_THRESHOLD, outliers_desired=(True,False))

    if outliers is not None and sum(outliers) > 0:
        df_outliers = df_campaign.loc[outliers].copy()
        df_outliers[BULK_COL_AUTOMATION_OUTLIER] = df_outliers.apply(rowFunc, axis=1)
        print(df_outliers)
        df_anomalies = pd.concat([df_anomalies, df_outliers], axis=0)

## Prepare Output

if df_anomalies.empty:
    outputDf = pd.DataFrame(columns=[RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP, BULK_COL_AUTOMATION_OUTLIER])
    print("No anomalies found")
else:
    print("anomaly examples", tableize(df_anomalies.head()))
    outputDf = df_anomalies[[RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP, BULK_COL_AUTOMATION_OUTLIER]]
    print("output size", outputDf.shape)
    print("output examples", tableize(outputDf.head()))


