Script 527: Ad Group Outlier
Purpose
Python script to tag ad groups if their CPA performance is abnormally high within a campaign.
To Elaborate
This Python script analyzes ad group performance data within a specified date range and identifies ad groups with abnormally high cost per acquisition (CPA) compared to the campaign average. It then tags these ad groups as outliers.
Key business rules:
- The script uses a 30-day lookback period, excluding the most recent 3 days due to conversion lag.
- Only ad groups with non-zero cost and conversions are considered.
- Outliers are determined using the interquartile range (IQR) method, with a threshold of 2 IQRs (the code spells it "IRQ"); see the sketch after this list.
- Only ad groups whose CPA is abnormally high relative to the campaign average are tagged; unusually low CPAs are ignored, as are outliers whose spend is at or below the campaign median.
- Outliers are tagged using the “AUTOMATION - Outlier” label.
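To make the IQR rule concrete, here is a minimal, self-contained sketch of the thresholding logic; the sample CPA values are illustrative only:

import pandas as pd

# Illustrative CPA values for ad groups within one campaign
cpa = pd.Series([12.0, 14.5, 13.2, 15.1, 48.0], name='Cost/Conv. $')

q1, q3 = cpa.quantile(0.25), cpa.quantile(0.75)
iqr = q3 - q1               # interquartile range
upper_bound = q3 + 2 * iqr  # the script uses a threshold of 2 IQRs

# Only CPAs above the upper bound are flagged (high-CPA outliers)
print(cpa[cpa > upper_bound])  # flags the 48.0 ad group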
Walking Through the Code
- The script starts by defining column constants and initializing the output dataframe.
- Data preparation:
  - The script prints the minimum and maximum dates in the input data.
  - A 30-day lookback period is defined, excluding the most recent 3 days.
  - The input data is filtered to the rows within that date range.
  - Only the columns needed for the analysis are kept.
  - Metrics (cost, conversions, revenue, clicks) are summed across dates for each ad group.
  - Rows without cost or conversions are removed.
  - The dataframe is indexed by account and campaign.
  - Derived features (cost per conversion, ROAS, conversion rate, average CPC) are calculated. (A condensed sketch of this pipeline follows this list.)
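A condensed sketch of this preparation pipeline, using the script's report column names but hypothetical sample rows:

import datetime
import pandas as pd

today = pd.Timestamp.today().normalize()
# Hypothetical stand-in for inputDf: two days of one ad group
df = pd.DataFrame({
    'Date': [today - pd.Timedelta(days=10), today - pd.Timedelta(days=9)],
    'Account': ['A', 'A'], 'Campaign': ['C1', 'C1'], 'Group': ['G1', 'G1'],
    'Pub. Cost $': [10.0, 5.0], 'Conv.': [1, 1],
    'Revenue $': [20.0, 10.0], 'Clicks': [4, 2],
})

# 30-day window ending 3 days ago, mirroring the script's conversion-lag buffer
start = pd.to_datetime(datetime.date.today() - datetime.timedelta(days=33))
end = pd.to_datetime(datetime.date.today() - datetime.timedelta(days=3))
df = df[(df['Date'] >= start) & (df['Date'] <= end)]

# Sum metrics per ad group, drop zero-cost/zero-conversion rows, derive CPA
perf = df.groupby(['Account', 'Campaign', 'Group'])[['Pub. Cost $', 'Conv.']].sum()
perf = perf[(perf['Conv.'] > 0) & (perf['Pub. Cost $'] > 0)]
perf['Cost/Conv. $'] = perf['Pub. Cost $'] / perf['Conv.']
print(perf)  # one row: cost 15.0, conversions 2, CPA 7.50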
- Anomaly functions are defined:
  - get_feature_anomalies: finds anomalies for a given set of features using a supplied detection function (the IQR method here).
  - is_anomaly_irq: flags outliers in a single column using the IQR method.
- The find_peer_anomaly function finds outliers within a campaign:
  - It takes the slice of the dataframe for a specific campaign and applies get_feature_anomalies to the specified features (CPA in this case).
  - Outliers are filtered to the desired direction (over outliers, under outliers, or both).
  - Anomalies from low-spend ad groups are ignored; only ad groups whose spend exceeds the campaign median are kept.
  - If outliers are found, they are printed along with relevant information. (A sketch of the direction filter and spend guard follows this list.)
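In isolation, the direction filter and low-spend guard look like this; the boolean Series are hypothetical stand-ins for the masks returned by get_feature_anomalies:

import numpy as np
import pandas as pd

cost = pd.Series([100.0, 40.0, 250.0, 30.0])
over = pd.Series([True, True, False, False])    # above the upper IQR bound
under = pd.Series([False, False, False, True])  # below the lower bound

want_over, want_under = True, False  # the script requests only high-CPA outliers
is_outlier = np.logical_or(np.logical_and(want_over, over),
                           np.logical_and(want_under, under))

# Drop flags on ad groups whose spend is at or below the campaign median
is_outlier = np.logical_and(is_outlier, cost > cost.median())
print(is_outlier.tolist())  # [True, False, False, False]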
- The script prints the shape of the input dataframe.
- Anomalies are identified for each campaign using the find_peer_anomaly function, and the identified outliers are collected into a separate dataframe.
- The output dataframe is prepared by selecting the necessary columns from the outliers dataframe.
- The script checks that all required columns exist in the output dataframe and prints a message if any are missing.
- The result is assigned to the outputDf variable. (The label text written for each outlier is sketched below.)
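For reference, the tag written into the AUTOMATION - Outlier column is a formatted comparison string; with illustrative numbers:

cpa, campaign_avg = 48.0, 14.2
label = 'CPA ${:,.2f} is much higher than campaign avg ${:,.2f}'.format(cpa, campaign_avg)
print(label)  # CPA $48.00 is much higher than campaign avg $14.20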
Vitals
- Script ID: 527
- Client ID / Customer ID: 1306925141 / 60269255
- Action Type: Bulk Upload
- Item Changed: AdGroup
- Output Columns: Account, Campaign, Group, AUTOMATION - Outlier
- Linked Datasource: M1 Report
- Reference Datasource: None
- Owner: ascott@marinsoftware.com
- Created by ascott@marinsoftware.com on 2023-11-13 20:59
- Last Updated by ascott@marinsoftware.com on 2023-12-06 04:01
> See it in Action
Python Code
####
# Tag AdGroup if CPA performance is abnormally high within Campaign
#
#
# Author: Michael S. Huang
# Date: 2023-02-22
#
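# Note: inputDf, outputDf, tableize, and the pandas (pd), numpy (np), and
# datetime modules are assumed to be provided by the Marin scripting
# environment rather than imported here.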
RPT_COL_DATE = 'Date'
RPT_COL_GROUP = 'Group'
RPT_COL_PUBLISHER = 'Publisher'
RPT_COL_ACCOUNT = 'Account'
RPT_COL_CAMPAIGN = 'Campaign'
RPT_COL_GROUP_ID = 'Group ID'
RPT_COL_PUB_COST = 'Pub. Cost $'
RPT_COL_COST_PER_CONV = 'Cost/Conv. $'
RPT_COL_ROAS = 'ROAS'
RPT_COL_CONV_RATE = 'Conv. Rate %'
RPT_COL_AVG_CPC = 'Avg. CPC $'
RPT_COL_IMPR = 'Impr.'
RPT_COL_CLICKS = 'Clicks'
RPT_COL_CONV = 'Conv.'
RPT_COL_REVENUE = 'Revenue $'
BULK_COL_ACCOUNT = 'Account'
BULK_COL_CAMPAIGN = 'Campaign'
BULK_COL_GROUP = 'Group'
BULK_COL_AUTOMATION_OUTLIER = 'AUTOMATION - Outlier'
outputDf[BULK_COL_AUTOMATION_OUTLIER] = np.nan
## Data Prep
print(inputDf[RPT_COL_DATE].min(), inputDf[RPT_COL_DATE].max())
# 30-day lookback without most recent 3 days due to conversion lag
start_date = pd.to_datetime(datetime.date.today() - datetime.timedelta(days=33))
end_date = pd.to_datetime(datetime.date.today() - datetime.timedelta(days=3))
df_reduced = inputDf[ (inputDf[RPT_COL_DATE] >= start_date) & (inputDf[RPT_COL_DATE] <= end_date) ]
if df_reduced.shape[0] > 0:
    print("reduced dates\n", min(df_reduced[RPT_COL_DATE]), max(df_reduced[RPT_COL_DATE]))
else:
    print("no more input to process")
# reduce to needed columns
df_reduced = df_reduced[[RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP, RPT_COL_DATE, RPT_COL_PUB_COST, RPT_COL_CONV, RPT_COL_REVENUE, RPT_COL_CLICKS]].copy()
# sum metrics across dates
# specify the columns to sum
cols_to_sum = [RPT_COL_PUB_COST, RPT_COL_CONV, RPT_COL_REVENUE, RPT_COL_CLICKS]
# apply sum operation only to the specified columns
df_group_perf = df_reduced.groupby([RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP])[cols_to_sum].sum()
# remove rows without cost or conversions
df_group_perf = df_group_perf[(df_group_perf[RPT_COL_CONV] > 0) & (df_group_perf[RPT_COL_PUB_COST] > 0)]
# index by campaign
df_group_perf = df_group_perf.reset_index().set_index([RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN]).sort_index()
# calculate features
df_group_perf[RPT_COL_COST_PER_CONV] = (df_group_perf[RPT_COL_PUB_COST] / df_group_perf[RPT_COL_CONV])
df_group_perf[RPT_COL_ROAS] = df_group_perf[RPT_COL_REVENUE] / df_group_perf[RPT_COL_PUB_COST]
df_group_perf[RPT_COL_CONV_RATE] = df_group_perf[RPT_COL_CONV] / df_group_perf[RPT_COL_CLICKS]
df_group_perf[RPT_COL_AVG_CPC] = (df_group_perf[RPT_COL_PUB_COST] / df_group_perf[RPT_COL_CLICKS])
## Define Anomaly Functions
# Finds anomalies using a supplied detection function (e.g. sigma rule, IQR, etc.)
# data: DataFrame
#   Dataset with features
# func: func
#   Function to use to find anomalies
# features: list
#   Feature list
# thresh: int
#   Threshold value (e.g. 2 or 3 sigmas / IQRs)
# Returns: tuple
def get_feature_anomalies(data, func, features=None, thresh=3):
    if features:
        features_to_check = features
    else:
        features_to_check = data.columns
    outliers_over = pd.Series(data=[False] * data.shape[0], index=data[features_to_check].index, name='is_outlier')
    outliers_under = pd.Series(data=[False] * data.shape[0], index=data[features_to_check].index, name='is_outlier')
    anomalies_summary = {}
    for feature in features_to_check:
        anomalies_mask_over, anomalies_mask_under, upper_bound, lower_bound = func(data, feature, thresh=thresh)
        anomalies_mask_combined = pd.concat([anomalies_mask_over, anomalies_mask_under], axis=1).any(axis=1)
        anomalies_summary[feature] = [upper_bound, lower_bound, sum(anomalies_mask_combined), 100 * sum(anomalies_mask_combined) / len(anomalies_mask_combined)]
        outliers_over[anomalies_mask_over[anomalies_mask_over].index] = True
        outliers_under[anomalies_mask_under[anomalies_mask_under].index] = True
        # print("anomalies_mask_combined: ", anomalies_mask_combined)
        # print("Outliers: ", outliers)
    anomalies_summary = pd.DataFrame(anomalies_summary).T
    anomalies_summary.columns = ['upper_bound', 'lower_bound', 'anomalies_count', 'anomalies_percentage']
    anomalies_ratio = round(anomalies_summary['anomalies_percentage'].sum(), 2)
    # print(f'Total Outliers Ratio: {anomalies_ratio} %')
    return anomalies_summary, outliers_over, outliers_under
# Finds outliers/anomalies for one column using the IQR method
# data: DataFrame
# col: str
# thresh: int
#   Number of IQRs to apply
# Returns: tuple
#   (over mask, under mask, upper bound, lower bound)
def is_anomaly_irq(data, col, thresh):
    IRQ = data[col].quantile(0.75) - data[col].quantile(0.25)
    upper_bound = data[col].quantile(0.75) + (thresh * IRQ)
    lower_bound = data[col].quantile(0.25) - (thresh * IRQ)
    # print("IRQ calc: ", col, IRQ, upper_bound, lower_bound)
    anomalies_mask_over = data[col] > upper_bound
    anomalies_mask_under = data[col] < lower_bound
    # print("Anomalies mask: ", (anomalies_mask_over, anomalies_mask_under))
    return anomalies_mask_over, anomalies_mask_under, upper_bound, lower_bound
def find_peer_anomaly(df_slice, features, irq_threshold=1.8, outliers_desired=(True, True)):
    (want_outliers_over, want_outliers_under) = outliers_desired
    # need at least 3 peer ad groups for a meaningful comparison
    if df_slice.shape[0] < 3:
        return
    idx = df_slice.index.unique()
    df_slice.reset_index(inplace=True)
    anomalies_summary_irq, outlier_over_irq, outlier_under_irq = get_feature_anomalies(
        df_slice,
        func=is_anomaly_irq,
        features=features,
        thresh=irq_threshold)
    median_cost = df_slice[RPT_COL_PUB_COST].median()
    # print(f"over: {outlier_over_irq}")
    # print(f"under: {outlier_under_irq}")
    # include over/under outliers as desired
    is_outlier_irq = np.logical_or(
        np.logical_and(want_outliers_over, outlier_over_irq),
        np.logical_and(want_outliers_under, outlier_under_irq)
    )
    # print("is_outlier\n", is_outlier_irq)
    # ignore anomalies from low-spend ad groups: keep only those with spend above the campaign median
    is_outlier_irq = np.logical_and(is_outlier_irq, df_slice[RPT_COL_PUB_COST] > median_cost)
    if sum(is_outlier_irq) > 0:
        print(">>> ANOMALY", idx)
        print(anomalies_summary_irq)
        cols = [RPT_COL_GROUP, RPT_COL_PUB_COST, RPT_COL_CONV, RPT_COL_REVENUE] + features
        print(df_slice.loc[is_outlier_irq, cols])
    return is_outlier_irq
## Find CPA Anomalies
print("input shape:", df_group_perf.shape)
df_anomalies = pd.DataFrame()
# annotate via Marin Dimensions
def rowFunc(row):
    return 'CPA ${:,.2f} is much higher than campaign avg ${:,.2f}'.format(
        row[RPT_COL_COST_PER_CONV],
        row[RPT_COL_COST_PER_CONV + '_median'])
for campaign_idx in df_group_perf.index.unique():
    df_campaign = df_group_perf.loc[[campaign_idx]].copy()
    # note: despite the '_median' suffix, this column holds the campaign mean CPA,
    # which is what the label text reports as the campaign average
    df_campaign[RPT_COL_COST_PER_CONV + '_median'] = df_campaign[RPT_COL_COST_PER_CONV].mean()
    df_campaign[BULK_COL_AUTOMATION_OUTLIER] = np.nan
    outliers = find_peer_anomaly(df_campaign, [RPT_COL_COST_PER_CONV], irq_threshold=2, outliers_desired=(True, False))
    if outliers is not None and sum(outliers) > 0:
        df_outliers = df_campaign.loc[outliers].copy()
        df_outliers[BULK_COL_AUTOMATION_OUTLIER] = df_outliers.apply(rowFunc, axis=1)
        print(df_outliers)
        df_anomalies = pd.concat([df_anomalies, df_outliers], axis=0)
## Prepare Output
print(tableize(df_anomalies))
# Check if the columns exist in df_anomalies
cols_to_check = [RPT_COL_ACCOUNT, RPT_COL_CAMPAIGN, RPT_COL_GROUP, BULK_COL_AUTOMATION_OUTLIER]
missing_cols = [col for col in cols_to_check if col not in df_anomalies.columns]
if missing_cols:
    print(f"The following columns are missing in df_anomalies: {missing_cols}")
else:
    outputDf = df_anomalies[cols_to_check]
Post generated on 2024-05-15 07:44:05 GMT