Script 737: Profit Maximising Campaign Target (Preview)

Purpose:

The script analyzes campaign-level forecasts to identify profit-maximizing targets and generates a bulk sheet to update these targets for Google Smart Bidding strategies.

To Elaborate

The Python script optimizes advertising campaigns by analyzing forecasts and determining the most profitable targets for Google Smart Bidding strategies, namely TargetCPA, TargetROAS, MaximizeConversions, and MaximizeConversionValue. It processes campaign-level data to find the bidding targets that maximize daily profit while respecting constraints such as target CPA and target ROAS. Only relevant campaigns are considered: the data is filtered by folder ID and campaign override, damping is applied to smooth target changes, and a bulk sheet is generated for updating campaign targets. Configurable parameters, such as the target tolerance and the folder IDs that limit the scope of analysis, control the script's behaviour. The output is a refined dataset highlighting campaigns with significant target changes, ready for bulk upload.
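To make the damping idea concrete, here is a minimal numpy sketch using hypothetical numbers (the real script derives the clamp bounds from each campaign's bid_and_target_damping percentage):

```python
import numpy as np

# Hypothetical illustration only; values are not taken from the script's feed.
current_target_cpa = 50.0   # the campaign's current publisher Target CPA
ideal_target_cpa = 65.0     # the forecast profit-maximising target
damping_pct = 10            # maximum allowed change per run, in percent

# Clamp the ideal target to within +/- damping_pct of the current target,
# so the published target moves gradually toward the ideal value.
damped = np.clip(
    ideal_target_cpa,
    current_target_cpa * (1 - damping_pct / 100),
    current_target_cpa * (1 + damping_pct / 100),
)
print(round(float(damped), 2))  # 55.0
```

Here the ideal target of 65 exceeds the 10% upper bound of 55, so only a partial move is published; subsequent runs would continue the adjustment.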

Walking Through the Code

  1. Configuration and Setup:
    • The script begins by defining configurable parameters such as TARGET_TOLERANCE and VAL_LIMIT_FOLDERS.
    • It detects whether the script is running on the server or locally, and loads data from a pickle file in the local case.
  2. Data Preparation:
    • The script loads campaign data into a DataFrame, inputDf, and filters it based on folder IDs and campaign overrides.
    • It ensures data types are correct for columns like FEED_COL_FOLDER_ID and FEED_COL_CAMPAIGN_OVERRIDE.
  3. Optimization Process:
    • The script identifies the optimal targets by sorting campaigns based on daily profit and other metrics.
    • It applies damping to the targets using numpy to ensure changes are gradual and within specified limits.
  4. Result Compilation:
    • The script compiles results into results_df, focusing on campaigns with significant changes.
    • It uses the select_changed function to filter for campaigns with updated targets, ensuring only relevant changes are included in the output.
  5. Output Generation:
    • The final output is prepared by renaming columns to match bulk sheet requirements and filtering out unchanged rows.
    • If running locally, the script writes the output and debug data to CSV files for further analysis.
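The profit-maximising selection in step 3 can be sketched with a toy pandas example (hypothetical data; the real script also breaks ties on gross profit, absolute boost, and target, and groups by campaign and folder):

```python
import pandas as pd

# Toy forecast: one row per candidate boost per campaign (hypothetical data).
df = pd.DataFrame({
    'campaign_id':   [1, 1, 1, 2, 2],
    'boost_percent': [-10, 0, 10, 0, 20],
    'daily_profit':  [90.0, 100.0, 120.0, 80.0, 75.0],
})

# Sort so each campaign's most profitable candidate comes first,
# then keep that first row per campaign.
best = (df.sort_values('daily_profit', ascending=False)
          .groupby('campaign_id')
          .first()
          .reset_index())
print(best)
```

Campaign 1 keeps the +10% boost (daily profit 120.0) and campaign 2 keeps the 0% boost (daily profit 80.0).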

Vitals

  • Script ID : 737
  • Client ID / Customer ID: 1306927569 / 60270325
  • Action Type: Bulk Upload (Preview)
  • Item Changed: Campaign
  • Output Columns: Account, Campaign, Publisher Target CPA-ROAS Damped, Publisher Target CPA-ROAS UnDamped
  • Linked Datasource: FTP/Email Feed
  • Reference Datasource: None
  • Owner: Joe Southin (jsouthin@marinsoftware.com)
  • Created by Joe Southin on 2024-03-04 14:23
  • Last Updated by Joe Southin on 2025-01-13 09:27

Python Code

##
## name: Profit Maximising Campaign Target Trafficking
## description:
##  * Analyses campaign-level forecasts to identify profit maximising targets.  Generates bulk sheet to update targets
##  * Supports Google Smart Bidding targets only (TargetCPA, TargetROAS, MaximizeConversions, MaximizeConversionValue)
## 
## author: Joe Southin
## created: 2024-02-24
## updated: 2024-02-29  

##### Configurable Param #####
# Define tolerance of deviation from expected target
TARGET_TOLERANCE = 0.02
VAL_LIMIT_FOLDERS = [778910]
VAL_TARGET_MODE = 'Preview'
##############################


########### START - Local Mode Config ###########
# Step 1: Uncomment download_preview_input flag and run Preview successfully with the Datasources you want
download_preview_input=True
# Step 2: In MarinOne, go to Scripts -> Preview -> Logs, download 'dataSourceDict' pickle file, and update pickle_path below
pickle_path = '/Users/jsouthin/Downloads/datasource_dict_1709114229331.pkl'
# Step 3: Copy this script into local IDE with Python virtual env loaded with pandas and numpy.
# Step 4: Run locally with below code to init dataSourceDict

# determine if code is running on server or locally
def is_executing_on_server():
    try:
        # Attempt to access dataSourceDict, which only exists on the server
        dict_items = dataSourceDict.items()
        return True
    except NameError:
        # NameError: dataSourceDict object is missing (indicating not on server)
        return False

local_dev = False

if is_executing_on_server():
    print("Code is executing on server. Skip init.")
elif len(pickle_path) > 3:
    print("Code is NOT executing on server. Doing init.")
    local_dev = True
    # load dataSourceDict via pickled file
    import pickle
    dataSourceDict = pickle.load(open(pickle_path, 'rb'))

    # print shape and first 5 rows for each entry in dataSourceDict
    for key, value in dataSourceDict.items():
        print(f"Shape of dataSourceDict[{key}]: {value.shape}")
        # print(f"First 5 rows of dataSourceDict[{key}]:\n{value.head(5)}")

    # set outputDf same as inputDf
    inputDf = dataSourceDict["1"]
    outputDf = inputDf.copy()

    # setup timezone
    import datetime
    # Client timezone offset from GMT (Chicago would be hours=-5). Adjust as needed.
    CLIENT_TIMEZONE = datetime.timezone(datetime.timedelta(hours=0))

    # import pandas
    import pandas as pd
    import numpy as np

    # Printing out the version of Python, Pandas and Numpy
    # import sys
    # python_version = sys.version
    # pandas_version = pd.__version__
    # numpy_version = np.__version__

    # print(f"python version: {python_version}")
    # print(f"pandas version: {pandas_version}")
    # print(f"numpy version: {numpy_version}")

    # other imports
    import re
    import urllib

    # import Marin util functions
    from marin_scripts_utils import tableize, select_changed
else:
    print("Running locally but no pickle path defined. dataSourceDict not loaded.")
    exit(1)
########### END - Local Mode Setup ###########

today = datetime.datetime.now(CLIENT_TIMEZONE).date()

# primary data source and columns
inputDf = dataSourceDict["1"]
FEED_COL_ACCOUNT = 'Account'
FEED_COL_CAMPAIGN = 'Campaign'
FEED_COL_DAILY_BUDGET = 'daily_budget'
FEED_COL_BUDGET_TYPE = 'budget_type'
FEED_COL_CAMPAIGN_ID = 'campaign_id'
FEED_COL_CAMPAIGN_OVERRIDE = 'campaign_override'
FEED_COL_BIDDING_STRATEGY_OLD_TYPE = 'bidding_strategy_old_type'
FEED_COL_TARGET_CPA = 'target_cpa'
FEED_COL_TARGET_ROAS = 'target_roas'
FEED_COL_IS_PORTFOLIO = 'is_portfolio'
FEED_COL_IS_SHARED_BUDGET = 'is_shared_budget'
FEED_COL_FOLDER_ID = 'folder_id'
FEED_COL_START_DATE_TIME_ID = 'start_date_time_id'
FEED_COL_END_DATE_TIME_ID = 'end_date_time_id'
FEED_COL_OBJECTIVE_UNIT = 'objective_unit'
FEED_COL_CONSTRAINT_UNIT = 'constraint_unit'
FEED_COL_CONSTRAINT_VALUE = 'constraint_value'
FEED_COL_BID_AND_TARGET_DAMPING = 'bid_and_target_damping'
FEED_COL_BUDGET_DAMPING = 'budget_damping'
FEED_COL_BUDGET_CAP = 'budget_cap'
FEED_COL_FOLDER_BID_TRAFFICKING = 'folder_bid_trafficking'
FEED_COL_PUBLISHER_BUDGET_TRAFFICKING = 'publisher_budget_trafficking'
FEED_COL_PUBLISHER_BID_TRAFFICKING = 'publisher_bid_trafficking'
FEED_COL_BOOST_PERCENT = 'boost_percent'
FEED_COL_ABS_BOOST_PERCENT = 'abs_boost_percent'
FEED_COL_DIRECTION = 'direction'
FEED_COL_TARGET = 'target'
FEED_COL_DAILY_SPEND = 'daily_spend'
FEED_COL_DAILY_GROSS_PROFIT = 'daily_gross_profit'
FEED_COL_DAILY_PROFIT = 'daily_profit'
FEED_COL_DAYS = 'days'

# output columns and initial values
BULK_COL_ACCOUNT = 'Account'
BULK_COL_CAMPAIGN = 'Campaign'
BULK_COL_PUBLISHER_TARGET_CPA = 'Publisher Target CPA'
BULK_COL_PUBLISHER_TARGET_ROAS = 'Publisher Target ROAS'
outputDf[BULK_COL_PUBLISHER_TARGET_CPA] = "<<YOUR VALUE>>"
outputDf[BULK_COL_PUBLISHER_TARGET_ROAS] = "<<YOUR VALUE>>"
BULK_COL_DIM_DAMPED = 'Publisher Target CPA-ROAS Damped'
BULK_COL_DIM_UNDAMPED = 'Publisher Target CPA-ROAS UnDamped'

### User Code Starts Here

# intermediate columns
COL_DAMPENED_TARGET_CPA = FEED_COL_TARGET_CPA + '_dampened'
COL_DAMPENED_TARGET_ROAS = FEED_COL_TARGET_ROAS + '_dampened'
COL_EXPECTED = 'expected'
COL_ACCEPTABLE = 'acceptable'

# define Status values
VAL_STATUS_ACTIVE = 'Active'
VAL_STATUS_PAUSED = 'Paused'
VAL_BLANK = ''

# define ObjectiveUnit values
VAL_OBJECTIVE_UNIT_UNKNOWN = -1
VAL_OBJECTIVE_UNIT_CONVERSIONS = 1
VAL_OBJECTIVE_UNIT_REVENUE = 2
VAL_OBJECTIVE_UNIT_NONE = 3
 
VAL_CONSTRAINT_UNIT_UNKNOWN = -1
VAL_CONSTRAINT_UNIT_SPEND = 1
VAL_CONSTRAINT_UNIT_ROAS = 2
VAL_CONSTRAINT_UNIT_CPA = 3

print("inputDf shape", inputDf.shape)
print("inputDf info", inputDf.info())

## cleanup data
#inputDf.drop(columns=['Unnamed: 30'], inplace=True)

inputDf[FEED_COL_FOLDER_ID] = inputDf[FEED_COL_FOLDER_ID].astype(int)
inputDf[FEED_COL_CAMPAIGN_OVERRIDE] = inputDf[FEED_COL_CAMPAIGN_OVERRIDE].astype(int)


inputDf = inputDf[inputDf[FEED_COL_FOLDER_ID].isin(VAL_LIMIT_FOLDERS)]
inputDf = inputDf[inputDf[FEED_COL_CAMPAIGN_OVERRIDE] != 1]

if VAL_TARGET_MODE == 'Traffic':
    inputDf = inputDf[inputDf[FEED_COL_PUBLISHER_BID_TRAFFICKING]=='Traffic']


## force expected types
inputDf.loc[:,[FEED_COL_TARGET_CPA]] = pd.to_numeric(inputDf[FEED_COL_TARGET_CPA], errors='coerce')
inputDf.loc[:,[FEED_COL_TARGET_ROAS]] = pd.to_numeric(inputDf[FEED_COL_TARGET_ROAS], errors='coerce')


## Grab current target stored on zero boost; basis for change detection later
comparison_cols = [FEED_COL_ACCOUNT, FEED_COL_CAMPAIGN, FEED_COL_TARGET_CPA, FEED_COL_TARGET_ROAS]

original_df = inputDf.loc[inputDf[FEED_COL_BOOST_PERCENT] == 0, comparison_cols]

## maximize profit for picking out boost with highest daily profit
optimal_df = inputDf \
    .sort_values(by=[ \
                    FEED_COL_DAILY_PROFIT, \
                    FEED_COL_DAILY_GROSS_PROFIT, \
                    FEED_COL_ABS_BOOST_PERCENT, \
                    FEED_COL_TARGET
                ], \
                ascending=[ \
                    False, \
                    False, \
                    True, \
                    True \
                ]) \
    .groupby([FEED_COL_CAMPAIGN_ID, FEED_COL_FOLDER_ID]) \
    .first() \
    .reset_index()

## Apply damping to the ideal targets based on current target and damping percentage using numpy
optimal_df[COL_DAMPENED_TARGET_CPA] = np.where(optimal_df[FEED_COL_TARGET],
    np.where(
    optimal_df[FEED_COL_BIDDING_STRATEGY_OLD_TYPE].isin(['TargetCPA','MaximizeConversions']),
    np.clip(
        optimal_df[FEED_COL_TARGET],
        optimal_df[FEED_COL_TARGET_CPA] * (1 - optimal_df[FEED_COL_BID_AND_TARGET_DAMPING]/100),
        optimal_df[FEED_COL_TARGET_CPA] * (1 + optimal_df[FEED_COL_BID_AND_TARGET_DAMPING]/100)
        ).round(2),
    np.nan),optimal_df[FEED_COL_TARGET].round(2))


optimal_df[COL_DAMPENED_TARGET_ROAS] = np.where(optimal_df[FEED_COL_TARGET],
    np.where(
    optimal_df[FEED_COL_BIDDING_STRATEGY_OLD_TYPE].isin(['TargetROAS','MaximizeConversionValue']),
    np.clip(
        100 * optimal_df[FEED_COL_TARGET],
        optimal_df[FEED_COL_TARGET_ROAS] * (1 - optimal_df[FEED_COL_BID_AND_TARGET_DAMPING]/100),
        optimal_df[FEED_COL_TARGET_ROAS] * (1 + optimal_df[FEED_COL_BID_AND_TARGET_DAMPING]/100)
        ).round(0),
    np.nan),(100 * optimal_df[FEED_COL_TARGET]).round(0))



optimal_df[COL_EXPECTED] = np.where(optimal_df[FEED_COL_BIDDING_STRATEGY_OLD_TYPE].isin(['MaximizeConversions','TargetCPA']),
    round(optimal_df[FEED_COL_TARGET_CPA]*(100 + optimal_df[FEED_COL_BOOST_PERCENT])/100,2),
    round((1/100)*optimal_df[FEED_COL_TARGET_ROAS] * 100 / (100 + optimal_df[FEED_COL_BOOST_PERCENT]),2))
                                                                       
optimal_df[COL_ACCEPTABLE] = optimal_df[FEED_COL_TARGET].between(
    (1 - TARGET_TOLERANCE) * optimal_df[COL_EXPECTED],
    (1 + TARGET_TOLERANCE) * optimal_df[COL_EXPECTED]
)

optimal_df[BULK_COL_DIM_DAMPED] = np.where(optimal_df[FEED_COL_BIDDING_STRATEGY_OLD_TYPE].isin(['MaximizeConversions','TargetCPA']),optimal_df[COL_DAMPENED_TARGET_CPA].round(2),optimal_df[COL_DAMPENED_TARGET_ROAS].round(0))
optimal_df[BULK_COL_DIM_UNDAMPED] = np.where(optimal_df[FEED_COL_BIDDING_STRATEGY_OLD_TYPE].isin(['MaximizeConversions','TargetCPA']),optimal_df[FEED_COL_TARGET].round(2), (100 * optimal_df[FEED_COL_TARGET]).round(0))


results_cols = [FEED_COL_ACCOUNT, FEED_COL_CAMPAIGN, COL_DAMPENED_TARGET_CPA, COL_DAMPENED_TARGET_ROAS, COL_ACCEPTABLE, FEED_COL_BOOST_PERCENT, BULK_COL_DIM_DAMPED, BULK_COL_DIM_UNDAMPED]
results_df = optimal_df.loc[:, results_cols] \
                    .rename(columns = { \
                        COL_DAMPENED_TARGET_CPA: FEED_COL_TARGET_CPA, \
                        COL_DAMPENED_TARGET_ROAS: FEED_COL_TARGET_ROAS, \
                    }) \
                       .copy()


debugDf = optimal_df

# only include changed rows in bulk file
(outputDf, _) = select_changed(results_df, \
                                    original_df, \
                                    diff_cols = [ \
                                        FEED_COL_TARGET_CPA, \
                                        FEED_COL_TARGET_ROAS, \
                                    ], \
                                    select_cols = [ \
                                        FEED_COL_ACCOUNT, \
                                        FEED_COL_CAMPAIGN, \
                                        FEED_COL_TARGET_CPA, \
                                        FEED_COL_TARGET_ROAS, \
                                        COL_ACCEPTABLE, \
                                        FEED_COL_BOOST_PERCENT, \
                                        BULK_COL_DIM_DAMPED, \
                                        BULK_COL_DIM_UNDAMPED, \
                                    ], \
                                    merged_cols=[FEED_COL_ACCOUNT, FEED_COL_CAMPAIGN] \
                                    )

outputDf = outputDf[outputDf[COL_ACCEPTABLE]].drop(COL_ACCEPTABLE,axis=1)

outputDf = outputDf[outputDf[FEED_COL_BOOST_PERCENT]!=0].drop(FEED_COL_BOOST_PERCENT,axis=1)

# use Bulk column headers
outputDf = outputDf.rename(columns = { \
                    FEED_COL_ACCOUNT: BULK_COL_ACCOUNT, \
                    FEED_COL_CAMPAIGN: BULK_COL_CAMPAIGN, \
                    FEED_COL_TARGET_CPA: BULK_COL_PUBLISHER_TARGET_CPA, \
                    FEED_COL_TARGET_ROAS: BULK_COL_PUBLISHER_TARGET_ROAS, \
                })

print("outputDf shape", outputDf.shape)
print("outputDf", tableize(outputDf.tail(5)))

## local debug
if local_dev:
    output_filename = 'outputDf.csv'
    outputDf.to_csv(output_filename, index=False)
    print(f"Local Dev: Output written to: {output_filename}")

    debug_filename = 'debugDf.csv'
    debugDf.to_csv(debug_filename, index=False)
    print(f"Local Dev: Debug written to: {debug_filename}")
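The select_changed utility comes from marin_scripts_utils and its implementation is not shown in this post; conceptually, the change detection resembles a key-based merge diff like this hypothetical sketch:

```python
import pandas as pd

# Hypothetical sketch of change detection in the spirit of select_changed;
# the real marin_scripts_utils implementation may differ.
new = pd.DataFrame({'Account': ['A', 'A'], 'Campaign': ['C1', 'C2'],
                    'target_cpa': [55.0, 30.0]})
old = pd.DataFrame({'Account': ['A', 'A'], 'Campaign': ['C1', 'C2'],
                    'target_cpa': [50.0, 30.0]})

# Join new targets against the zero-boost baseline on the key columns,
# then keep only rows where the target actually changed.
merged = new.merge(old, on=['Account', 'Campaign'], suffixes=('', '_old'))
changed = merged[merged['target_cpa'] != merged['target_cpa_old']]
print(changed[['Account', 'Campaign', 'target_cpa']])
```

Only campaign C1 survives, since its target moved from 50.0 to 55.0.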

Post generated on 2025-03-11 01:25:51 GMT
