Script 375: Adjust for KW Headroom

Purpose

The script adjusts keyword headroom based on historical performance data to optimize budget allocation.

To Elaborate

The Python script is designed to optimize keyword headroom adjustments by analyzing historical performance data. It processes a dataset of advertising metrics such as publisher cost, revenue, impressions, and impression opportunities, excluding the most recent two days of data. For each keyword, match type, account, campaign, and group combination it calculates Return on Advertising Spend (ROAS) and impression share, then derives a keyword headroom adjustment from ROAS, clamped so that it is never negative. The final output is restricted to rows whose impression share is below 95%, flagging keywords with remaining headroom where budget could be reallocated to improve advertising efficiency.
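
The core arithmetic is simple enough to show in isolation. Below is a minimal sketch, using hypothetical numbers rather than data from any real account, of how ROAS, impression share, and the headroom adjustment relate:

# Hypothetical figures for one keyword / match type / account / campaign / group combination
revenue = 1500.0         # Revenue $
pub_cost = 1000.0        # Pub. Cost $
impressions = 8000       # Impr.
impression_ops = 10000   # Impression Ops

roas = revenue / pub_cost                                # 1.5
impr_share = impressions / impression_ops                # 0.8 (80%)
kw_headroom_adj = max(0, int(round((roas - 1) * 100)))   # 50

# Impression share is below 95%, so this keyword would appear in the output
print(roas, impr_share, kw_headroom_adj)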

Walking Through the Code

  1. Data Preparation and Filtering
    • The script begins by defining constants for the report and bulk-upload column names used in the dataset.
    • It computes the date two days ago (in the client's timezone) and keeps only rows dated before that cutoff, excluding the most recent two days of data.
  2. Data Grouping and Aggregation
    • The filtered data is grouped by keyword, match type, account, campaign, and group.
    • For each group, publisher cost, revenue, impressions, and impression opportunities are summed.
  3. Calculating Metrics
    • ROAS is calculated by dividing total revenue by total publisher cost.
    • Impression share is calculated by dividing total impressions by total impression opportunities.
  4. Adjusting Keyword Headroom
    • The keyword headroom adjustment is derived from ROAS as (ROAS − 1) × 100, rounded to the nearest whole number, with negative values clamped to zero.
    • The grouped data is then filtered to entries with an impression share below 95%.
  5. Output Preparation
    • The filtered, grouped data becomes the script's output, carrying the KW Headroom Adj column alongside the grouping columns. A condensed, self-contained sketch of these steps follows this list.
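
The same flow can be tried outside the scripting environment. The following is a condensed, self-contained sketch of steps 1 through 5 using a small hypothetical DataFrame; the sample rows and the hard-coded cutoff date are purely illustrative, and in the real script inputDf, CLIENT_TIMEZONE, and tableize are supplied by the platform:

import datetime
import pandas as pd

# Hypothetical stand-in for inputDf
df = pd.DataFrame({
    'Date': pd.to_datetime(['2023-12-01', '2023-12-01', '2023-12-05']),
    'Keyword': ['shoes', 'boots', 'shoes'],
    'Match Type': ['exact', 'exact', 'exact'],
    'Account': ['A1', 'A1', 'A1'],
    'Campaign': ['C1', 'C1', 'C1'],
    'Group': ['G1', 'G1', 'G1'],
    'Pub. Cost $': [100.0, 200.0, 50.0],
    'Revenue $': [250.0, 150.0, 80.0],
    'Impr.': [800, 9900, 100],
    'Impression Ops': [1000, 10000, 100],
})

# Step 1: keep only rows dated before the two-day cutoff
two_days_ago = pd.Timestamp('2023-12-04')
historical = df[df['Date'] < two_days_ago]

# Step 2: group and sum the performance metrics
g = historical.groupby(['Keyword', 'Match Type', 'Account', 'Campaign', 'Group']).agg({
    'Pub. Cost $': 'sum', 'Revenue $': 'sum', 'Impr.': 'sum', 'Impression Ops': 'sum'
}).reset_index()

# Step 3: ROAS and impression share
g['ROAS'] = g['Revenue $'] / g['Pub. Cost $']
g['Impr_Share'] = g['Impr.'] / g['Impression Ops']

# Step 4: headroom adjustment, (ROAS - 1) * 100 rounded, clamped at zero
g['KW Headroom Adj'] = (g['ROAS'] - 1).mul(100).round().astype(int).clip(lower=0)

# Step 5: only keywords with impression share below 95% remain
print(g[g['Impr_Share'] < 0.95])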

Vitals

  • Script ID : 375
  • Client ID / Customer ID: 1306916781 / 13095968
  • Action Type: Bulk Upload
  • Item Changed: Keyword
  • Output Columns: Account, Campaign, Group, Keyword, Match Type, KW Headroom Adj
  • Linked Datasource: M1 Report
  • Reference Datasource: None
  • Owner: Jeremy Brown (jbrown@marinsoftware.com)
  • Created by Jeremy Brown on 2023-10-17 15:51
  • Last Updated by Tom Lowes on 2023-12-06 04:01

Python Code

# NOTE: inputDf, outputDf, CLIENT_TIMEZONE, and tableize are assumed to be
# provided by the scripting environment; datetime and pandas are imported
# here for completeness.
import datetime
import pandas as pd

# Report (input) column names
RPT_COL_CLIENT = 'Client'
RPT_COL_CURRENCY = 'Currency'
RPT_COL_KEYWORD = 'Keyword'
RPT_COL_DATE = 'Date'
RPT_COL_STATUS = 'Status'
RPT_COL_MATCH_TYPE = 'Match Type'
RPT_COL_PUBLISHER = 'Publisher'
RPT_COL_ACCOUNT = 'Account'
RPT_COL_CAMPAIGN = 'Campaign'
RPT_COL_GROUP = 'Group'
RPT_COL_CAMPAIGN_STATUS = 'Campaign Status'
RPT_COL_GROUP_STATUS = 'Group Status'
RPT_COL_REVENUE_CAMPAIGNTYPE = 'Revenue Campaign Type'
RPT_COL_KW_HEADROOMADJ = 'KW Headroom Adj'
RPT_COL_PUB_COST = 'Pub. Cost $'
RPT_COL_REVENUE = 'Revenue $'
RPT_COL_ROAS = 'ROAS'
RPT_COL_CONV = 'Conv.'
RPT_COL_IMPR = 'Impr.'
RPT_COL_IMPR_OPS = 'Impression Ops'
RPT_COL_KEYWORD_ID = 'Keyword ID'
RPT_COL_COST_PER_CONV = 'Cost/Conv.'
RPT_COL_CONV_RATE = 'Conv. Rate %'

# Bulk upload (output) column names
BULK_COL_ACCOUNT = 'Account'
BULK_COL_CAMPAIGN = 'Campaign'
BULK_COL_GROUP = 'Group'
BULK_COL_KEYWORD = 'Keyword'
BULK_COL_MATCH_TYPE = 'Match Type'
BULK_COL_KW_HEADROOMADJ = 'KW Headroom Adj'

# Placeholder assignment; outputDf is fully reassigned at the end of the script.
outputDf[BULK_COL_KW_HEADROOMADJ] = "<<YOUR VALUE>>"

# Current date in the client's timezone
today = datetime.datetime.now(CLIENT_TIMEZONE).date()

# print(tableize(inputDf))

# Cutoff: two days before today
two_days_ago = pd.Timestamp(today - datetime.timedelta(days=2))
print(two_days_ago)

# Keep only rows dated before the cutoff (i.e. exclude the most recent two days)
filtered_df = inputDf[inputDf[RPT_COL_DATE] < two_days_ago]

print(tableize(filtered_df))

# Aggregate performance per keyword / match type / account / campaign / group
grouped_df = filtered_df.groupby([
    RPT_COL_KEYWORD,
    RPT_COL_MATCH_TYPE,
    RPT_COL_ACCOUNT,
    RPT_COL_CAMPAIGN,
    RPT_COL_GROUP
]).agg({
    RPT_COL_PUB_COST: 'sum',
    RPT_COL_REVENUE: 'sum',
    RPT_COL_IMPR: 'sum',
    RPT_COL_IMPR_OPS: 'sum'
})

# Reset the index to make the grouping columns standalone columns
grouped_df = grouped_df.reset_index()


# ROAS = total revenue / total publisher cost
grouped_df[RPT_COL_ROAS] = grouped_df[RPT_COL_REVENUE] / grouped_df[RPT_COL_PUB_COST]

# Impression share = impressions / impression opportunities
grouped_df['Impr_Share'] = grouped_df[RPT_COL_IMPR] / grouped_df[RPT_COL_IMPR_OPS]

# Headroom adjustment: ((ROAS / 1) - 1) * 100, i.e. the percentage by which ROAS exceeds 1, rounded to a whole number
# grouped_df[RPT_COL_KW_HEADROOMADJ] = int(round(((grouped_df[RPT_COL_ROAS] / 1) - 1) * 100, 0))
grouped_df[RPT_COL_KW_HEADROOMADJ] = grouped_df[RPT_COL_ROAS].apply(lambda x: int(round(((x / 1) - 1) * 100, 0)))

# Clamp negative adjustments to zero
grouped_df.loc[grouped_df[RPT_COL_KW_HEADROOMADJ] <= 0, RPT_COL_KW_HEADROOMADJ] = 0

# new_names = {RPT_COL_ROAS: 'Promotion', RPT_COL_REVENUE: 'Plan Type', RPT_COL_PUB_COST:'Temp for New Keywords'}
# grouped_df.rename(columns=new_names, inplace=True)  # The inplace=True modifies the DataFrame in place

# Keep only keywords whose impression share is below 95%
outputDf = grouped_df[grouped_df['Impr_Share'] < 0.95]
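
Note that outputDf, as written, still carries the intermediate metric columns (ROAS, Impr_Share, and the summed totals) alongside the bulk columns listed under Vitals. If the upload should contain only those bulk columns, a final projection along these lines could be appended; this is a sketch and not part of the original script:

# Optional (not in the original script): restrict the output to the bulk upload columns
outputDf = outputDf[[
    BULK_COL_ACCOUNT, BULK_COL_CAMPAIGN, BULK_COL_GROUP,
    BULK_COL_KEYWORD, BULK_COL_MATCH_TYPE, BULK_COL_KW_HEADROOMADJ
]]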
