A decade ago I started writing software for marketing teams. Most of the people I worked with then would not have called what I did "engineering." Some of the people I work with now would. The discipline grew up in those years, but the core of the work hasn't changed much.
If you build the technical layer behind growth, here's what the years taught me.
It's its own thing
Marketing engineers are not data engineers. They're not full-stack web developers. They're not analysts. The work overlaps with all three and is none of them.
You spend a lot of time at the seams between marketing tools, the website, the data warehouse, and the product database. You're the person who has to know why the conversion number on the dashboard doesn't match the conversion number in the CRM. You're the person who builds the join key.
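The seam-work often reduces to a reconciliation like the following. This is a minimal sketch, not any particular tool's API; the field name `order_id` is an illustrative stand-in for whatever join key your systems actually share.

```python
# Sketch: find conversions that one system reports and the other doesn't.
# Exports are assumed to be lists of dicts sharing a join key ("order_id").

def reconcile(dashboard_rows, crm_rows, key="order_id"):
    """Return the conversions present in only one of the two systems."""
    dash_keys = {row[key] for row in dashboard_rows}
    crm_keys = {row[key] for row in crm_rows}
    return {
        "only_in_dashboard": dash_keys - crm_keys,
        "only_in_crm": crm_keys - dash_keys,
    }

dashboard = [{"order_id": "A1"}, {"order_id": "A2"}, {"order_id": "A3"}]
crm = [{"order_id": "A2"}, {"order_id": "A3"}, {"order_id": "A4"}]
diff = reconcile(dashboard, crm)
# diff["only_in_dashboard"] == {"A1"}; diff["only_in_crm"] == {"A4"}
```

The interesting work isn't the set difference; it's knowing which key both systems can agree on in the first place.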
That seam-work is the job. The faster you accept that, the better you'll get at it.
Boring infrastructure beats clever automation
The single best decision I keep making is to use the simplest possible piece of infrastructure that gets the job done.
A reliable cron job that loads data into a warehouse table is worth more than a real-time pipeline that sometimes fails silently. A documented event schema that the whole team understands is worth more than a clever auto-instrumentation library that silently ships fields nobody documented.
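The key property of that boring cron job is idempotence: re-running it after a failure can't create duplicates. Here's one way to sketch that pattern, using SQLite standing in for the warehouse; the table and column names are illustrative.

```python
# Sketch: an idempotent daily loader. Upserting by primary key means the
# job can be re-run safely after a crash or a backfill.
import sqlite3

def load_batch(conn, rows):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS conversions "
        "(order_id TEXT PRIMARY KEY, amount REAL, day TEXT)"
    )
    conn.executemany(
        "INSERT INTO conversions (order_id, amount, day) VALUES (?, ?, ?) "
        "ON CONFLICT(order_id) DO UPDATE SET "
        "amount = excluded.amount, day = excluded.day",
        [(r["order_id"], r["amount"], r["day"]) for r in rows],
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
batch = [{"order_id": "A1", "amount": 9.0, "day": "2024-01-01"}]
load_batch(conn, batch)
load_batch(conn, batch)  # re-run the same batch: still one row
count = conn.execute("SELECT COUNT(*) FROM conversions").fetchone()[0]
```

That's the whole trick. A marketer can understand "it runs every night and re-running it is harmless," which is exactly the five-sentence test.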
If you can describe to a marketer how the system works in five sentences, the system will probably still be working in two years.
Server-side, when you can
Browser-based tracking is a leaky bucket, and the leaks have gotten worse every year: cookie restrictions, ad blockers, Safari's Intelligent Tracking Prevention (ITP), the entire trust environment around the browser.
If your conversion data has to be defensible, capture it server-side as early in the path as you can. Use the browser for what only the browser can do (UI events, page context) and ship the actual fact-of-conversion from the server.
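One way to picture the split is a single event built in two layers. This is a sketch under assumed field names, not a real tracking SDK: the server asserts the fact of conversion, and anything browser-supplied is kept clearly separate and treated as untrusted context.

```python
# Sketch: server-side fact-of-conversion, browser-side context only.
import time

def build_conversion_event(order, browser_context):
    return {
        # Server-side truth: the backend says the order completed.
        "event": "order_completed",
        "order_id": order["id"],
        "amount": order["amount"],
        "server_ts": int(time.time()),
        # Browser-only context: useful, but never the source of truth.
        "page": browser_context.get("page"),
        "utm_source": browser_context.get("utm_source"),
    }

event = build_conversion_event(
    {"id": "A1", "amount": 49.0},
    {"page": "/checkout/thanks", "utm_source": "newsletter"},
)
```

If the browser context never arrives (blocked, stripped, timed out), the conversion still gets recorded. That asymmetry is the point.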
This used to be optional. It hasn't been optional for a few years now.
Attribution is still unsolved. Treat it that way.
There is no attribution model that is "correct." There are models that are useful for specific decisions, and there are models that lie convincingly.
Picking an attribution model is a business decision dressed up as a technical one. The marketing team needs to own which model they're using and what it's for. The engineering job is to make the data accurate enough that the model isn't garbage in.
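To make the "business decision dressed up as a technical one" concrete: two of the simplest models, run over the same touchpoint history, credit different channels. Neither is wrong; they answer different questions. The journey data here is invented for illustration.

```python
# Sketch: first-touch vs last-touch attribution over one user journey.
def first_touch(touches):
    """Credit the channel that started the journey."""
    return touches[0]["channel"]

def last_touch(touches):
    """Credit the channel closest to the conversion."""
    return touches[-1]["channel"]

journey = [
    {"channel": "organic", "ts": 1},
    {"channel": "email", "ts": 2},
    {"channel": "paid_search", "ts": 3},
]
first_touch(journey)  # "organic"
last_touch(journey)   # "paid_search"
```

The engineering contribution isn't picking between these; it's making sure `journey` is complete and correctly ordered, so whichever model marketing chooses isn't garbage in.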
Anyone who tells you they have "true" attribution is selling something.
LLMs changed the work in places
Some pieces of marketing engineering have been quietly reshaped by LLMs in the last couple of years. Drafting and personalizing copy at scale. Cleaning messy data. Classifying free-text inputs. Summarizing call transcripts and survey responses. Generating reasonable starting points for emails and landing pages.
What hasn't changed: the underlying plumbing, the schema, the join keys, the instrumentation.
LLMs sit on top of clean data. They don't fix dirty data, and they make it easier to ship something that looks polished while being subtly wrong. That's a real problem worth being honest about.
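One guardrail against the "polished but subtly wrong" failure mode: whatever an LLM returns for a free-text classification, validate it against your fixed label set before it touches the warehouse. `call_llm` below is a hypothetical stand-in for whichever client you actually use; the labels are invented for illustration.

```python
# Sketch: never let an unvalidated LLM label into the schema.
ALLOWED_LABELS = {"billing", "bug_report", "feature_request", "other"}

def classify(text, call_llm):
    """Classify free text, but coerce anything off-schema to a known bucket."""
    label = call_llm(f"Classify this support message: {text}").strip().lower()
    # A fluent-sounding but invalid label becomes "other", not a new
    # surprise value in the warehouse.
    return label if label in ALLOWED_LABELS else "other"

# Fake client for illustration; a real one would hit an API.
fake_llm = lambda prompt: "Billing "
classify("Why was I charged twice?", fake_llm)  # "billing"
```

The schema stays in charge; the model just fills it in.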
The discipline matured. The fundamentals didn't. Boring infrastructure, clean schemas, and server-side truth still beat clever tooling on top of dirty data.
What I reach for first on a new stack
- Server-side event tracking with a documented schema, before any browser-side work
- A warehouse with a clean events table, before any dashboards
- A reproducible model of "conversion" that lives in code, not in the head of one person
- Identity resolution as early as possible, ideally before it's a fire
- Documentation that names the join key and what each event means
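The third item on that list, a model of "conversion" that lives in code, can be as small as this. The event names and fields are illustrative; what matters is that the definition is versioned with the repo instead of living in one person's head.

```python
# Sketch: one shared, versioned definition of "conversion" that every
# dashboard and report calls, instead of each re-deriving its own.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    user_id: str
    amount: float = 0.0

def is_conversion(event: Event) -> bool:
    """The single source of truth: a completed order with positive value."""
    return event.name == "order_completed" and event.amount > 0

is_conversion(Event("order_completed", "u1", 49.0))  # True
is_conversion(Event("page_view", "u1"))              # False
```

When the definition changes, it changes in one reviewed diff, and the dashboards disagree with the CRM a lot less often.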
Ten years in, the boring stuff is still the most valuable stuff. That's not changing.