Deane Barker is the Global Director of Content Management for Optimizely.
Content is awesome, but, in the end, artifacts are what get delivered.
We create content, then we turn it into an "artifact" that humans can consume. That artifact might be a web page, a social media update, a tweet, whatever. But at some point, content is combined with presentation, and an artifact is generated and presented to a human for consumption.
Where this combination happens and how it's delivered has shifted over the last decade. As we'll discuss below, it used to be fairly universal, but the delivery environment has fractured wildly. As with any technology shift, there are lots of preferences, biases, beliefs, dogmas, and claims being thrown around.
Maybe you've been told that headless is "the only way," or that it's "a modern framework," or you've heard dozens of benefits extolled that it might bestow upon you if the stars align just right. At this point, you don't know who or what to believe, or how it might impact you or your project.
So, let's unpack it all.
(Note: if you're a web veteran or a developer, you can probably skip "Definitions" and "History").
"Content management" is a pretty abused phrase. It's a massive umbrella that refers to so many things. We've stretched this about as far as we can, yet it persists.
The biggest dichotomy is in management vs. delivery. There are a lot of things that happen under the moniker of "management." This is where most of the work of content happens – we model it, we create it, we review it, we secure it, we aggregate it, we schedule it, and then we publish it.
Everything that happens after that is "delivery." I know… that's a broad brush, but let's stick with this definition for now. Everything to the hypothetical "left" of the publish button is management, and everything to the hypothetical "right" of the publish button is delivery.
The other dichotomy we need to understand is the server versus the browser. The server exists in "the cloud." The browser contacts the server to request content.
The server and the browser are the two main parties to every web interaction, and delivery architectures largely vary based on the conversation that happens between these two actors.
We'll talk about “server” and “browser” quite a bit below. In general terms, understand that the headless trend has been moving work from the server to the browser.
The "last mile of delivery" is how you get content formed into an artifact and in front of a human.
This used to be universal: we created a string of HTML on the server (HTML is a language that describes everything on a web page), and sent it to the browser to be rendered. This made things like personalization easy – since all our personalization logic was on the server, and that's where we were rendering the HTML, we just did everything there.
Users browsed from page to page. A page loaded, they consumed it, then they clicked a link or submitted a form, and a new page loaded. All the "logic" was done on the server – all the browser did was request and display one page after another.
But things changed.
Instead of loading a formatted page and it just sitting there until the next page was loaded, we wanted to just update parts of the page. We achieved this with a technology called AJAX, in which a page could reach back to the server behind the scenes, get some content, and then magically just change one small part of itself.
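That partial-update pattern still underpins everything that followed. Here's a minimal sketch of it using today's fetch API – the `/api/cart` endpoint, the element ID, and the payload shape are all hypothetical, invented for illustration:

```javascript
// Turn a JSON payload from the server into an HTML string for one page region.
// (Kept as a pure function so it's easy to test; nothing here is a real API.)
function renderCartSummary(cart) {
  return `<span>${cart.items} items - $${cart.total.toFixed(2)}</span>`;
}

// In the browser, "AJAX" boils down to: fetch data behind the scenes,
// then patch one element instead of reloading the whole page.
async function refreshCartBox() {
  const res = await fetch("/api/cart"); // hypothetical endpoint
  const cart = await res.json();
  document.getElementById("cart-box").innerHTML = renderCartSummary(cart);
}
```

The rest of the page never reloads; only the one element changes.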
This segued into a desire to transcend the concept of the "page" entirely. Instead of navigating from page to page – and the user being cognizant of that segmentation – we wanted our experience to become a flowing, unbroken stream of content. Somewhere along the line, we became averse to the concept of loading a new page at all.
This became known as a Single-Page Application, or SPA. In this scenario, only one full page ever loads, and every other "page" is really just that first page smoothly modifying itself over and over. With this, the "logic" of the application definitely moved from the server to the browser.
Complex frameworks emerged to enable this – things like React, Vue, and Angular. Newer data formats and query languages became popular – JSON and GraphQL. A massive ecosystem developed around the idea that the logic in a web experience should happen in the browser, not on the server. The server should just blindly serve up raw data, and the browser would decide all the logic for how it displays and interacts.
Over time, this had ripple effects on how web development teams were organized. Where you used to simply have "developers," the concept of a "front-end developer" emerged. This developer's responsibility was to stay on top of the ever-increasing stack of technologies happening in the browser. The "back-end developer" was responsible for simply serving up raw data.
Management vs. delivery, server vs. browser – back to those dichotomies again. Back-end (server) developers became sequestered into management, while front-end (browser) developers handled delivery.
As the delivery ecosystem expanded, the tooling started to fracture. Where there used to be one accepted way to do something, suddenly there were two or three, and then dozens. It became impossible to stay on top of all the tools and frameworks.
What was once universal became diverse and idiosyncratic.
Let's assume your content has been comprehensively managed and is ready to go. The "logic" of delivery still involves a lot of things you need to figure out:
The most basic question is: do we do this on the server, in the browser, or some combination of the two?
As I mentioned above, this used to happen solely on the server. The server in the cloud would perform all this logic, then deliver the finished page to the browser. Then the server would sit around and wait for the user to click on something, thereby asking for another page.
But as front-end development and the browser began to increase in importance, a lot of this logic started to move into the browser. The CMS on the server became... dumber? That seems pejorative, but in many cases, the CMS became just a simple cloud data storage system that would return data requested by the logical process now happening in the browser.
A new generation of CMSs emerged to handle this – the "headless CMS." In this vernacular, delivery was the "head" and that was no longer the concern of the CMS. These systems just managed content. Delivery was handled some other way. The CMS no longer cared.
With that said, let's start with the two most basic delivery options:
This is the traditional method of delivery – the one the web was founded on, and which the majority of existing sites still use. HTML is created using server logic to represent the requested page, and that's sent to the browser, which simply renders what it's told.
The benefit is that it's quite simple. You only have one logical environment to deal with. Browsers are awfully good at rendering content, so your server can basically just tell the browser what to do.
Additionally, the server usually has access to everything it needs natively. The process that generates the output has complete access to the CMS repository, the user context, and everything else that has happened in this particular session. In most cases, the server also has access to all the other MarTech tools being used by your organization – it can contact customer data platforms, content recommendation systems, product catalogs, or anything else it needs to render the content correctly. Then, it becomes a "central integration point" and does all of it before the page leaves for the browser.
Generally speaking, it's also quite fast. Servers are mostly faster than laptops, so all the logic happens quickly. Additionally, a lot of this logic can be "cached," meaning when it's complete, the server "remembers" it and doesn't have to do it again. The second and subsequent time anyone asks for Page X, it can be delivered in milliseconds.
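As a sketch of that render-then-cache idea – the content store, template, and URL here are invented stand-ins for a real CMS repository:

```javascript
// Server-side rendering with a simple in-memory cache: the first request
// for a page does the templating work; repeat requests are served from memory.
const content = { "/about": { title: "About Us", body: "We make widgets." } }; // stand-in for the CMS repository
const cache = new Map();

function renderPage(path) {
  if (cache.has(path)) return cache.get(path); // second and subsequent hits: milliseconds
  const item = content[path];
  const html = `<h1>${item.title}</h1><p>${item.body}</p>`; // templating happens here
  cache.set(path, html);
  return html;
}
```

Real caches add expiry and invalidation when content is republished, but the principle is the same: do the work once, remember the result.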
The drawback is that the page is effectively "inert" in the browser. It's delivered to the browser in one format and doesn't do much once it gets there. When the user wants new content, they click on a link, an entirely new (also inert) page loads, and the user is scrolled up to the top of it. It's very obvious that they've transitioned to a new page.
(See "Hybrid Headless" below for ways to mitigate this.)
In this scenario, the server simply provides raw data: mostly unformatted, unaggregated and unpersonalized. The browser does all the logical work we mentioned above.
The benefit of this approach is that the page is "active" in the browser, meaning it can more easily adapt itself to what the user chooses to do. Since the server is just handing out raw data, and the browser is turning that into something visual, small areas of the page can be updated and refreshed without the entire page reloading.
This can have performance benefits. Without the need to load an entire new page, the website can be faster, both quantitatively and by perception. If users aren't waiting for an entire page to load, they perceive the site to be faster, even when it might not be in reality. (This isn't always true – we'll talk more about this below.)
This approach is helpful when a page is "busy," such as an ecommerce experience. When you're viewing a product, you might want to look at different colors, add it to your shopping cart, change sizes, etc. – you want to interact with the page, and the page needs to exchange data behind the scenes with the server. When you're executing code in the browser, all of these things can happen without the page having to reload every time. This often makes the site seem fast and modern – it blurs the lines between server and browser.
There can also be a considerable advantage during site development. As mentioned above, most web development shops have split into front-end and back-end teams, and in this approach, those teams can work independently. This results in fewer bottlenecks, and the ability to create reusable browser components that (theoretically) work with any server.
One drawback is now you have two development environments – you're essentially creating two applications. You have the server environment and the browser environment, which has been elevated to an entire application of its own.
(This has been mitigated by the rise of the SaaS CMS, which you cannot program against and which therefore doesn't really constitute another environment you need to write code for. So, now we're back to one application again – it's just moved from server to browser.)
Additionally – and confusingly – a headless site can sometimes be slower. Since we're now running a full application in the browser, the user will often need to download considerable amounts of code before the page starts "running" (we've all seen it – pages where gray "placeholder" blocks sit in the browser for 5-10 seconds before they turn into actual content; we'll talk more about this below).
Finally, browser programming frameworks have gotten very complicated. Some of the most complicated code in your project might be in the browser now. This is fine for professional services firms that have developed practices and frameworks around this, but for other customers, it can cause considerable skill gaps.
Customers sometimes feel compelled to use a "modern" approach but implement it poorly due to a lack of the required skill set, and the result is considerably worse for it. These applications can be very easy to build poorly.
If server-side or browser-side rendering were the only options to choose from, this wouldn't be too hard. However, things have devolved into all sorts of shades of gray.
This is a fancy-sounding name for a simple idea: combine the best of both server- and browser-side rendering.
Consider that every conceptual "page" has a lot of content on it, and not all of that content has the same level of dynamism. Some of it doesn't change from request to request. Some of it doesn't change for years (logos, privacy policies, the basic layout, etc.).
It's wasteful to do all the work of setting this stuff up for every request. Or even just for the first request by each user.
Hybrid headless says: "Let's use the server to generate stuff that doesn't need to change, and we'll use some code in the browser for the dynamic stuff."
Consider a catalog page that displays a product. Most of that page is static. The content description is mostly static, as are the logo, main navigation, footer, etc. The static content forms a "backbone" of the page. Layered within this are smaller, dynamic elements – the "Add to Cart" button and the little shopping cart info box, for example. Those might change while the user is on this page.
So, generate the static stuff on the server, and "animate" the dynamic stuff with smaller units of in-browser code.
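A hybrid page can be sketched as a server-rendered backbone with placeholders for in-browser "islands." Every name below is illustrative, not a real framework API:

```javascript
// Server side: render the static "backbone" once, leaving a placeholder
// element for each dynamic island.
function renderBackbone(product) {
  return `<header>Logo</header>
<main>
  <h1>${product.name}</h1>
  <div id="cart-island"></div>  <!-- dynamic island, filled in by the browser -->
</main>
<footer>Example Co.</footer>`;
}

// Browser side: only the island talks to the server and re-renders;
// the backbone never changes after it arrives.
function renderCartIsland(cart) {
  return `Cart: ${cart.count} item(s)`;
}
```

The server can cache the backbone aggressively, while the island code stays small – it only has to know about its own little patch of the page.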
A benefit here is that this can be much more approachable for many organizations. They don't have to completely throw away their server-side skills and learn an entirely new framework and approach, while still getting the benefit of an active browser environment.
Additionally, server-side rendering is mostly better with static content. The backbone of the page is delivered to the browser fully-formed, and the browser doesn't have to do any logic to render it. The active elements of the page also appear quickly, since they don't need to be animated until the user interacts with them. Additionally, it's easy for the server to cache this static backbone.
The drawback is that you potentially have logic in two environments: so you have considerable server-side logic, and some browser-side logic. This might be confusing.
Remember when we discussed how server-generated pages are "inert" in the browser? This is true, but remember that they're still generated specifically for every request on the server.
Static site generation takes server-side generation to the extreme: every page is generated on the server, and then held onto – meaning, when you publish new content, you basically generate an entire new copy of the website and publish it somewhere as a set of static files.
So now the pages are inert on the server too.
The advantage here is that this is very fast – there's no server logic at all. When a request comes in, the pre-generated page is simply sent out. Additionally, it's fault-tolerant. Static HTML files are likely the most reliable thing you could ever use to power a website. Remember that by the time a request is made of the page, the CMS is out of the picture. The user might request a page that the CMS created and stored a month ago.
Statically rendered sites can also be easier to scale. If all you have to deploy is a set of files without having to install a CMS or have any other server requirements, then it's easier and cheaper to create a server infrastructure to support this.
Additionally, your CMS can exist entirely apart from your delivery environment. In some cases, you can generate your entire website on a laptop, and then just deploy it to a massive delivery environment. Delivery has become completely "decoupled" from management.
The drawback is probably clear: the page is static. You cannot do any server-side logic. You can still do logic in the browser, but there is only one page on the server, and all the users are going to get it. Sometimes, this fits your requirements just fine. Sometimes it doesn't.
One thing to know is that different architectures aren't cleanly separated. They sort of overlap and merge into each other like suburbs in a sprawling metro area. You can mix and match, straddle multiple, cross over from one to another without really planning to, etc.
As headless architecture has proliferated, it's become less "pure." A headless site now is likely to be a combination of browser- and server-side technology, working together.
Consider some use cases:
There are few easy answers here. Questions of what architectures a particular CMS supports have liquefied into a gooey mess. While some systems are dogmatic about one particular system or framework, others take a "big tent" approach and let you do whatever you need.
They might be, but certainly not always.
I bring this up because it's traditionally been one of the selling points of headless: since you can manipulate the page in the browser, you can perform partial page updates and transfer only raw data – instead of all the data and formatting code – every time. So, the argument goes, headless sites are faster.
This is theoretically true, but a lot of it breaks down in reality.
Let's back up a bit and consider that there are three "time sinks" with any page request: (1) the time the server spends generating the page, (2) the time spent transmitting it to the browser, and (3) the time the browser spends processing and rendering it.
First, any speed gained by headless is often offset by all the code that has to be downloaded and all the processing that has to be done in the browser. A lot of this is done on that first request, which means that real speed gains start to be realized on the second and subsequent requests.
If you have a site that the user will visit again and again – so this information can be held in their browser, and they can benefit from that speed (after paying a penalty on that first-page load) – then yes, this works. But often, that isn't the case, and your in-browser application loads and bootstraps just in time to see the user leave because they were sick of staring at pulsing gray boxes.
Second, any bandwidth savings are usually negligible, or easily solved by other means. Some time spent optimizing server-generated HTML can usually pay off with some impressive benefits. Swapping an entire architecture to save 10KB on every request is like trying to kill a mosquito with a sledgehammer.
This is exacerbated by the tendency of headless pages to be very "chatty." I once investigated a very competent React re-implementation of a previously server-rendered site. I found that it made four background calls to the server to retrieve what would normally be sent in a single server-rendered response.
Third, most parts of a web page are static anyway. So in many cases, the browser is spending a lot of time performing logic to render things that don't need that logic. In most cases, the logo is the logo is the logo – just render that code on the server and send it to everyone.
So, with headless, you often just rob Peter to pay Paul.
Less time is spent rendering the page on the server (#1), but more time is spent rendering the page in the browser (#3). You might gain a couple of milliseconds during transmission time (#2), but rarely enough to matter.
So, is headless faster? Again... maybe. It fundamentally depends on what your site does. If it's an "application" that's very interactive and can benefit by updating parts of the page rather than loading a brand new page, then likely, yes – headless will be faster.
But if you have the typical marketing site where all your content is largely static for a given request, then no – headless probably isn't going to make a difference.
I'll say that when compared to a traditional server-side rendering site, the headless equivalent has just as much chance of being slower than it does of being faster.
For most organizations, I'm going to give a pretty clear "no" here, because the required skillsets for front-end development are specialized and fragmented. There's no universal browser-side rendering framework. There are dozens of them, and you need to pick one, then you need to find someone who knows it or can learn it.
Additionally, right now, headless implementations are swimming upstream against the way the web has typically worked, which means there's often a lot of re-work to solve problems that have already been solved by 25 years of web content management systems – things like URL routing, content aggregation, etc.
There are initiatives underway to make this better – things like Web Components, for example, that will provide some standardization – but we're not there yet. Right now, loading a headless site means you're downloading a custom application that initializes itself and then requests the content to display.
To be clear, several server-side frameworks have emerged to handle some of these problems. But as more and more of these technologies layer in, it becomes clear that headless doesn't resolve complexity as much as it just moves it around.
An analogy: you can absolutely make an automobile simpler and more reliable by removing the engine, but at some point, you'll need to put an engine back in it to get anywhere. Complexity wasn't resolved by removing the engine – it was just transferred or deferred.
If you're a professional services firm – meaning, you build websites for a living – then you can realize some efficiencies through economy of scale. It's easier to build reusable front-end code with headless sites, and you can have multiple front-end developers working, supported by fewer back-end developers. But the efficiencies aren't automatic – your organization really has to develop a repeatable practice around this to realize the benefits.
If the services firm you have hired has done this, will they be able to deliver your projects faster and cheaper? Yes, there's a good chance they'll have developed some very reusable toolsets that will jumpstart projects (...though the exact same could be true for non-headless architectures). This tooling could theoretically result in faster development.
(Will that theoretical time savings be passed on to you in financial savings? Well, that's between you and them...)
Other than that, it's hard to make generalizations around this. In fact, here's the only valid generalization: be very suspicious of someone who tries to give you blanket answers here. Whether or not headless is simpler is extremely contextual.
...maybe? Sometimes? It depends?
Start by asking some questions:
Can I articulate why I want to go headless? It's fair to consider server-side rendering the "default" site architecture. So if you're considering headless, you have to have a reason – what is this? Has someone told you to go headless? Do you just want to try something new? Are you just irrationally worried about avoiding a new page load? Make sure that you clearly know your reasoning and that it holds up in the real world.
Who is building your site, and what architecture do they know and prefer? If you're building it yourself, maybe don't make things harder by working with a technology you might not understand and that has an unclear value proposition. If you're hiring a company to build your website, they're going to have a preference – they will work in the way that works best for them. You should do the same.
Will you actually realize any theoretical benefits? Once the page is loaded in the browser, is the user going to simply scroll and view, or is there considerable interactive functionality on the page that will require exchanging data with the server? If so, then headless might be worth looking at. Look critically at your content. Is there anything going on there that would actually provide a performance benefit?
Does any of this matter? Remember, your focus is on the user and their experience. Are you "majoring in the minors" here? Is a large expense of decisional and technical effort actually going to result in more goal achievement for your digital property? It's easy to lose sight of the forest for the trees.
Lots of organizations have been penny-wise and pound-foolish by deeply investing in a new architecture that no one except their internal developers ever noticed or appreciated. Know that for the average website, architectural questions like this don't even make the Top 10 list for things that actually contribute to success.
...yes. Whatever you want is fine.
That's snarky, but accurate: we do it any way you want to do it. We can support any of the four "major" and "other" options outlined here – plus all the "murky" ones – because we've evolved around delivery flexibility: we want you to work with your content any way you like.
Here are some details of different ways you can deliver Optimizely content:
Traditional server-side rendered HTML, templated by a C# developer in Razor: This is the traditional way of working with our CMS. Back-end developers like it because the templating (the combination of content and presentation) is done in a language they know and prefer. The drawback is that not everyone should be writing C#, and it tends to over-centralize development.
Traditional server-side rendered HTML, templated by a front-end developer in Liquid: One of the drawbacks of using Razor (above) is that you need to know a server-side language, and it's hard to distribute the workload between front-end and back-end developers. Liquid templating solves this by introducing a simpler, more template-friendly language that can be detached from the main project for a separate group of developers.
Headless, using the framework of your choice: We have multiple options for getting raw data from the server, including APIs using both REST and GraphQL. We have examples of our code running in React and Vue frameworks, as well as any type of bespoke browser-side programming that you can invent.
Hybrid headless, server-side rendered HTML enhanced with headless programming: This is simply a combination of the above. Clearly, our CMS can render content server-side, and this content can be enhanced with browser-side programming, which can communicate with the server using any of our remote APIs.
Static site generation: This can be accomplished in multiple ways. Lots of SSG frameworks exist, any of which can retrieve data from our server. Our event-based programming model means automated rebuilds can re-deploy the site when content changes. Content can be deployed to a simple web host, or a more complicated environment like Netlify or Vercel.
HTML-as-a-Service, using Razor or Liquid: This is an emerging model (mentioned briefly above) where a hybrid headless solution retains all its templating on the server, even for interactive elements. Partial page updates are rendered on the server, sent as HTML, and then grafted into the page in the browser. Given our natural server-side rendering abilities, this is easily accomplished with our CMS.
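The HTML-as-a-Service pattern can be sketched in a few lines – the server renders a fragment with its normal templating, and the browser only grafts it in. The endpoint, element ID, and payload here are hypothetical:

```javascript
// Runs on the server, in the same templating approach as the full page:
// even partial updates come back as finished HTML, not raw data.
function renderFragment(cart) {
  return `<div class="cart">${cart.count} item(s) in your cart</div>`;
}

// Runs in the browser: no client-side templating at all - fetch the
// pre-rendered HTML fragment and graft it into the page.
async function graftCartFragment() {
  const res = await fetch("/fragments/cart"); // hypothetical endpoint
  document.getElementById("cart-slot").innerHTML = await res.text();
}
```

The browser-side code stays trivial because all the rendering knowledge lives in one place: the server.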
Optimizely CMS is largely delivery-agnostic. We help you model, create, aggregate, and manage content. We supply a full suite of delivery tools, but if you want to do something different, we'll gladly step out of the way and support your chosen framework or architecture.
It's your content. Deliver it how you want.
I absolutely don't hate headless. I've been working with headless technology since before it had a name. (Hell, I had an account on Contentful when it was still called "Storage Room").
But what I get really frustrated with is the mindless, universal application of headless. I get very, very annoyed with headless vendors who play fast and loose with promises – lots of things sound logical in theory, but don't pan out in reality. And everyone knows it (except, sadly, the client).
I also get annoyed with service providers who make promises around headless, not because it will actually benefit their customers, but because it will be easier or more profitable for them. I was in the services business, so I understand the need to make money, but I've seen more than one situation where a huge headless re-architecture was undertaken, the sole tangible result of which was a massive invoice at the end of the project. The client was left wondering what benefit they actually paid for.
The truth is that there are lots of shades of gray, both between projects and within the same project. Not all sites are the same, and not all content within the same site is the same. You need the flexibility to handle content with the level of dynamism and stability that it deserves. Sometimes this means headless; often, it doesn't.
At the end of the day, everything else being equal, flexibility always wins. Find a system that doesn't make decisions for you, and lets your delivery architectures morph from project to project, from time to time, and from content element to content element.
It'd be handy to have a system that's cool with any architecture, wouldn't it?
Most of all, don't rush into a plan just because everyone is telling you it's the right thing to do, but are short on reasons. Remember: the goal is to improve your customer's experience in such a way that results in tangible benefits to your organization. Anyone trying to convince you of something new needs to provide concrete evidence that their approach will actually move the needle in the right direction, not just magic hand-waving.
Do the research, ask the questions, and be circumspect of the answers.