How Big Can CMS Infrastructure Get?

How big can a production CMS server get?

The short answer is as big as your blank check allows.


As you add users, workflow, translation, heavy publishing, and frequent opening and changing of components, the CMS server will inevitably need to grow.

Often, all of the CM services are enabled on one large machine for the enterprise, but the downside is that when one problem happens, every aspect of your CMS fails with it. Below I show what an infrastructure looks like that handles very large publishing loads and can scale to take on more. The CMS also handles workflow services and translation integration with WorldServer.

You will have to click the image to open it in a new window and zoom in to read the details.

Tridion PROD Infrastructure 2013

 

Turning on only the publishing and transport services allows you to move the rendering of pages (the publishing load that causes the performance hit seen by CMS users) over to another server, or a pool of servers. This also allows your dedicated publishing servers to scale as the publishing load increases: if publishing isn't keeping up with demand, you can spin up another machine and add it to the pool to gain throughput. The less time editors spend staring at “Waiting for publish”, the more work they complete!

The advantage of doing this doesn't stop at scaling. Your users no longer see a slowdown in system performance when publishing. If publishing needs to be restarted, users can keep working in the CMS, and publishing resumes when the server comes back up. The publishers can be restarted in series to allow for zero downtime.
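If you want a signal for when the pool needs another member, the publish queue itself is a reasonable one. Here is a minimal C# sketch of that idea, not the exact setup described above: it assumes a Core Service client endpoint named netTcp_2013 in your config, and that the 2013 Core Service exposes PublishTransactionsFilterData and PublishTransactionState with the names shown, so check them against the API reference for your version.

```csharp
// Minimal sketch (assumptions noted above): count items waiting for publish
// and flag when the backlog suggests adding another publisher to the pool.
using System;
using Tridion.ContentManager.CoreService.Client;

class PublishQueueMonitor
{
    static void Main()
    {
        // Endpoint name is an assumption; use whatever your client config defines.
        using (var client = new SessionAwareCoreServiceClient("netTcp_2013"))
        {
            // Assumed filter/enum names; verify in your Core Service API reference.
            var filter = new PublishTransactionsFilterData
            {
                PublishTransactionState = PublishTransactionState.WaitingForPublish
            };

            int waiting = client.GetSystemWideList(filter).Length;
            Console.WriteLine("Items waiting for publish: " + waiting);

            // Hypothetical threshold: if the backlog stays above this for a while,
            // spin up another publisher VM and add it to the pool.
            const int backlogThreshold = 500;
            if (waiting > backlogThreshold)
            {
                Console.WriteLine("Backlog is high; consider adding a publisher to the pool.");
            }
        }
    }
}
```

In practice you would run something like this on a schedule and feed the result into whatever provisioning process you use for the publisher pool.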

In addition to the dedicated CMS publishers, you can also repurpose a dedicated DR (Disaster Recovery) CMS box to pick up additional publishing capacity when it is not otherwise in use. If you need the DR server, it is as simple as starting the rest of the services, and it is back to being a fully purposed DR CMS environment in minutes.

As for the DR environment, it's a fairly simple setup. Log shipping is set up between the Live CM database and the DR CM database. If the DR environment is needed, you simply turn off the log shipping and the DR CMS will start working.
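Purely as an illustration of that failover, here is a minimal C# sketch (not the exact runbook above): once the log shipping restore jobs have been stopped, the DR copy of the CM database is recovered so the DR CMS can write to it. The server name and the Tridion_cm database name are assumptions.

```csharp
// Minimal DR failover sketch, assumptions as noted above: the DR CM database
// is a log-shipping secondary sitting in NORECOVERY/STANDBY, and the DBA has
// already stopped the log shipping restore jobs.
using System;
using System.Data.SqlClient;

class DrFailover
{
    static void Main()
    {
        // Hypothetical DR SQL Server; connect to master, not the database being recovered.
        const string connectionString =
            "Server=DR-SQL01;Database=master;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Bring the log-shipped secondary online so the DR CMS can use it.
            using (var command = new SqlCommand(
                "RESTORE DATABASE [Tridion_cm] WITH RECOVERY", connection))
            {
                command.ExecuteNonQuery();
            }

            Console.WriteLine("DR CM database recovered; start the Tridion services on the DR CMS.");
        }
    }
}
```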

There are additional ways to scale out as needed. If you have a lot of workflow processes running, you can have a dedicated machine just to run them. This has similar benefits to the dedicated publishers.

All static files go to central data-center NAS servers, which serve the static content to the individual web servers.

The CMS machines are fairly large by Tridion standards. For production, we have 3 dedicated publishing servers, each with 48 CPUs and 256 GB of memory. This allows for an optimal 108 rendering threads and 72 deployer threads for high-speed mass publishing.


What, Why Can’t I Publish My Stuff?

You’ve set up your content delivery environment and have your topologies all ready to go. Your publication has a Business Process Type, so you go to publish your first bit of content…

What? Where are my targets?

Huh? Where’s the target? You try a CME refresh, service restarts, even a server reboot with no luck.

This is a little gotcha that Dom Cronin pointed out at TDS 2016 and which I had missed. As well as creating a Business Process Type (BPT), you also need to specify that BPT in a publication's properties before you can publish items to it from that publication.

So, add your BPT to your publication's properties and you will have your target available when publishing items from that publication. All is now good with the world.
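If you prefer to script it, the same association can presumably be made through the Core Service instead of the CME. Treat this as a hypothetical sketch: the BusinessProcessType property on PublicationData, the LinkToBusinessProcessTypeData type, the endpoint name, and the TCM URIs are all assumptions to verify against your Web 8 Core Service API.

```csharp
// Hypothetical sketch only: point a publication at a Business Process Type
// via the Core Service. All names marked below are assumptions.
using System;
using Tridion.ContentManager.CoreService.Client;

class AssignBusinessProcessType
{
    static void Main()
    {
        // Endpoint name from your client config (assumption).
        using (var client = new SessionAwareCoreServiceClient("netTcp_201501"))
        {
            var readOptions = new ReadOptions();

            // Placeholder publication URI.
            var publication = (PublicationData)client.Read("tcm:0-5-1", readOptions);

            // Assumed property and link type; in the CME this is the BPT field
            // on the publication's properties dialog.
            publication.BusinessProcessType = new LinkToBusinessProcessTypeData
            {
                IdRef = "tcm:5-123-2048" // placeholder BPT URI
            };

            client.Update(publication, readOptions);
        }
    }
}
```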


WebSphere 8.5.* Content Delivery Issues


Recently I was involved in the setup of a new content-delivery environment, migrating from a WebSphere 7.* application server to WebSphere 8.5.5. Right away when we started the application, we began seeing content-delivery errors with our session preview web service that we had not experienced on WebSphere 7. I'm going to review the problem and talk about how it was resolved. Continue reading

Building and automating a scaled out Fredhopper/SmartTarget environment

While designing and building a Fredhopper/SmartTarget enterprise environment recently, a couple of interesting requirements came up. The first requirement, quite often asked for these days when building infrastructures, was that every Fredhopper component needed to be automatically deployed, configured and run. The second hard requirement was that, in a production environment under constant high load and distributed across multiple data centres, the Fredhopper index servers need to stay in sync and be highly available in each data centre, with failover mechanisms that cause the least possible disruption. After a lot of headaches, trialling, erring and creating a mountain of broken Fredhopper instances in the process, we finally managed to meet both requirements, and this post shows how.
Continue reading

Running a .Net DD4T application on Linux

DD4T .NET on Linux

 

At the Tridion Developer Summit in September, Siawash Shibana and Albert Romkes gave a presentation of a DD4T .NET application running under vNext (the codename for the next .NET framework) on Linux.

Siawash and Albert have made the application publicly available, and although it currently uses mock SDL Web 8 provider objects, plugging it into a real-life content delivery service should be really simple when SDL Web 8 is released this month.

Continue reading

TridionRsaProtectedConfigurationProvider, and the Art of Zen


And it’s Monday. Just sayin’.

“R-S-A protected configuration provider?! What the f#$k is that, and why the f#$k is it dying an inglorious death while taking my CM with it?!” Those words, friends, are mine. It’s a rare issue that can make me angry enough to club a baby seal (read: silently whisper expletives at an inanimate object), but it’s Monday. And I’ve had coffee… lots of coffee. Y’see, I’ve recently been involved in upgrading an entire organization’s mission-critical servers from Tridion 2011 to 2013. For the most part, as we all hope in such circumstances, it’s been a breeze; nothing to set the pulse aflutter. Until today. And that terrible, miserable, unhelpful exception provided by ASP.NET.

Continue reading

Content Delivery on Redhat Linux with Oracle 12c – Part 1


This is the first in a three-part series on setting up Tridion Content Delivery on Redhat Linux with an Oracle 12c database.

Read part two of the guide that steps through the Oracle Database installation and part three that deals with the Tridion Content Delivery database installation.

SDL’s Tridion documentation does not go very deep into the setup of Content Delivery in a Linux environment, and I have found little content out in the community around this. I thought it would be valuable to create a few videos that step through the installation of a Redhat Linux server with Oracle 12c for Content Delivery, for those with little Linux and Oracle experience. This isn’t intended as a setup guide for a production server (that’s what sysadmins and DBAs live for), but it will give you your own instance of a working Linux/Oracle CD environment that you can play around with.

Continue reading