The last time I contributed a blog to these pages, it was about cyber security. I said then that everyone – software developers and broadcast users – needed to think very carefully about it. I will return to this subject in a few moments.

You really do not need me to tell you that the debate of the moment, way above everything else in media technology, is the move from traditional broadcast hardware to software systems running on standard computers and connected by IP. The IT industry, thanks to Moore’s Law, can now deliver enough processing power to handle all the really hard things we in broadcasting need to do.

That traditional broadcast hardware had to be designed specifically for television because no-one else needed that scale of processing. Systems engineers were constrained by what the hardware could do, and once it was assembled into a workflow, that workflow was more or less fixed.

Most important, that hardware was relatively expensive, because it required sophisticated R&D but was only of interest to a relatively small market. So building a new or replacement system was a major capital expenditure, which then had to sit in place until the capital was amortised, which typically meant that systems had an expected life of seven to 10 years.

With broadcast equipment increasingly becoming software applications running on standard hardware, our industry is looking more and more like a branch of the IT business. The benefits – like virtualisation, and fine-tuning workflows through the use of micro-services – are huge.

But we also have to accept that we now need to think more like the IT industry. This means thinking about system refresh cycles.

If you were to say to an IT veteran that you wanted to build a system and leave it completely untouched for 10 years, you would be laughed at. I’m sure that no-one is reading this on a 10-year-old computer. [I’m actually writing it on a nine-year-old Mac Pro, but it has been heavily updated and is due for replacement in the next few days.]

I’ve already mentioned Moore’s Law, the observation that the number of transistors on a chip (and with it the computing power available to us) roughly doubles every couple of years. Gordon Moore’s prophecy continues to hold true, so we can benefit from continual boosts in performance simply by updating the hardware regularly.

The broadcast system of the future will depend upon that continually improving hardware being host to sophisticated, specialist software that performs all the tasks that broadcasters need, from technical processes like transcoding to operational requirements like scheduling and asset management. The separation of hardware and software means that the applications can be continually improved with new releases.

We know this happens. I use an iPhone, and there is scarcely a day when I do not get an update to at least one of the applications on it. Many of us will use consumer software packages, like Microsoft Office and Adobe Creative Cloud, and again there are routine updates to improve functionality and, significantly, security.

As I said at the beginning of this piece, security is a serious issue. Over the last year we have seen cyber attacks come close to bringing down major businesses, from broadcasters to government departments. With software-defined broadcast technology inherently connected over IP networks, this is now an issue which cannot be ignored. Security patches are a good thing: they show that developers are continuing to care.

So the future of broadcast systems is that they are going to be very smart software applications running on standard hardware, just like the systems used by banks or airlines or motor manufacturers. Vendors will talk of agility, of being able to add new functionality as it becomes necessary. Systems will very definitely not be financed and implemented on the basis of “design now and set in concrete for the next decade”.

So it is inevitable that the financial basis of broadcast systems will shift to a more IT-like model. Essentially they will move from capex (you pay the vendor a lot of money for a system, and maybe a notional amount for continuing support) to opex (you license the functionality you require, with the ability to change that requirement whenever you want to).
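To make the contrast concrete, here is a deliberately simplified sketch of the two cashflow shapes. All of the figures below (purchase price, support fee, monthly licence) are hypothetical numbers chosen purely for illustration, not real pricing.

```python
# Simplified comparison of capex and opex cashflows over a system's life.
# All figures are hypothetical and only illustrate the shape of the two models.

YEARS = 7

# Capex: large up-front purchase plus a notional annual support contract.
capex_purchase = 700_000          # one-off payment at delivery
capex_support_per_year = 35_000   # notional support, roughly 5% of purchase price

# Opex: a monthly licence that can be scaled up or down as requirements change.
opex_licence_per_month = 9_000

capex_total = capex_purchase + capex_support_per_year * YEARS
opex_total = opex_licence_per_month * 12 * YEARS

print(f"Capex model over {YEARS} years: {capex_total:,}")
print(f"Opex model over {YEARS} years:  {opex_total:,}")
print(f"Cash not tied up in year one:   {capex_purchase - opex_licence_per_month * 12:,}")
```

The totals matter less than the timing: under opex the money that would have been locked into a day-one purchase stays available, which is exactly the point made in the closing quote below.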

Opex is tough for a vendor: if you have been used to collecting that big chunk of money at the point of delivery, then you have to rethink your cashflow. But the move to a licence fee gives the vendor the resources, and the driver, to maintain and refine the system, and to add new functionality that users will want to license.

Significantly, it gives the vendor the impetus to keep the system in the forefront of its attention, so any security issues can be dealt with very quickly indeed.

Security aside, the move to a recurring licence is good for the vendor but it is even better for the user. Now systems can be tailored precisely to the requirements of the organisation, and their use can be tied directly to individual workflows and thus to revenue streams.

If you know precisely how long a licensed function is used on a particular project, and how much processor time that involved, you can work out the cost of doing something very accurately indeed. And that has to be good for business.
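As a rough sketch of how that costing might look: the function names, hourly rates and usage figures below are all hypothetical, and in practice they would come from the vendor's licensing reports and the workflow logs rather than being typed in by hand.

```python
# Rough sketch of costing a project from licensed-function usage.
# Rates and usage figures are hypothetical placeholders.

# Hourly licence rate for each function, plus a rate for processor time.
licence_rate_per_hour = {"transcode": 12.0, "qc": 8.0, "asset_management": 5.0}
processor_rate_per_hour = 1.5

# Hours each licensed function ran on this project, and total CPU-hours consumed.
usage_hours = {"transcode": 40, "qc": 15, "asset_management": 120}
cpu_hours = 220

licence_cost = sum(licence_rate_per_hour[f] * h for f, h in usage_hours.items())
processing_cost = processor_rate_per_hour * cpu_hours
project_cost = licence_cost + processing_cost

print(f"Licence cost:    {licence_cost:,.2f}")
print(f"Processing cost: {processing_cost:,.2f}")
print(f"Project cost:    {project_cost:,.2f}")
```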

The last word goes to Ramki Sankaranarayanan, CEO of Prime Focus Technologies. At a conference session at NAB in 2017, he said: “Media company CFOs ask me if it will cost more to move to opex. I tell them that the cashflow they will release can be put into content which will make them far more money than anything they may lose on the total cost of ownership.”

 

Guest blog by:

Dick Hobbs

Independent Industry Commentator and Consultant

 
