Will the WSJ really be free?
Jinfo Blog

19th December 2007

By Tim Buckley Owen


Well – the deed is done: on December 13, News Corporation completed its acquisition of Dow Jones http://www.newscorp.com/news/news_359.html including the venerable Wall Street Journal. So will Rupert Murdoch be able to make any money out of it – and what will it mean for infopros? On the very day of News Corp’s announcement, Dow Jones’s competitor Bloomberg suggested that it might take Murdoch three years to see any profit from his new acquisition – pointing out, of course, that the deal also included another newspaper, Barron’s, plus the newswires and Factiva.

Murdoch has already said that he intends to make online access to the WSJ free, adding in an interview on his own Fox News http://digbig.com/4wdsx that it might need up to 20 times more readers than it has at the moment to achieve the ad sales necessary to compensate for the loss of existing subscription revenue.

Free access has to be good news for infopros – but of course the pricing model only works if enough people either click through on the ads or are enticed into using other Dow Jones services. So it can be no coincidence that Dow Jones also chose December 13 to announce new capabilities for its Client Solutions division http://digbig.com/4wdsy incorporating targeted news delivery (customised or managed), taxonomy creation and full integration with customers’ in-house services. ‘Dow Jones's consultative approach empowers us to truly understand our customers' information needs and provide them with a solution that helps their employees be more effective in their roles,’ said Client Solutions vice president Arthur Rassias.

Help for the WSJ may also be at hand from an initiative by leading news organisations to gain greater control over how the search engines index and display their offerings. According to Associated Press http://digbig.com/4wdta the proposals arise from a recent dispute between Google and Agence France Presse http://web.vivavip.com/forum/LiveWire/read.php?i=780 over the search engine’s use of the agency’s news summaries, headlines and photos. At present Google, Yahoo and the rest voluntarily respect the wishes of each website they visit by looking for a robots.txt file, which tells their crawlers which parts of the site they may and may not index – blocking individual web pages, specific directories or even the entire site (a short sketch of how this works appears at the end of this post). The trouble is that the system was developed in 1994, and it is now showing its age.

So what the publishers want to do instead is extend these limitations through the new Automated Content Access Protocol http://www.the-acap.org/ which would give them more flexibility to express their terms and conditions on access and use. It would still be voluntary, and would probably need to be tested in the courts eventually. But would it restrict the routes to free news and drive users inexorably towards premium value-added news services? At this stage, only time will tell.
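
For the curious, here is a minimal sketch of how the existing robots.txt mechanism works, using Python’s standard urllib.robotparser module. The rules, paths and crawler names below are invented purely for illustration and are not taken from any real news site.

```python
# Minimal sketch of a crawler consulting robots.txt rules before indexing.
# All directives, paths and bot names here are illustrative only.
from urllib.robotparser import RobotFileParser

SAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /premium/            # block a whole directory
Disallow: /archive/2007.html   # block a single page

User-agent: ExampleBot
Disallow: /                    # block this crawler from the entire site
"""

rp = RobotFileParser()
rp.parse(SAMPLE_ROBOTS_TXT.splitlines())

# A generic crawler may fetch the homepage but not the premium directory...
print(rp.can_fetch("SomeOtherBot", "/index.html"))     # True
print(rp.can_fetch("SomeOtherBot", "/premium/today"))  # False

# ...while the crawler singled out by name is barred from the whole site.
print(rp.can_fetch("ExampleBot", "/index.html"))       # False
```

As the sketch shows, robots.txt can only say ‘index this’ or ‘don’t index this’; it has nothing to say about terms and conditions of use, which is exactly the gap the publishers hope ACAP will fill.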
