Last year, stories circulated on social media about James Haskell, the England international rugby star. They varied in detail: one suggested Haskell had died of a drug overdose, while another accused him of selling drugs and claimed his rugby career was finished. What the stories had in common was that they were all 100 per cent fake and had been posted via paid-for advertisements on Facebook.

The chaotic way in which the internet has grown has led to significant hitches, few more fundamental than the issue of how to separate fact from fiction. The internet is ripe for exploitation by unscrupulous individuals and companies (not to mention sovereign states) willing to prey on our appetite for tittle-tattle and self-serving falsehoods in order to peddle a chosen line or product.

Social media platforms are beginning to admit the scale of the problem they face in addressing online misinformation. Failing to deal with it adequately can have far-reaching consequences, as hinted at perhaps most starkly by the allegations of interference in the 2016 US presidential election.

For many, social media sites are now the principal source of news, and with that should come significant responsibility. Despite Facebook and others indicating that they are developing tools for better detection and verification of false and unlawful content, much work remains to be done.

The first port of call for many victims will often be the social media platform’s own removal procedure. But this can be a frustratingly blunt tool, not least as it is an ineffective means of dealing with repeat offenders.

The ease with which it is possible to conceal one’s true identity on social media sites can embolden individuals to unleash a flurry of false or defamatory posts shielded by a cloak of anonymity. This can cause major difficulties in preventing the wrongdoing and taking the necessary legal action.

There are, however, steps that can be taken to identify anonymous users. Often the most effective is to seek what is known as a Norwich Pharmacal order. This court procedure can be used to compel social media sites to disclose the identity of their users so that action can be brought against wrongdoers.

Alternatively, it may be possible to bring an action against the social media site itself. The current position in this jurisdiction is that social media platforms may be held liable for the publication of defamatory material, but only after they have been notified of its presence. Internet intermediaries such as Facebook may in certain circumstances also face liability under the Data Protection Act 1998 in respect of the processing of inaccurate information. These are challenging areas of law that are developing rapidly, and a claim against a corporate behemoth such as Facebook is certainly not for the faint-hearted.

Mr Haskell has spoken publicly about the difficulties he has had with Facebook over the removal of the offending material and the identification of those responsible. The internet echo chamber is uniquely fertile ground for the spread of false and defamatory material. Addressing and containing the fallout from the publication of such material online will be one of the challenges of the internet age for tech giants and lawyers alike.

Andrew Willan is a solicitor in the Privacy & Media Law Team at Payne Hicks Beach