Choice: Break up Facebook – or Take It Into Public Ownership? I Am Not Kidding

by Paul Mason

19 March 2018

Image: Brian Solis/Flickr

Facebook let a firm called GSR scrape 50 million user profiles and sell the data to another firm, Cambridge Analytica, whose express purpose was to manipulate electoral behaviour in favour of Donald Trump. That’s the one-paragraph summary of a story that will unfold with increasing complexity this week.

Cambridge Analytica will be in the frame, above all, for lying to British MPs – and is now being investigated by the authorities in Massachusetts, where it is based. But the scandal is just the latest in a series for Facebook, creating an existential moment for the world’s biggest social media corporation.

Here’s why: Facebook’s £200bn stock market valuation is not only based on its current profits. It is based on the assumption that it will go on making profits and expanding them by ‘monetising’ user data.

In fact, like all tech giants, its long-term future depends on it being able to roll out world-changing artificial intelligence (AI) applications based on its treasure trove of user data.

Yet it has long been apparent to top executives in all the tech giants that a) the current situation can’t go on, and b) the future profits cannot be realised without a massive change in the relationship between the corporations, the users and the states that regulate them.

The 50 million profiles were harvested and used without the permission of the people they belonged to. Facebook says this wasn’t a breach, so it must be confident that its pathetically one-sided terms and conditions, which users consent to, covered this use of data.

Facebook compounded its problems by failing to disclose that the data had been used this way – until journalists demanding answers forced the company to suddenly ban Cambridge Analytica from accessing Facebook data.

But if it’s not a breach of contract, all this means is that Facebook’s entire business model could be seen as forcing people into contracts that are not in their own interest. The same corporate outcome is likely – a loss of confidence and a threat to what in business is called your ‘social licence to operate’. Ask any tobacco or oil executive what that means.

I am personally sick of Facebook as a product. My timeline is so simultaneously chaotic and boring that I almost never look at it. But I don’t want to kick Facebook while it’s down. Instead, what we need is a mature conversation between the tech companies, the states that regulate them and the populations those states are supposed to serve.

The scandal poses acutely the problem all tech firms are going to face as we move towards a world of automation, algorithms and AI.

The crucial information for all AI applications is the identity register. You can learn that young dads buy beer and nappies at the supermarket by crunching the sales data without knowing the name and address of each dad. But artificial intelligence needs to know the identity.

To be able to design and sell healthcare, insurance, finance and other consumer services – which is the pot of gold all tech companies are searching for – you need a verifiable ID register of millions of users, and you need an explicit licence to operate.

Yet in the first 10 years of the social media revolution, Facebook – and arguably all tech giants – has simply manipulated user data for short-term profit.

At the same time it followed classic monopoly strategies of buying up technologies that could have formed an alternative to the ‘social media timeline’ we associate with Facebook.

So as people get tired of Facebook, and migrate to Instagram or WhatsApp, they are simply moving from one part of Facebook’s empire to another.

Even under classic monopoly capitalism the pattern has usually been for sectors to be dominated by four big players, competing and collaborating at the same time, sometimes running rings around regulators but then establishing a productive symbiosis with them.

In high street banking, for example, there are four big players; ditto in accountancy; ditto in the supermarket sector and the manufacture of jet engines.

But the tech sector cannot tolerate even that level of competition. The big tech firms – Facebook, Apple, Google, Samsung, Amazon, Tencent and Alibaba – form one-sector monopolies. Infocapitalism can only really tolerate one search firm, one friendship firm, one e-commerce firm. And when I say tolerate I mean encourage – because the regulatory authorities in the USA, EU and China have done zilch to discourage such massive agglomerations of power and wealth.

In turn the tech companies have adopted a policy of ‘smash and grab’: they deploy a technology into a market which, naturally, has no rules to govern how that technology works, but then challenge lawmakers to ‘come and get us’ – confident that parliaments and civil servants have neither the expertise, the will nor the interest to do so.

The Facebook-Cambridge Analytica scandal is the moment when this has to end.

Fortunately we have three well-established tools within capitalism to end it: regulation, breakup and nationalisation. The tech companies not only know these things are coming – their bosses privately spend a lot of time working out what they’re going to do about them.

Here’s how it could work. If there have to be five or six banks, airlines or major civil engineering contractors operating in each market – because of competition law – then there is nothing to stop Facebook being broken up into six pieces in each national market it operates in.

Just as I can take my money out of Natwest and open an account at HSBC within 24 hours, it should be possible for me to move my Facebook user data from one provider to another just as easily. The firms would have to meet the same standards as now, but they could compete on, say, the absence of advertising and gratuitous bullshit on my timeline, or on refusing to give my data to political parties I don’t like. One of the competitors could even be owned by its users.

That’s the kind of natural market diversity that even quite concentrated markets used to create, before we let unaccountable tech giants dictate policy.

If you think this is an off-the-wall policy, rest assured, the boardrooms of the giant corporations see it as the minimum they are going to get away with.

But there is a different option: public ownership of at least part of the technical infrastructure needed to keep social technology going.

If you’re thinking “I don’t want my user data known by the government” you are dead right. Unfortunately, the only thing that stops Facebook giving your user data to the Russian government, or the Chinese Communist party, is the democratic state you live in.

If we want to ensure giant tech corporations use our data fairly and responsibly, there already has to be rigid and enforceable state control over them – and that is what didn’t happen in the Cambridge Analytica case, because neither the British, EU nor the US governments gave a shit about enforcing data protection laws.

But in the future, when we need to unleash the massive potential of artificial intelligence to use our personal data for our own benefit and the public benefit, the most obvious way to solve this problem is through a publicly owned, state guaranteed ID registry.

If companies like Cambridge Analytica already hold 5,000 data-points on millions of individuals, it is likely they know what you are going to die of and when. Once we start sharing our healthcare and other personal data with AI providers the potential benefits are huge. The question is who reaps the benefits: an insurance company that’s going to deny you insurance or the individual user who is going to utilise knowledge generated from millions of other people’s experiences, past and present, to live a healthier and longer life?

To realise the benefits and minimise the risks of handing over large amounts of our personal data, some form of public ownership and control of the central registry might be better than simply breaking up the monopolies and using regulation to force them to compete.

Here’s how it would work: every citizen has the option of creating a contract with a centralised authority to hand over their behavioural data. The contract would be revocable at any time. The infrastructure for the exchange of data could be publicly owned – it would have to be completely under public control in any case, like the railway tracks in the half-privatised British railway system.

Then, tech corporations could compete with each other in the innovative use of such data. But the ultimate owner of the data, and controller of how it’s used, would be each member of the public – with rights guaranteed by the state itself.
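To give a rough sense of the shape such a registry could take, here is a minimal sketch in Python of a revocable-consent record and the permission check providers would have to pass. Everything in it – the ConsentRegistry and ConsentRecord names, the grant, revoke and is_permitted calls – is hypothetical illustration of the idea described above, not a description of any existing or proposed system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of a publicly owned consent registry.
# A citizen grants a named provider revocable permission to use a
# category of their behavioural data; the public registry, not the
# provider, is the source of truth for that permission.

@dataclass
class ConsentRecord:
    citizen_id: str                    # verified ID issued by the public registry
    provider: str                      # e.g. a social media or healthcare AI firm
    data_category: str                 # e.g. "timeline", "health", "location"
    granted_at: datetime
    revoked_at: Optional[datetime] = None

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: list[ConsentRecord] = []

    def grant(self, citizen_id: str, provider: str, data_category: str) -> ConsentRecord:
        record = ConsentRecord(citizen_id, provider, data_category,
                               granted_at=datetime.now(timezone.utc))
        self._records.append(record)
        return record

    def revoke(self, citizen_id: str, provider: str, data_category: str) -> None:
        # Revocable at any time: mark every matching live grant as revoked.
        for r in self._records:
            if (r.citizen_id == citizen_id and r.provider == provider
                    and r.data_category == data_category and r.revoked_at is None):
                r.revoked_at = datetime.now(timezone.utc)

    def is_permitted(self, citizen_id: str, provider: str, data_category: str) -> bool:
        # A provider may only process data while an unrevoked grant exists.
        return any(r.citizen_id == citizen_id and r.provider == provider
                   and r.data_category == data_category and r.revoked_at is None
                   for r in self._records)
```

The point of the sketch is purely architectural: the permission check lives in the public infrastructure, a provider cannot grant itself access, and a citizen can revoke at any time.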

This is not the same as outright nationalising Facebook, but let’s remember that, in the final analysis, that is what has happened throughout the history of capitalism to companies that created critical but unprofitable public infrastructure.

Public ownership of the ID registry, combined with strict democratic control and revocability of permission over the data, could – if done right – create an incentive for the tech companies to innovate; it could allow new entrants into a market designed to suppress them. And it would empower the least powerful player in the three-cornered tech economy – the user – to take on both the corporations and the states.

Build people-powered media.

We’re up against huge power and influence. Our supporters keep us entirely free to access. We don’t have any ad partnerships or sponsored content.

Donate one hour’s wage per month—or whatever you can afford—today.
