The current crisis with Facebook isn’t rooted in its
management or even in its original design. Rather, the sources of its problems go
back to its IPO, which occurred in 2012 when the company was valued at $104
billion. Specifically, Facebook’s stock was priced by Wall Street on the
assumption that it was 1) the antithesis of privacy; and 2) able to grow
revenues more or less forever at a rate substantially faster than expenses. A
corollary of this second point was that the company would remain largely free
of the inherently unpredictable costs of human employment (Wall Street hates
people costs and rewards companies that keep them as low as possible).
Facebook was valued initially – and continues to be priced
in the stock market – as a mechanism that seeks to minimize privacy. The reason
for this is simple – if more people share personal information, more
advertising can be sold. And it’s definitely better if each user shares more. Volume
matters because the cost of sharing (primarily computer storage and communications)
is a fraction of the revenue from advertising.
Much is made of Facebook's network effects, especially
the obvious fact that multiple iterations of a "social network" run contrary
to the purpose of networking. Sharing that is easy with a single entity becomes
burdensome when there's a need to access multiple platforms.
And, despite what some argue, users strongly value
convenience over privacy. This is empirical. Facebook has always had options to
limit sharing but these have been largely ignored. And, there are and have been
many other vehicles for sharing photos and comments within a tightly defined
group, e.g. family. People choose Facebook (FB) over these more secure methods
because it’s more convenient and they really don’t care all that much about
privacy past a certain point.
FB’s management did stumble badly by selling user data
without permission. Multiple disclaimers from Zuckerberg and friends aside, the
simple explanation for this can be stated in one word: “greed.” But given the
blowback, that was almost certainly a one-time thing. The spotlight isn’t going
to stop shining and it's very unlikely the same mistake will be made again. (FYI:
I don't use FB, never have, and never will. Nor do I own the stock.)
The demonstrated issue of convenience explains why
separating allied platforms from FB isn’t going to work. The fact that so many
are suggesting this move reinforces the idea that future historians will refer
to current times as the High Moronic Age. The reason for variations on FB such
as Instagram and WhatsApp isn’t that people want separate systems with separate
user bases, it’s that they want a variety of tools to access the same base.
Divide things up and people will just seek ways to combine them again.
In summary, as it stands now, and given how Facebook is
valued on the stock market, the term “Facebook privacy” is an oxymoron. But
that’s what people want.
The other dimension of Facebook, its intrinsic reliance on
machines instead of people, is easier to change, but at a cost.
Facebook’s human expenditure is on engineers and other
technical staff. These can be expensive, and it’s true that some of the most
accomplished engineers command a scarcity premium. But FB's needs
are mostly low-level, and it can hire anywhere in the world, drawing much of its
technical staff from countries with a low cost of living. In the meantime, the
amount of information handled by one technical staff person increases every
year. Overall, the expense of technical staff is assumed to decline as a share
of revenue. That means profits go up and so does the stock price.
FB never intended to do much in the way of content
oversight. One reason for that is the ambiguity of the US' freedom of speech
laws. Another is that the only thing that clearly did need to be overseen,
pornography, is well defined and fairly easy to recognize, which means it’s
inexpensive to control. Some considerable amount of suppressing porn can now be
done with software and that aspect of technology will likely continue to
improve.
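To illustrate why a well-defined category is cheap to police, a toy blocklist check handles the clear-cut cases. This is a hypothetical sketch – the terms are placeholders, not any real filter list, and real systems add image classifiers on top – but the economics it shows are the point: a machine check costs fractions of a cent per post.

```python
# Hypothetical blocklist; the terms are invented placeholders.
BLOCKED_TERMS = {"bannedterm1", "bannedterm2"}

def is_blocked(post):
    # Well-defined content reduces to a cheap membership test:
    # no human reviewer is needed for the clear-cut cases.
    words = post.lower().split()
    return any(word in BLOCKED_TERMS for word in words)
```

Nothing here requires judgment, which is why this kind of enforcement scales without hiring anyone.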
Things like hate speech are a much more difficult
proposition. Facebook has been very reluctant to deal with speech in part
because of the lack of clarity in US law and in part because it doesn’t want to
hire a lot of people – something that’s contrary to its core economic
principles.
Now, the situation is different. US law, not to mention that
of the other 120+ nations, is still uncertain, but the public and Congress want
change so people are being banned. Predictably, some think the decisions are
good and some don’t. Where you stand depends on who you support. It’s going to
be messy for a long time.
So can software – that is, AI (artificial intelligence) – save the
day?
No.
Normal software starts with a simple kind of decision: If (this event or fact), Then (do this), Else (do that). It’s possible to string lots of these if-then-else
decisions into long complex pieces of software, usually called algorithms.
Normal algorithms are designed to handle a limited set of
conditions and are programmed to deal with all of them. For example, there are
only so many possible facts in a heating/air conditioning system and a
programmer can account for each one.
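A hypothetical thermostat controller makes this concrete – the target temperature and comfort band below are invented for illustration, not drawn from any real system:

```python
def thermostat(temp_f, target_f=70.0, band=2.0):
    """Return the action for a simple heating/cooling system.

    Every possible condition is known ahead of time, so the
    programmer can enumerate all of them with if-then-else.
    """
    if temp_f < target_f - band:      # too cold: run the furnace
        return "heat"
    elif temp_f > target_f + band:    # too hot: run the A/C
        return "cool"
    else:                             # inside the comfort band
        return "idle"
```

Feed it any reading and exactly one of the three branches fires; there is no input the rules fail to cover.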
The ability to know all possible conditions doesn’t exist in
many cases, however, and that’s where AI comes in. In the most common use of
the term, “deep learning,” the software is initially trained with examples that
the programmer is familiar with and then, based on that training, goes on to
derive additional rules on its own.
To illustrate, a programmer can train software on how to
recognize a stop sign, based on shape, color, and its usual position. The
training may even describe a situation in which the sign is in shadow or
obscured by a tree branch. But the training may not cover when it’s in shadow,
partially obscured, and twisted at an
angle. AI can create a rule to compensate for that, and fairly reliably.
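A minimal sketch of that training process, assuming three invented yes/no features for what a camera might report (this is a toy perceptron, not how a production vision system works):

```python
def train(examples, epochs=20, lr=0.1):
    """Learn a linear rule from labeled examples (a tiny perceptron)."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when the guess is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented features: (is_red, is_octagonal, is_roadside)
training = [
    ((1, 1, 1), 1),  # a clear stop sign
    ((1, 0, 0), 0),  # a red billboard
    ((0, 1, 0), 0),  # a gray octagonal logo
    ((0, 0, 1), 0),  # a roadside mailbox
]
w, b = train(training)
```

The learned weights are the "rule," and the programmer never wrote it down. Ask this model about an input the training never anticipated – say, a red roadside object that isn't octagonal – and it can still answer "stop sign," which is exactly the kind of surprise that only shows up once decisions are being made in the real world.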
The “fairly” is key here, however. The programmer doesn’t
know what rules the AI system has made. By their nature, these systems can’t be
tested in the lab. So, we learn what rules have been created only when
decisions are made out in the real world: e.g. the vehicle stops unexpectedly
because it thinks some similar shape is a stop sign.
This is a serious worry in considering things like
autonomous vehicles, and recent failures have caused the automobile industry to
dramatically ratchet back expectations. But it’s an impossible hurdle in the
world of speech, where variations of verbal and grammatical nuance, together
with the vast range of cultural reference, scale the number of possible rules to
unimaginable levels.
The fact is, actual people are going to have to do tasks
like evaluate hate speech. And they can’t be minimum wage types if you want
them to do a good job – it will take education and well-honed analytical skills
to make effective decisions. Finally, you'll need separate teams for each
culture, because what's religiously sensitive in one place might not be in
another. The teams will also have to be able to coordinate with each other as
needed.
So what’s the bottom line here?
First, Facebook will have to abstain from selling users’
data without their permission. It says it has done this. “Permission” will have
to be transparent and not the default. Government verification will be needed.
Second, breaking up FB is a fool’s errand. It was possible
with the telephone system to have separate companies operating independently
but still be fully connected in the sense that phone calls could move back and
forth. That isn’t going to happen with FB – break it up and market forces will
simply push for a new agglomeration. There’s a reason MySpace and Google+
failed.
Third, FB will have to reconsider its operating assumptions.
It will have to hire more and more people and will not be able to scale profits
infinitely. This will be a shock for the stock price but a very good lesson for
Wall Street as it looks toward the future.
Fourth, FB will have to ramp up its purchase of politicians;
that’s how things get done these days.
Don’t worry about this last piece of advice, though. Even if
FB does follow it, you shouldn’t expect they’ll gain much influence. At best,
they’ll tread water. The leaders in this sphere are the telecom companies
(AT&T, Verizon, et al). These companies already own lots of congresspeople
and recently managed to acquire a US Senator. When they whistle, their supposed
regulator, the FCC, sits up and begs.
The telecom companies were asleep when the Internet became
commercial and then tried and repeatedly failed to compete with Google and
Facebook. Still, they want more of the vast trove of advertising revenue and
don’t care if they have to use the power of government to make it happen. Fear
them, not Facebook.