Bits: The Week in Tech: Facebook Is Desperate to Shape Tech Regulation. Should It?

Each week, we review the week’s news, offering analysis about the most important developments in the tech industry. Want this newsletter in your inbox? Sign up here.

Hi, I’m Jamie Condliffe. Greetings from London. Here’s a look at the week’s tech news:

Mark Zuckerberg has a vision that he’d like Congress to share.

Lawmakers seem to agree that there are too few tech companies with too much power, and that Big Tech needs to be regulated. What they can’t agree on is how to do it: Break companies up, tax them more, apply privacy rules, remove liability protections. The list goes on. There are lots of options, none particularly well developed, and little consensus on what might work.

Mr. Zuckerberg, the chief executive of Facebook, appeared to take advantage of this when he made his own proposal for regulation in a Washington Post op-ed. According to Corynne McSherry, the legal director of the Electronic Frontier Foundation, a digital rights group, Mr. Zuckerberg’s proposals were “an effort to get ahead” of the many suggestions already on the table and “shape what happens down the pike.”

The proposals were not earth-shattering; most had been suggested by policymakers already, or even put into effect in other parts of the world. But Mr. Zuckerberg fleshed them out into one of the more actionable calls for tech regulation that we’ve yet seen.

They may seem like an appealing first draft of the rules to some lawmakers. An obvious question is: Should they be?

“Legislation is written by legislators, and that is what they’re elected to do — no industry can replace that,” said Jason Oxman, the chief executive of the Information Technology Industry Council, a trade association. He added that “the tech industry will want to make its voice heard,” and that “it’s certainly our hope that Congress will balance a need to encourage innovation” with issues like maintaining privacy.

Ms. McSherry said she was “very concerned” about the idea that policymakers might look to the chief executives of Silicon Valley for guidance on what they should do.

“It’s sort of asking the foxes how best to guard the henhouse,” she said.

My colleague Mike Isaac read between the lines of what Mr. Zuckerberg wrote. And all of his proposals appear to suggest regulation that would strengthen Facebook — by allowing it to squash competitors, say, or by making it easier to meld its main platform with its other properties, Instagram and WhatsApp.

If regulation is to genuinely curb Big Tech’s power, Congress may be better served by spending less time on theatrical tech C.E.O. testimony and more time listening to the technologists who built the systems they’re trying to regulate and to the people who use them.

Don’t panic, but some stickers could help run an autonomous car off the road.

Driverless cars use artificial intelligence, often based on so-called neural networks, to interpret camera images. Such software can identify a stop sign. But small visual tweaks to an object, barely observable to humans, can fire connections in neural networks to convince them they’re looking at something else, like a speed-limit sign.
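The underlying idea can be sketched with a toy example. The model below is a hypothetical linear classifier, nothing like a real driving system, but it shows the same failure mode: because the output is highly sensitive along the direction of the model's weights, a per-pixel change far too small for a person to notice can flip the decision.

```python
# Toy "adversarial example": a tiny, weight-aligned nudge to every pixel
# flips a (hypothetical) linear classifier's decision. Attacks on real
# neural networks use the same idea, with gradients in place of weights.

def score(weights, bias, pixels):
    """Positive score -> 'stop sign'; negative -> 'speed-limit sign'."""
    return bias + sum(w * p for w, p in zip(weights, pixels))

def perturb(weights, pixels, eps):
    """Shift each pixel by eps against its weight's sign, the direction
    that lowers the score fastest (the sign trick behind FGSM-style attacks)."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [0.05] * 100      # a 100-"pixel" image; each weight is small
bias = -1.9
pixels = [0.4] * 100

clean = score(weights, bias, pixels)        # 0.1 -> read as a stop sign
dirty = score(weights, bias, perturb(weights, pixels, eps=0.03))
# Each pixel moved by only 0.03, yet the score swings by 0.05 * 0.03 * 100
# = 0.15, crossing zero: the same image is now read as a speed-limit sign.
print(clean > 0, dirty < 0)  # True True
```

Real networks are nonlinear, but the same sensitivity survives, which is why physical tweaks like stickers can stand in for pixel edits.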

Researchers from the Keen Security Lab, part of the technology company Tencent, reported that a similar trick forced an autonomous car to switch lanes. Tesla’s Autopilot software, they showed, uses only camera data to detect lanes, and small visual tweaks to a road surface could cause its neural networks to think road markings were veering. In tests, stickers on a road caused a Tesla Model S in Autopilot mode to swerve into another lane.

Tesla said in a statement that “a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so.”

But the research demonstrates the fragility of current A.I. “You would expect the system to behave robustly,” said Marta Kwiatkowska, a professor of computing systems at the University of Oxford. “If you change the input very slightly, the output should change only very slightly.”

Solving the problem isn’t just about improving A.I. The entire system needs to be made more robust by creating additional checks using other sensors, Ms. Kwiatkowska said. “We need to engineer these systems better,” she said.

There’s a lot of awful stuff online. Not liking it is easy. Actually overcoming it is hard.

Silicon Valley workers are aware of the problem. On Tuesday, Bloomberg published a report that said YouTube executives had ignored employee proposals to “change recommendations and curb conspiracies.” Their suggestions were reportedly “sacrificed for engagement.”

Executives, like YouTube’s chief product officer, Neal Mohan, insist that the algorithms may be designed to keep people watching but not to recommend things like extremist content, even if that’s what they end up doing. After all, it “doesn’t monetize,” he said, because advertisers “don’t want to be associated with this sort of content.”

But recommend they do, so calls for rules that force content takedowns are unsurprising. The most recent: a law passed in Australia on Thursday, which threatens big fines for companies if they fail to swiftly remove violent material.

Sadly, it may not be that easy to solve the whole harmful content problem:

■ First, defining “harmful” is a minefield. As Ms. McSherry of the Electronic Frontier Foundation put it to me, one person’s harmful content can be another’s political expression.

■ That feeds a second issue: In the United States, moves toward mandatory takedowns of specific kinds of content could quickly run afoul of the First Amendment.

■ Third, the technical challenge is huge. Tech companies have struggled for years to do it with the most troubling of content, and promises that A.I. will solve the problem have yet to bear fruit.

None of this excuses the proliferation of harmful content online. But it helps explain why progress is so slow.

Facebook user data spilled out again. Researchers found records for hundreds of millions of users stored publicly by a third party on Amazon’s cloud servers. (Also: Facebook might start a news service.)

Wall Street is getting cold feet over cryptocurrencies. The plans of big banks are faltering, showing how hard it is to take a fringe technology into mainstream finance.

What actually happens in a venture capital pitch? This, according to Wired.

Washington is preparing for a 5G future including Huawei. The world isn’t heeding America’s security concerns, so the government is reportedly coming to terms with using “dirty networks.” (Also: Chicago and Minneapolis became the first cities in the United States to have 5G.)

The biggest lobbyist for New York’s congestion charge? Uber, which spent $2 million backing the tolls. It’s trying to fend off claims that it causes congestion.

A.I. experts want Amazon to stop selling facial recognition software to law enforcement. They argue that it is biased against women and people of color.

Banning Netflix from the Oscars could violate antitrust laws. An exclusion that crimps movie sales could be anticompetitive.

Want a smart home but don’t know where to start? Allow us to be your guide.

Jamie Condliffe is editor of the DealBook newsletter. He also writes the weekly Bits newsletter. Follow him on Twitter here: @jme_c.
