Bits: The Week in Tech: We Might Be Regulating the Web Too Fast

Each week, we review the week’s news, offering analysis about the most important developments in the tech industry. Want this newsletter in your inbox? Sign up here.

Hi, I’m Jamie Condliffe. Greetings from London. Here’s a look at the week’s tech news:

Web regulators are getting into their groove. But are they going too quickly?

This past week, the British government proposed new powers to issue fines and make individual executives legally liable for harmful content on their platforms. My colleague Adam Satariano said it “would be one of the world’s most aggressive actions to rein in the most corrosive online content.”

Days earlier, Australia passed legislation that threatens fines for social media companies that fail to rapidly remove violent material. And there’s a growing pipeline of other internet regulation, along with existing laws like the European Union’s sweeping General Data Protection Regulation.

“We’re entering a new phase of hyper regulation,” said Paul Fehlinger, the deputy executive director of the Internet and Jurisdiction Policy Network, an organization established to understand how national laws affect the internet.

This flurry of content rules is understandable. Much of the material they would police is abhorrent, and social media’s rapid rise has caught lawmakers off guard; now the public wants something done.

But the regulations could have unintended consequences.

Difficulties in defining “harmful” mean governments will develop different standards. In turn, the web could easily look different depending on your location — a big shift from its founding principles. (This is already happening: The Chicago Tribune’s website, for example, doesn’t comply with the General Data Protection Regulation, so it can’t be accessed from Europe.)
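For a sense of how this kind of geographic blocking works in practice, here is a minimal sketch in Python. It assumes a Flask app, a hypothetical lookup_country() GeoIP helper and an abbreviated list of EU country codes; it is illustrative only, not how the Tribune’s publisher actually implements its block.

```python
# Minimal sketch of GDPR-driven geoblocking. Assumes Flask; the
# lookup_country() helper is hypothetical and would normally query
# a GeoIP database. Not a description of any real site's setup.
from flask import Flask, request

app = Flask(__name__)

# Abbreviated for illustration; a real deployment would list every
# EU/EEA member state.
EU_COUNTRY_CODES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}

def lookup_country(ip_address: str) -> str:
    """Hypothetical GeoIP lookup returning an ISO country code."""
    return "US"  # placeholder; swap in a real GeoIP query

@app.before_request
def block_eu_visitors():
    # Returning a response here short-circuits normal routing.
    if lookup_country(request.remote_addr) in EU_COUNTRY_CODES:
        # 451 is the HTTP status "Unavailable For Legal Reasons."
        return ("This content is unavailable in your region.", 451)

@app.route("/")
def index():
    return "Welcome."
```

The practical upshot is the fragmentation described above: the same address serves a page in one country and a legal notice in another.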

There may also be less-visible effects. If regulations required changes at the hardware level, that could fragment the internet’s underlying infrastructure, said Konstantinos Komaitis, a senior director at the nonprofit Internet Society, which promotes the open development and use of the internet. A fragmented network could be less resilient to outages and attacks.

And bigger, richer companies will find it easier to comply with sprawling regulation, which could reinforce the power of Big Tech.

“There is a major risk that we end up in a situation where short-term political pressure trumps long-term vision,” Mr. Fehlinger said.

Mr. Komaitis said avoiding unintended consequences was “very simple, yet very difficult.”

“It is all about collaboration,” he added. The idea: lawmakers work together across borders to ensure rules are more consistent.

The challenge is that collaboration could slow the pace of regulation that lawmakers desire. But Mr. Komaitis said many proposed regulations lack clear plans for implementation, and he envisions snags when governments come to apply them. If they struggle, he said, collaborating and sharing expertise may be the only way to make their plans work.

[How is technology blurring the lines between public and private? Sign up for Charlie Warzel’s new limited-run newsletter to find out more — and what you can do to stop it.]

Artificial intelligence could make our lives easier and more efficient. But, like any powerful technology, it’s more complicated than that. A.I. can be used for surveillance and to control autonomous weapons. It can be biased. It could erode jobs. The list goes on.

None of those are reasons to reject A.I. outright. But they underscore how its development must be approached with care.

Big Tech has struggled to publicly demonstrate that care. Amazon, Google and Microsoft have all drawn criticism for their A.I. work with military and government agencies. Just this month, Google’s plan to create an A.I. ethics board ended disastrously when backlash over the choice of board members forced its dissolution.

Missteps should be called out, especially when they’re made by such powerful corporations. But in an emergent field, mistakes also serve as lessons. And a new set of A.I. ethics guidelines from the European Commission is a good example of how trial and error will be a fundamental part of ethical A.I. development.

The guidelines, developed by 52 experts, contain seven requirements that A.I. systems should meet to be deemed trustworthy. What stands out about them for Charlotte Stix, a policy officer at the Leverhulme Center for the Future of Intelligence at Cambridge University, is that they’re designed to be carried out.

Unlike other A.I. ethics guidelines, they attempt to join ethical principles to concrete recommendations — an approach that has divided opinion among people working in the field. That’s why the European Commission hopes companies will adopt and test them between now and 2020, so that they can be improved.

Frank Buytendijk, a vice president in Gartner’s data and analytics group, said the guidelines sent a message to big tech companies that may have struggled with A.I. ethics in the past: “Here’s your chance to do the right thing.”

Amazon joined the ranks of tech companies wanting to blanket the world with the internet.

Its Project Kuiper plan, which came to light in filings made by the Federal Communications Commission, would put 3,236 satellites into low Earth orbit to deliver the internet to “underserved communities around the world.” It has been likened to a network under development by SpaceX. Facebook has plans for a similar system, and Google has teamed up with the satellite operator Telesat along the same lines.

These initiatives seek to provide affordable internet connections to people who currently lack them — from Alaska to sub-Saharan Africa. Among the competing approaches, satellites appear to be the front-runner: Christopher Mims of The Wall Street Journal noted that, according to the satellite industry veteran Shayn Hawthorne, “some kind of affordable satellite internet now appears inevitable.”

For big tech companies, this is not selfless. The Free Basics program offered by Facebook’s Internet.org initiative shows why: it is a zero-cost data service that provides access in some developing countries to a curated group of websites, including Facebook. In locations that are essentially untapped markets, providing the web can help Facebook secure new users.

Other Big Tech players are unlikely to want to miss out.

Does privacy matter? What do companies know, and how do they know it? And what can we do about it? The New York Times is trying to answer those questions for you.

What you’ve heard about Chinese A.I. may be bluster. Jeff Ding from Oxford University says the West has overhyped China’s abilities.

China might ban Bitcoin mining. A government agency added it to a list of industries that it proposes to eliminate.

It’s all I.P.O.s. Uber and Pinterest are the next big tech unicorns to go public in the coming month.

Amazon’s cloud might be a potent spy tool for the United States. That’s why it’s unsuitable for storing German police data, Germany’s top data protection officer told Politico.

Big Tech’s data tricks are under scrutiny. Facebook will change its terms of service to explain its use of data in response to pressure from the European Commission. And Senators Mark Warner and Deb Fischer introduced a bill aimed at clamping down on user-interface tricks that encourage data sharing.

Amazon or Microsoft will build the Pentagon’s cloud. They were the only bidders to meet the “minimum requirements” to win the $10 billion contract. One will be chosen.

YouTube had to shut down comments on a House hearing about social media. The live stream of the event, focused on white nationalism, was overrun by racist comments.
