The Future

Google’s AI lab got caught up in a privacy violation

Health care providers in the U.K. improperly shared patient data with DeepMind’s app.

1.6M
The number of patients whose personal data was illegally shared with Google's artificial intelligence lab, DeepMind
The U.K.’s Information Commissioner’s Office, or ICO, ruled today that a group of British health care providers improperly handled personal data in connection with a deal with DeepMind, the Google-owned AI company best known for the Go-playing AI AlphaGo.

The group, the Royal Free NHS Foundation Trust, was providing data for DeepMind’s Streams instant alert app in order to test a system intended to detect and diagnose acute kidney injury. Patients were not properly informed of how their data would be used, as the Data Protection Act requires, the ICO said.

“Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening,” Information Commissioner Elizabeth Denham said in a statement.

The Data Protection Act says personal information must be handled in a way that adheres to eight principles, according to the ICO: “fairly and lawfully processed; processed for limited purposes; adequate, relevant and not excessive; accurate and up to date; not kept for longer than is necessary; processed in line with your rights; secure; and not transferred to other countries without adequate protection.”

Both Royal Free and DeepMind issued statements accepting the ICO’s findings.