Media AI Century

Microsoft CTO: To Be a Well-Informed Citizen of 21st Century, You Need to Understand AI


Microsoft CTO Kevin Scott

Microsoft CTO Kevin Scott believes that understanding AI will help people become better citizens.


“I think to be a well-informed citizen in the 21st century, you need to know a little bit about this stuff [AI] because you want to be able to participate in the debates. You don’t want to be someone to whom AI is sort of this thing that happens to you. You want to be an active agent in the whole ecosystem,” he said.


In an interview with VentureBeat in San Francisco this week, Scott shared his thoughts on the future of AI, including facial recognition software and manufacturing automation. He also detailed why he’s “cautiously optimistic” about the ways people will devise to use intelligent machines and why he thinks Cortana doesn’t need a smart speaker to succeed.


However vital staying informed about the evolution of AI may be to the average person in the century ahead, Scott concedes it’s not an easy thing to do.


“It’s challenging, because even if you’re a person with significant technical training, even if you’re an AI practitioner, it’s sort of challenging to keep up with everything that’s going on. The landscape is evolving really rapidly,” he said.


Technologists who make and use AI today also have a duty to help people understand what's possible and to make their work accessible, Scott says. To that end, he is writing a book about how AI can be a force for good in rural America's economy.


In recent years, AI has proliferated across health care and homes, as well as governments and businesses, and its continued expansion could redefine work roles for everyone. News and public education initiatives to help citizens understand AI are important, and technologists should make their work more accessible, but Scott believes it’s not enough for businesses using AI to be disruptive in their industry.


“We have to think about how there’s balance here,” he said. “You can’t just create a bunch of tech and have it be super disruptive and not have any involvement … you have to create value in this world, and it can’t just be shareholder value.”


A ‘cautiously optimistic’ view of facial recognition

One subject that has drawn much attention from average citizens and Microsoft is facial recognition software and the potential for government overreach.


On Tuesday, the American Civil Liberties Union (ACLU) — along with a coalition of human rights and other organizations — called for major tech companies, including Microsoft, to abstain from selling facial recognition technology to governments, because doing so would inevitably lead to misuse and discrimination against religious and ethnic minority groups.


Microsoft declined to respond directly to the letter but pointed to past actions that represent its point of view. Analysis last year found that facial recognition systems from Microsoft, as well as Face++ in China, did not recognize people with dark skin, particularly women of color, as accurately as they recognized white people. Just weeks after Microsoft improved the Face API's ability to identify people with dark skin tones last summer, President Brad Smith declared that the government needs to regulate facial recognition software. Then last month the company laid out six principles, including fairness, transparency, and accountability, to govern the use of its facial recognition software by customers such as law enforcement agencies and governments.


Microsoft is on track to implement the plan on schedule, Scott said.


Though facial recognition software could be used for nefarious purposes by businesses and governments and can drum up fears of technologically powered police states, Scott likes to think of the upside of its use cases.


“There’s this fine line between … that boundary; there are clearly some things that you just shouldn’t allow. Like, you shouldn’t have governments using it as a mechanism of oppression. No one should be using it to discriminate illegally against people, so I think it’s a good debate to have, but I’m usually on the cautiously optimistic side of things — I actually have faith in humanity,” he said. “I believe if you give people tools, the overwhelming majority of the uses to which they will be put are positive, and so you want to encourage that and protect against the negative in a thoughtful way.”

Potential positive use cases he cites include improving security in buildings, understanding who's in a meeting, and verifying that a person handling dangerous machinery is certified to do so.


He also offered a theoretical example based on what he observed when his wife was in the hospital last year. Just two nurses were tasked with managing an entire hospital recovery ward, where patients were prescribed a precise regimen of ambulatory activity.


A computer vision system assigned to this task could alert nursing staff if a patient was seen in common areas too often, signaling too much activity, or if they hadn’t been seen out of their room, indicating that they were not getting enough activity.
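To make the idea concrete, here is a minimal sketch of how that kind of alerting rule might be expressed in code, assuming a vision pipeline that emits per-patient sighting events. The thresholds, class names, and event format below are hypothetical illustrations, not a description of any actual Microsoft or hospital system.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, List


@dataclass
class Sighting:
    """A single hypothetical detection event from the vision pipeline."""
    patient_id: str
    location: str        # e.g. "common_area" or "room"
    timestamp: datetime


@dataclass
class ActivityMonitor:
    """Flags patients whose observed activity falls outside a prescribed range.

    The thresholds are illustrative placeholders, not clinical guidance.
    """
    max_common_area_visits_per_day: int = 6
    max_hours_without_leaving_room: float = 8.0
    sightings: Dict[str, List[Sighting]] = field(default_factory=dict)

    def record(self, sighting: Sighting) -> None:
        self.sightings.setdefault(sighting.patient_id, []).append(sighting)

    def alerts(self, now: datetime) -> List[str]:
        messages: List[str] = []
        day_ago = now - timedelta(days=1)
        for patient_id, events in self.sightings.items():
            recent = [e for e in events if e.timestamp >= day_ago]
            out_of_room = [e for e in recent if e.location == "common_area"]

            # Too much activity: seen in common areas more often than prescribed.
            if len(out_of_room) > self.max_common_area_visits_per_day:
                messages.append(f"{patient_id}: seen in common areas "
                                f"{len(out_of_room)} times in the past day")

            # Too little activity: not observed outside the room for too long.
            last_out = max((e.timestamp for e in out_of_room), default=None)
            if last_out is None or now - last_out > timedelta(
                    hours=self.max_hours_without_leaving_room):
                messages.append(f"{patient_id}: not seen outside their room in "
                                f"the last {self.max_hours_without_leaving_room} hours")
        return messages


# Example: one patient seen once in the morning, nothing since.
monitor = ActivityMonitor()
monitor.record(Sighting("patient_17", "common_area", datetime(2019, 1, 15, 8, 30)))
print(monitor.alerts(now=datetime(2019, 1, 15, 19, 0)))
# -> ["patient_17: not seen outside their room in the last 8.0 hours"]

In practice, the recognition itself would come from a facial recognition or person re-identification service; the point of the sketch is only the simple alerting logic Scott described, layered on top of whatever the vision system reports.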


In addition to a belief that understanding AI makes for more informed citizens, Scott emphasized that AI experts need to do more to share the positive outcomes that can come from technology like facial recognition software.


The Terminator often comes to mind in worst-case scenarios with AI, but sharing a Star Trek vision of the future is important too, Scott said, because telling positive stories helps people grasp those possibilities.


“Folks who are deeply in the AI community need to do a better job trying to paint positive pictures for folks, [but] not in a Pollyanna way, and not ignoring the unintended consequences and all the bad things that could be amplified by AI,” he said.


Source: AI Trend
