Researchers have uncovered gaps in Amazon’s skill vetting process for the Alexa voice assistant ecosystem that could allow a malicious actor to publish a deceptive skill under an arbitrary developer name and even make backend code changes after approval to trick users into giving up sensitive information.

The findings were presented on Wednesday at the Network and Distributed System Security Symposium (NDSS) by a group of academics from Ruhr-Universität Bochum and North Carolina State University, who analyzed 90,194 skills available in seven countries: the US, the UK, Australia, Canada, Germany, Japan, and France.

Amazon Alexa allows third-party developers to create additional functionality for devices such as Echo smart speakers by configuring “skills” that run on top of the voice assistant, thereby making it easy for users to initiate a conversation with the skill and complete a specific task.
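To illustrate the development model described above, the sketch below shows a minimal custom skill backend using Amazon's ask-sdk-core library for Python, which handles the launch request fired when a user invokes the skill by name. The skill name "Ride Hailer" and the greeting text are hypothetical examples, not taken from the research.

```python
# Minimal sketch of an Alexa custom skill backend (Python, ask-sdk-core).
# Assumes deployment as an AWS Lambda function; the skill name and
# responses below are illustrative placeholders.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type


class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user says, e.g., 'Alexa, open Ride Hailer'."""

    def can_handle(self, handler_input):
        # Match the LaunchRequest sent when the skill is invoked by name.
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # Speak a greeting and keep the session open for a follow-up turn.
        speech = "Welcome to Ride Hailer. Where would you like to go?"
        return (
            handler_input.response_builder
            .speak(speech)
            .set_should_end_session(False)
            .response
        )


# Register the handler and expose the Lambda entry point.
sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
lambda_handler = sb.lambda_handler()
```

Because the skill's code lives on the developer's own backend rather than on the device, the voice interaction model (invocation name, intents) is what Amazon certifies, while the server-side logic remains under the developer's control.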

Chief among the findings is the concern that a user can activate the wrong skill, which can have severe consequences if the skill that’s triggered is designed with insidious intent.

The pitfall stems from the fact that multiple skills can have the same…

