Navigating a Horrifying Possibility

Even though what humans have achieved to date remains largely unparalleled, the reality is that we carry plenty of flaws within ourselves. These flaws seep into everything around us, and their effects can leave a hugely negative mark on our lives. Hence, in a bid to keep such situations from recurring, we have devised a wide assortment of mechanisms, all designed to either mitigate or completely nullify the consequences of our imperfections. The attempt that has enjoyed the biggest impact by far, however, is a creation called technology. Technology was never just about concealing those flaws; right from the get-go, it has been focused on raising our ceiling. In hindsight, we can say the creation has achieved its goal, but is that actually the whole truth? While technology's contribution to our day-to-day lives is unquestionable, this generational aid comes at the cost of making us more vulnerable than ever before, and we now have yet another example to validate the claim.

According to a recent study conducted by researchers at the University of Pittsburgh, artificial intelligence programs specifically designed to detect cancer are highly prone to cyberattacks. The study details how a piece of computer software may be all it takes for a hacker to add or remove evidence of cancer from mammograms. This, as you would guess, can lead medical professionals toward dangerously incorrect diagnoses. In the research team's experiment, the altered images reportedly fooled both the AI tool and human radiologists.

It must be noted that such attacks haven't yet appeared in the wild, but even a remote possibility should concern everyone, especially since the hacking community already has a long history of targeting hospitals. Some attacks have been structured mainly around stealing patient data, whereas plenty of others have locked down hospital systems until a ransom is duly paid. If threat actors start taking shots at cancer-detection systems, they gain even greater leverage, as it becomes a question of life and death.

Following the experiment, the research team at the University of Pittsburgh is now turning its attention to constructing methods for defending against these scenarios.

“One direction that we are exploring is ‘adversarial training’ for the AI model,” explained Shandong Wu, senior author of the study. “This involves pre-generating adversarial images and teaching the model that these images are manipulated.”
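The idea Wu describes can be sketched in a few lines. The snippet below is a hypothetical illustration, not the study's actual method: it uses a toy logistic-regression "detector" on synthetic data in place of a real mammogram model, and crafts the manipulated inputs with the Fast Gradient Sign Method, one common way to generate adversarial examples. All function names and parameters here are invented for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

def fgsm_perturb(w, x, y, eps):
    """Shift x by eps per feature in the direction that raises the loss
    for its true label y (the optimal L-infinity attack on a linear model)."""
    grad_x = (sigmoid(w @ x) - y) * w   # gradient of log-loss w.r.t. the input
    return x + eps * np.sign(grad_x)

def train(X, y, adversarial=False, eps=0.5, lr=0.1, epochs=100):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(epochs):
        batch = list(zip(X, y))
        if adversarial:
            # Pre-generate manipulated inputs and teach the model their true labels.
            batch += [(fgsm_perturb(w, x, t, eps), t) for x, t in zip(X, y)]
        for x, t in batch:
            w -= lr * (sigmoid(w @ x) - t) * x   # plain SGD on log-loss
    return w

# Synthetic "scans": one strongly informative feature plus nine weak ones,
# standing in for benign (0) vs. malignant (1) findings.
rng = np.random.default_rng(1)
mu = np.array([1.0] + [0.2] * 9)
X = np.vstack([rng.normal(-mu, 0.2, (60, 10)), rng.normal(mu, 0.2, (60, 10))])
y = np.array([0] * 60 + [1] * 60)

w_plain = train(X, y)
w_robust = train(X, y, adversarial=True)

def clean_accuracy(w):
    return np.mean([(sigmoid(w @ x) > 0.5) == t for x, t in zip(X, y)])

def adv_accuracy(w, eps=0.5):
    return np.mean([(sigmoid(w @ fgsm_perturb(w, x, t, eps)) > 0.5) == t
                    for x, t in zip(X, y)])
```

Both models classify clean data well, but the adversarially trained one tends to hold up better when the same manipulation is applied at test time; against a real mammogram model, the perturbations would of course target a deep network rather than a linear classifier.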
