Navigating a Horrifying Possibility

For all that humanity has achieved, we remain deeply flawed, and those flaws seep into everything we build, sometimes with seriously damaging consequences. In a bid to keep the same mistakes from recurring, we have devised a wide assortment of safeguards designed to mitigate or outright eliminate the effects of our imperfections. No such effort has had a bigger impact than technology. Technology was never just about concealing those flaws; from the get-go, it has been focused on raising our ceiling. In hindsight, we can say the creation has achieved its goal, but is that the whole truth? While technology's contribution to our day-to-day lives is unquestionable, this generational aid comes at the cost of making us more vulnerable than ever before, and we now have yet another example to validate the claim.

According to a recent study conducted by researchers at the University of Pittsburgh, artificial intelligence programs designed to detect cancer are highly prone to cyberattacks. The study details how a piece of computer software may be all a hacker needs to add or remove evidence of cancer from mammograms, which, as you would guess, can steer medical professionals toward dangerously incorrect diagnoses. The altered images produced in the research team's experiment reportedly fooled both the AI tool and human radiologists.
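The article does not reproduce the researchers' attack code, but the general idea behind this kind of adversarial manipulation can be sketched with the well-known Fast Gradient Sign Method: a small, carefully chosen perturbation is added to an image so that it still looks unremarkable to the eye yet pushes a classifier toward the wrong answer. The model, label encoding, and epsilon value below are assumptions for illustration only, not the Pittsburgh team's actual pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.02):
    """Nudge each pixel in the direction that increases the classifier's loss,
    yielding a visually similar image that can flip the predicted diagnosis.
    A generic illustration, not the method used in the Pittsburgh study."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                    # e.g. scores for [benign, malignant]
    loss = F.cross_entropy(logits, label)    # label: ground-truth class index
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The key point the study makes is that perturbations of this kind can be small enough to escape notice while still being large enough to change what the model, and even a human reader, concludes.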

It must be noted that such attacks haven't yet appeared in the wild, but even a remote possibility should concern everyone, especially given the hacking community's long history of targeting hospitals. Some attacks have been built mainly around stealing patient data, while plenty of others have sought to lock down hospital systems until a ransom is duly paid. If threat actors start taking aim at cancer-detection systems, their leverage only grows, because it becomes a question of life and death, after all.

Following their experiment, the research team at the University of Pittsburgh is now turning its attention to building defenses against these scenarios.

“One direction that we are exploring is ‘adversarial training’ for the AI model,” said Shandong Wu, senior author of the study. “This involves pre-generating adversarial images and teaching the model that these images are manipulated.”
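Wu's description suggests a scheme in which adversarial images are generated ahead of time and the model is explicitly taught to treat them as manipulated. A minimal sketch of what such a training step could look like is below; the detector network, the CLEAN/MANIPULATED label scheme, and the way adversarial batches are pre-generated are all assumptions for illustration, since the team's exact setup is not detailed here.

```python
import torch
import torch.nn.functional as F

CLEAN, MANIPULATED = 0, 1   # hypothetical labels for a tamper detector

def adversarial_training_step(detector, optimizer, clean_batch, adversarial_batch):
    """One step of the kind of training Wu describes: the detector sees
    pre-generated adversarial images labelled as manipulated, alongside
    clean images, and learns to tell the two apart."""
    images = torch.cat([clean_batch, adversarial_batch])
    labels = torch.cat([
        torch.full((len(clean_batch),), CLEAN, dtype=torch.long),
        torch.full((len(adversarial_batch),), MANIPULATED, dtype=torch.long),
    ])
    optimizer.zero_grad()
    loss = F.cross_entropy(detector(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The adversarial batches themselves could be produced offline with an attack such as the perturbation sketch above, so the model is repeatedly exposed to manipulated mammograms during training rather than encountering them for the first time in the clinic.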
