Navigating a Horrifying Possibility

For all that humans have achieved, we remain a deeply flawed species, and those flaws seep into everything we build, sometimes with serious consequences. Over time, we have devised countless ways to mitigate or neutralize the effects of our imperfections, but none has had a bigger impact than technology. Technology was never just about concealing those flaws; from the very start, it has been about raising our ceiling. By most measures it has succeeded, yet that success carries a catch. While technology's contribution to our day-to-day lives is unquestionable, it also leaves us more vulnerable than ever before, and we now have yet another example to validate that claim.

According to a recent study conducted by researchers at the University of Pittsburgh, artificial intelligence programs designed to detect cancer are highly susceptible to cyberattacks. The study details how a piece of software is all it might take for a hacker to add or remove evidence of cancer from mammograms, potentially steering medical professionals toward a dangerously incorrect diagnosis. In the research team's experiment, the manipulated images fooled not only the AI tool but also human radiologists.
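
The manipulations described here are, in effect, adversarial examples: tiny, targeted changes to an image that are barely visible to a human yet large enough to flip a classifier's output. As a rough illustration of the general idea (not the study's actual method), here is a minimal sketch of the classic fast gradient sign method in PyTorch, assuming a hypothetical batched mammogram classifier named `model`:

```python
# Illustrative sketch only: a generic FGSM perturbation, not the Pittsburgh
# team's technique. Assumes a hypothetical PyTorch classifier `model`, a batch
# of image tensors `image` scaled to [0, 1], and integer class labels `label`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.02):
    """Return a copy of `image` nudged in the direction that increases the
    classifier's loss; a small step can be enough to flip its prediction."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step along the sign of the input gradient and keep pixels in range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The point of the sketch is simply that the change is bounded by a small `epsilon`, which is why the altered scans can look unremarkable to the human eye while still shifting the model's verdict.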

It must be noted that such attacks haven't yet appeared in the wild, but even a remote possibility should concern everyone, especially since the hacking community already has a long history of targeting hospitals. Some attacks have been built mainly around stealing patient data, while plenty of others have sought to lock down hospital systems until a ransom is paid. If threat actors start taking aim at cancer-detection systems, their leverage only grows, because it becomes, quite literally, a question of life and death.

Following the experiment, the research team at the University of Pittsburgh is now turning its attention to developing defenses against these scenarios.

“One direction that we are exploring is ‘adversarial training’ for the AI model,” said Shandong Wu, senior author of the study. “This involves pre-generating adversarial images and teaching the model that these images are manipulated.”
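
To make the quoted idea concrete, here is a hedged sketch of what a single adversarial-training step can look like, reusing the hypothetical `fgsm_perturb` helper from the earlier snippet; the actual defenses the Pittsburgh team is exploring may well differ:

```python
# Sketch of one adversarial-training step under the same hypothetical PyTorch
# setup as before: perturbed images are generated on the fly and the model is
# trained to classify both the clean and the manipulated versions correctly.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.02):
    # Generate adversarial counterparts of the current batch.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Penalize mistakes on clean and adversarial images alike.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design intuition matches Wu's description: by repeatedly showing the model manipulated images during training, it learns to treat them as what they are rather than as genuine evidence of cancer, or of its absence.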
