Maybe not.
While artificial intelligence ("AI") can streamline processes, it remains vulnerable to cyberattacks and manipulation.
We can help your practice get up to date and prepared to minimize breach risks from third-party vendors. Subscribe to stay current on important matters that will impact your practice. (To subscribe to our blog, click here.)
Adversarial images, inputs that have been deliberately altered to mislead a model, are used to test AI systems and determine whether they can be trusted.
One recent study showed that adversarial images could trick an AI model developed to diagnose breast cancer: a simulated attack falsified mammogram images, and both the model and the human experts were fooled. If such an attack occurred in the real world rather than a simulation, it could be very dangerous to patients, leading to incorrect or missed diagnoses.
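For technically inclined readers, the sketch below illustrates one well-known adversarial technique, the fast gradient sign method. This is a generic illustration written in Python with the PyTorch library, not the specific attack used in the study, and the `model`, `image`, and `label` inputs are placeholders for any image classifier and its data.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Compute the loss gradient with respect to the input image.
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases
    # the loss. Because epsilon is small, the change is essentially
    # invisible to a human reviewing the scan.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even though the altered image looks identical to the original, a perturbation like this can be enough to flip the model's prediction, which is exactly why adversarial testing matters.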
While the use of AI technology has increased in recent years, security measures must keep pace.
AI tools should be continuously monitored and studied, and healthcare providers need to be aware of the potential pitfalls of working with AI.
Attacks on AI must be guarded against, just like other potential breaches.
If you are utilizing AI in your practice, it is essential that you protect it with adequate privacy and security controls.
Make sure that your risk assessments are continually updated to account for AI.
Know what protections are required and train staff to recognize red flags and concerns.
Contact Rickard & Associates today if you need help protecting your practice from AI liability.
We publish vital information on health law topics and news every Wednesday and Friday. To get this important information delivered directly to your mailbox, click here to subscribe.
Do you need help updating your Business Associate Agreement or negotiating contracts with third-party vendors? We can help. Contact us today about your Business Associate Agreement, your vendor contracts, or your other legal needs!