Like most industries, the health care sector is grappling with artificial intelligence (AI) and what it means for the future. At the same time, many health care companies have already integrated algorithms and AI applications into their service offerings and thus should be weighing the compliance risks associated with those tools.
While there has been limited AI-related enforcement to date, we can predict how AI-related enforcement may develop based on previous technology-related enforcement. We likewise can anticipate how relators and enforcement agencies might use AI to detect potential fraud and develop allegations based on how technology has already been used for these purposes.[1] The bottom line is that the use of AI to identify targets for enforcement, as well as AI-related allegations, likely will continue to evolve as rapidly as the technology itself.
Use of AI in Processing or Submitting Information to Federal Health Care Programs Without Appropriate Human Oversight
Enforcement actions involving companies that use algorithms and other technologies to process, review, and submit claims have been underway for years, and we expect similar enforcement actions against companies using AI tools. For example, health plans that use technology to review claims or to assess patient medical records for evidence of certain diagnoses have already been subject to enforcement. Typically, the fraud alleged relates to the use of algorithms or applications for...