Our March meeting of Health TechNet was held on Friday, March 15 from noon to 2 pm at Nelson Mullins’ offices in Washington, D.C.
Our feature topic was artificial intelligence (AI) and machine learning: a summary of the concepts and the issues they present, as well as some emerging security concerns they engender. The discussion was moderated by Joe Bormel, MD, MPH, and the primary speaker was Leo Scanlon, CISSP. Leo is currently employed by DHHS as a cybersecurity expert and has headed a number of risk management initiatives at federal agencies. He provided a basic set of definitions clarifying the terms AI, machine learning, and others, and how they differ; gave a quick review of the identified limitations (bias issues) in AI modeling; and then walked through the basic types of attacks that can be made on these systems and models.
One issue that was explored is that extant security frameworks are data-centric and provide a strong basis for assessing risk to systems that process data. AI introduces a new set of problems associated with the design and implementation of data models, which can be corrupted or attacked for the purpose of deliberately introducing bias into the model itself. There are no protocols yet for validating the integrity of data models, and this presents new challenges to academic researchers and security practitioners.
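To make the idea of corrupting a model concrete, the following is a minimal, hypothetical sketch (not drawn from the meeting itself) of a "data poisoning" attack: an attacker who can inject mislabeled training examples can shift a model's learned decision boundary so that it misclassifies points it previously handled correctly. The toy nearest-centroid classifier and the specific data points below are illustrative assumptions, not any real system discussed by the speakers.

```python
def centroid(points):
    """Average of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(data):
    """Toy nearest-centroid classifier: one centroid per class label."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(pts) for y, pts in by_label.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(2))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training data: class 0 clusters near the origin, class 1 near (10, 10).
clean = [((0, 0), 0), ((1, 1), 0), ((10, 10), 1), ((9, 9), 1)]
model_clean = train(clean)

# Poisoned data: the attacker injects copies of a mislabeled point near the
# class-0 region, dragging the class-1 centroid toward the origin.
poisoned = clean + [((2, 2), 1)] * 20
model_poisoned = train(poisoned)

probe = (3, 3)
print(predict(model_clean, probe))     # clean model: class 0
print(predict(model_poisoned, probe))  # poisoned model: class 1
```

The attack needs no access to the deployed model, only to its training data, which is why the integrity-validation gap noted above matters: standard data-security controls protect the records themselves, not the statistical behavior a model learns from them.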