One major variable med tech companies face today is the unpredictability of FDA reviewers. What works for one company doesn't necessarily work for the next. Why is this so?
Is it because of inconsistencies in the technologies being developed? Variances in the data being collected? Or is it inconsistency across the spectrum of reviewers within the FDA?
A blog recently posted by the Emergo Group focused on the topic of resolving internal medical device review disagreements within the CDRH. The article explained how inconsistencies in resolving those disagreements lead to inconsistent rulings within the agency.
Suggested resolutions to these inconsistencies include:
- Better training on current procedures and policies. (New guidelines and guidances are always coming out of the FDA, so it is important for reviewers to have full knowledge of all current content.)
- Stronger definitions and clearer lines of responsibility
- Greater accountability for documenting resolutions. (Oftentimes when there is a dispute between a company and the FDA, there is a lot of informal communication. Proper documentation of emails and other correspondence should be kept by the FDA, especially if a resolution comes out of these exchanges.)
In my mind, transparency is key within the FDA, specifically the CDRH. Reviewers should be made aware of relevant cases handled by other review groups covering different types of technologies and devices. Each reviewed submission could be a learning opportunity for everyone within the organization.
Consistency comes from learning across all of these instances. The more reviewers can learn from previous submissions, the more consistent the organization can be.
--Jillian F. Walker