Getting the right diagnosis can be as important as getting the right treatment. Diagnostic errors can harm patients by preventing or delaying appropriate treatment, by leading to unnecessary or harmful treatment, and by placing financial strain on the health care system or the patient. Diagnostic errors affect about one in 20 adults, or approximately 12 million people each year, according to original research in BMJ Quality & Safety.
A major part of the diagnostic process involves molecular diagnostics, which helps clarify the clinical pathway to the proper treatment(s). Molecular diagnostics is a technique used to analyze biological markers at the genetic level, which in turn can help clinicians determine appropriate therapies for individual patients (i.e., personalized medicine). Molecular diagnostics can help direct expensive therapies to the patients in whom they can actually improve outcomes, and in the process save the health care system significant money.
The field of molecular diagnostics, however, appears to be moving much faster than the regulatory and payer environment can keep up with. A significant portion of these diagnostics are developed as lab developed tests (LDTs), which are typically designed and performed within a single U.S. laboratory using proprietary testing methods and algorithms.
Recently, there has been significant back and forth between the FDA and the clinical community (specifically the Association for Molecular Pathology) over how these tests should be approved so that they can reach the market while ensuring safety and efficacy for patients. At the same time, the clinical community and industry have grown frustrated with how payment determinations are being made for these tests.
Why does this matter? The field's fast pace, combined with unresolved regulatory and safety questions, raises several concerns for regulators, health care payers and investors:
- Do these diagnostics provide clinical utility? In other words, does the test affect the patient's clinical outcome, e.g., by reducing mortality or morbidity? And is demonstrating clinical utility an absolute requirement?
- How does one define the value of such a test in order to pay for it properly? Is the current method of paying for diagnostic tests, which is based on the cost of running the test, outdated?
These two questions remain unresolved, and the resulting uncertainty has caused early-stage investment to essentially dry up. Investors do not like ambiguity. This, in turn, has slowed innovation in the field.
Today, Jan. 5, 2015, from noon to 1:00 p.m. EST, the “Business of Health Care” show on Sirius XM Business Radio Powered by the Wharton School, channel 111, will address some of these issues with experts in FDA regulation, reimbursement, clinical development and market access. It should be a very interesting discussion of where we are headed and what needs to be done to ensure the continued growth of this important market. Tune in.
Editor’s note: Find out more about Sirius XM station 111, Business Radio Powered by the Wharton School, about subscriptions and about all of its programs by visiting https://businessradio.wharton.upenn.edu/.