Good morning and thank you for joining us today at the NCI CBIIT Speaker Series.
I'm Warren Kibbe.
I'm the Director of the Center for Biomedical Informatics
and Information Technology here at NCI.
As a reminder, today's presentation is being recorded
and it will be available via the CBIIT website at cbiit.nci.nih.gov.
You can find information about future speakers on that site and by following us
on Twitter at NCI underscore NCIP.
Today, we're very happy to welcome Dr. Michael Liebman,
Managing Director of IPQ Analytics, LLC,
and of Strategic Medicine, Incorporated.
The title of his presentation is Real World Medicine and Real World Patients:
Critical Understanding of Translational and Precision Medicine.
And it's particularly a pleasure for me to introduce Michael because I've known
and worked on and off with Michael since the mid-1990s.
And at the beginning of our interactions,
I was doing bioremediation work and Michael was at Amoco.
He was very interested in doing what is now called computational biology--
really understanding, using computational tools
and powerful data, what makes organisms sick.
And, of course, in the intervening time, he has turned more
and more to clinical applications, and I guess I'll say I have, too.
So it's my pleasure to turn the presentation over to Dr. Liebman.
Michael.
>> Thank you, Warren.
And it's good to reconnect and have this opportunity to present
where we've gotten over these last years.
And then you'll see there are things that reached back at least
as far as our first connections.
So, what I'm going to present today is
a little bit of the issue of unknown unknowns.
And I'm going to use our disease modeling basis to give issues
and examples in a number of different categories.
And I'll show you how those tie together in just a minute.
Let me start off with a quote that I found-- I usually use at the end.
But basically in dealing with disease what we think about is the fact
that even though we have excellent technologies that we're developing
and capabilities in genomics and sequencing and the molecular biology side,
the enemy is the disease and the lack of knowledge that we have
at the same level of resolution and granularity.
And what I'd like to do is try to present some of those issues today.
It is a battle.
And if we look at Rumsfeld's comment, "There are known unknowns
and then there are unknown unknowns."
And what I'd like to try to do is expose you to some of the unknown unknowns
and take away some of that shield to let you see that there are ways
to tackle some of the bigger problems but first we have
to of course acknowledge that they even exist.
So the way we term it in terms of clinical need is the difference
between what we would call unmet clinical needs that a lot of people talk about.
That would be a new drug for an existing disease, like Alzheimer's.
And that's what we call a known unknown.
The issues that are unknown unknowns are what we call unstated
and unmet clinical needs.
And those are typically the areas that we either don't want to address
because of their complexity
or aren't necessarily able to speak to, and
yet they're always sitting there causing problems and reducing the ability
to apply some of these new technologies
to affect the health care system the way we'd like to.
So, just a brief background, I represent a small company.
My background has been half academic and half industrial or commercial,
and I'm involved in a number of different external activities.
We work in Europe and in China quite a bit.
But basically, I'm a modeler.
I come from a theoretical chemistry background but I don't really touch
that anymore because we're looking at things from the clinical side back.
What we're interested in is understanding the process of disease
and how it works and how it doesn't work, and what we can do to address it.
And this is a pretty simplified model of how we would approach disease,
looking at the idea of risk diagnosis,
stratification that leads to treatment and then outcomes.
And of course, this is how a patient progresses through disease.
Now, over the years, even reaching further back when I first met Warren
and extending till today, we had looked across many different kinds of diseases.
And these are projects we've done either academically or commercially.
But it gives you a very broad idea of how disparate some
of these diseases actually are and the different kinds
of problems we try to deal with.
And what we've been focused on is can we abstract from this some commonalities
and work with those commonalities to build up modeling approaches,
understand what data needs to be collected, and what real world issues may exist
that need to be confronted when we're trying to come up with better solutions
or new technologies or-- and eventually even new drugs or interventions.
And so that model that I showed you before has now evolved quite a bit.
And what we're looking at right now as our disease model looks like this.
And there are many different elements here, I'm not going to go into them,
but I'm going to give you, using this model,
some issues and examples of how we've identified these problems
and actually either borrowed or developed technologies to try
to address them based on how a patient exists in the real world
and how medicine is actually practiced.
Because that presents a gap that we don't always understand or appreciate
and what I'd like to do is help expose and educate some
of those critical issues during this presentation.
So, if you'll look at this, you'll also see that this is disease agnostic.
And that is critical because that enables us to actually apply it
in many different areas very quickly.
For those of you who are interested, it's been implemented as an ontology.
And this is very high level actually, because if you look at something
like perception of risk, what you see is we've actually got
about 16 different elements that contribute
to how a patient is developing a perception of risk
and all of these factors are apparent and active in any disease
but they're weighted differently based on the disease.
So what you would weight in terms of cancer is obviously going to be different
in heart failure or different in psoriasis.
But these factors are all active and it's only the weighting that changes.
And that's why we've incorporated these kinds of levels of resolution
into our model and into the ontology and the platform that allows us
to apply that very broadly.
This shows you an example that might be a little bit more familiar.
When we talk about the currency, for instance, of public awareness programs,
we can see how to bring in sociologic factors and other psychological factors.
And this is exactly where things
like the Angelina Jolie Effect come into play in breast cancer
but obviously not in other conditions.
That's what I mean by selective weighting, as you can see.
Now, we've applied the ontology, as I said, in developing a platform.
But I'll show you this ontology, which we also use;
that whole disease model replaces this segment right here.
And let me explain that when we use the term "ontology",
we're actually referring to what I call a pragmatic ontology.
So we're not talking about something at the level of resolution
of the semantic web, but we're talking about what, for example,
a clinical trial team would have to evaluate before deciding
to take a clinical trial forward.
And so what we have done is develop the concepts and relationships,
relationships such as you see here, in this pragmatic ontology that allows us
to very quickly generate natural language questions that work
in multidisciplinary teams.
And an example of this would be, if we're looking at the potential
for developing a clinical trial for using a specific drug in a specific disease
and some of the testing is being done in animal models.
The kinds of questions that are important
that aren't necessarily being asked right now is,
has that animal model ever produced a successful drug in that disease area,
or are there conditions that are not being reported that are being observed
in the actual patients who are receiving drugs
where that animal model has been used.
So, what we're trying to do is build an ontology from the perspective
of having concepts and relationships, as opposed
to what my academic colleagues would call a true ontology,
where we get really abstract.
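The question-generation idea can be sketched in a few lines: a pragmatic ontology is just concepts tied together by named relationships, each carrying a plain-language question template that a multidisciplinary review team can answer. All concept names, relationship names, and templates below are illustrative assumptions, not taken from the actual platform.

```python
# Sketch of a "pragmatic ontology": concepts linked by named relationships,
# where each relationship carries a natural-language question template.
# All names and templates here are illustrative, not the real system's.

class Relationship:
    def __init__(self, subject, predicate, obj, template):
        self.subject, self.predicate, self.obj = subject, predicate, obj
        self.template = template

    def question(self):
        # Render the template into a question a review team can answer.
        return self.template.format(subject=self.subject, object=self.obj)

ontology = [
    Relationship("animal model", "has_predicted", "approved drug",
                 "Has the {subject} ever produced a successful {object} "
                 "in this disease area?"),
    Relationship("animal model", "has_unreported_findings_in", "treated patients",
                 "Are there conditions observed in {object} that are not "
                 "being reported where the {subject} has been used?"),
]

questions = [r.question() for r in ontology]
for q in questions:
    print(q)
```

The point of keeping the templates on the relationships is that the same small graph can be re-queried for any new drug-and-disease pairing without rewriting the questions.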
So let me start off with the first category that we are looking
at in the disease process: the concept of risk.
Now, one of the things that we encountered very early on working
in breast cancer, of course, was the idea of looking at family histories
and potential genetic factors.
In most instances, when we talk about family history,
we would talk about potentially something that looks very much like a pedigree.
We have four offspring of a set of parents,
who represent a generation.
But in terms of real granularity in the real world,
what we need to understand is that each of those individuals
is of the same generation, but they don't live at the same time.
They've been born at different points in time, of course, unless they're twins
or triplets or quadruplets.
But they've been exposed, therefore,
to a number of factors that may be different among those individuals.
And so when we look at the statistical analysis of a generation,
it isn't necessarily adequate to be able to incorporate changes
in diagnostic codes or standards of care or individual diseases
that may have taken place at different points in time during their development.
And so what we've done is we've developed an object-oriented data structure
that allows us to incorporate all of this.
And that's how we basically have converted over into looking
at family history even at the level of the EHR or the EHRs that we develop
as opposed to those obviously that are more conventionally being applied.
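As a rough sketch of the object-oriented family-history idea described here: members of the same generation carry their own birth years and exposure windows, so era-dependent context, such as which diagnostic coding standard was in force at diagnosis, can be attached per individual rather than per generation. The class names, fields, and coding-era cut point below are all hypothetical.

```python
# Hypothetical object-oriented family-history structure: same generation,
# different birth years, so different exposure and coding-standard context.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Exposure:
    factor: str        # e.g. "smoking"
    start_year: int
    end_year: int

@dataclass
class FamilyMember:
    name: str
    generation: int    # same generation does not mean same birth year
    birth_year: int
    exposures: List[Exposure] = field(default_factory=list)
    diagnosis: Optional[str] = None
    diagnosis_year: Optional[int] = None

def coding_era(year):
    # Hypothetical cut point for a change in diagnostic coding standards.
    if year is None:
        return None
    return "ICD-9" if year < 2015 else "ICD-10"

siblings = [
    FamilyMember("A", 2, 1948, diagnosis="breast cancer", diagnosis_year=2001),
    FamilyMember("B", 2, 1961, diagnosis="breast cancer", diagnosis_year=2018),
]

# Same generation, yet diagnosed under different coding standards:
eras = {m.name: coding_era(m.diagnosis_year) for m in siblings}
```

A per-individual record like this is what lets a statistical analysis of "a generation" account for changes in diagnostic codes or standards of care that happened between the siblings' diagnoses.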
When we look at factors, then, that tie into this,
that gives us another perspective as well.
And that perspective is that we would look for risk factors
which are very common to try to consider,
but the way we ask about risk factors typically
in the clinical setting is do you smoke or how often do you smoke
and the same thing about alcohol and possibly about your weight.
But one of the things that we've observed--
and I ran a breast cancer center for the experimental side
at Windber Research Institute--
is that we have to remember that disease is a process.
And a process evolves over time.
And that's particularly critical when we're looking
at potentially chronic diseases.
And so, we need to start to incorporate the idea that risk is going
to be a function of exposure and amount of exposure over time or points in time
but we need to see how that interacts with the underlying developmental changes
in terms of physiologic differences or changes that an individual is presenting.
And so, what that means is, when we're looking at breast cancer,
we need to also understand the concepts of breast development
or the different landmarks in breast development.
And so, the way we've approached that is by looking
at the different stages of breast development.
I've only listed the major stages here but you can see
that in the prenatal period alone there are 10 distinct stages,
and then recognizing that at different stages of development we're going
to have different processes or pathways that are up or down regulated,
which means that we're going to have different levels of gene
and protein expression also being variable at those points in time.
The reason we're doing this is we'd like to be able
to collect data on not just whether you smoke but what your smoking pattern is
over your lifetime or your weight change or your alcohol consumption.
Because what we'd like to understand is when risk is being presented
and what are the underlying processes that are giving rise
to that particular risk being manifested in that individual.
We can easily imagine that smoking presents certain risks
at certain ages and certain conditions but very different profiles
in other conditions and other ages.
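One way to picture this interaction is to weight each exposure by the developmental stage in effect when it occurred, rather than summing lifetime exposure alone. The stages, weights, and age cut points below are invented purely for illustration.

```python
# Invented illustration: the same total exposure carries different modeled
# risk depending on the developmental stage during which it occurred.
stage_weight = {"puberty": 3.0, "adult": 1.0, "post-menopause": 1.5}

def stage_at(age):
    # Hypothetical stage boundaries, for illustration only.
    if age < 18:
        return "puberty"
    if age < 51:
        return "adult"
    return "post-menopause"

def weighted_risk(exposure_by_age):
    # exposure_by_age maps age -> exposure amount (e.g. packs/day),
    # capturing a "smoking pattern over a lifetime" rather than yes/no.
    return sum(amount * stage_weight[stage_at(age)]
               for age, amount in exposure_by_age.items())

# Same total exposure, different timing, different modeled risk:
early = weighted_risk({15: 1.0, 16: 1.0})
late = weighted_risk({40: 1.0, 41: 1.0})
```

This is the shape of argument behind the public-health point that follows: if the stage weights are real, controlling an exposure during one critical window matters more than controlling it across an entire lifetime.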
And the goal is two-fold.
In terms of public health, it's much easier to tell someone
to control their weight at a certain period of time
because of its critical nature and impact than it is potentially to say
that they should have to control their weight over their entire lifetime
or that smoking at certain ages may have differential effects as well.
And so this is the kind of modeling that we've tried to do
with the opportunity working with our military partners at Walter Reed
to actually collect this kind of granularity of data.
And then to apply it, what we've done is look at risk not just
in the statistical sense, as the Gail risk model does.
But now in terms of developmental features,
totally separate from those risk factors that were used by Mitchell Gail,
to understand on an individual basis and physiological development
of that individual what their personalized risk would be from parameters
that are tied to their specific development.
And what we've actually been able to do,
and are continuing to refine, is show that we can improve the risk factor
or the risk calculation using, in this case with our partners,
the National Research Council of Italy
and the data from the Nationalized Healthcare--
the Nationalized Health Program,
to actually look at how we would compare risk predicting capabilities using
developmental features versus features that are based
on statistical data analysis.
So that's one of the ways that we're trying to approach risk
by bringing real world parameters
and understand how real world patients will differ.
And of course, all of these, at the end of the day,
can be tied to and integrated with the kind
of genomic profiling that's taking place now but give us a richer profile
of some of the physical characteristics of the individual based on a lot
of their personalized developmental features.
A big problem in medicine that we find, and one of the significant issues
that we think really needs to be understood are the limitations of diagnosis.
Now, I use this schematic with my clinical colleagues,
who observe this pretty often.
And what I've done is I've only shown one biomarker
or one clinical variable just for simplicity.
But what we have are three different patients.
This generalizes to any number of variables or measurements.
And what we find is that two patients can look clinically or present clinically
to be very similar in terms of the standard testing that's being done.
But what you see is that they're actually on different disease paths.
And yet, they may get diagnosed similarly,
whereas this patient, who is in exactly the same condition and presentation
as this patient, may be diagnosed very differently than patient 1
because they came in for diagnosis at a different point in time in the disease.
In fact, what you see here is a comment from Margaret Hamburg
that clinicians have long observed that patients with similar symptoms
may actually have different diseases.
And so, this is a problem that clinicians have to deal with on a daily basis
and typically recognize this as an ongoing issue.
Now, we've actually generalized this a bit more.
We look at disease as being a process
and not a state even though we sometimes refer to a disease state.
That means, over time, this is a vector
in the number of dimensions that are being clinically measured.
And so it's a high-dimensional vector and, for simplicity,
I've only shown it as a single line.
But going back to what I just showed you before,
patients who come in at different points
in this disease process may receive a different diagnosis.
And with the diagnosis they receive the appropriate treatment,
and with that treatment they may have a differential response.
Now, this is particularly of concern when we're looking at chronic diseases.
Because when we're looking at a chronic disease--
and I'll use diabetes as an example--
what we have are patients who are changing developmentally in the exposure
or environment of the disease.
And so we actually have a complicated process,
because the disease process is interacting with those development changes,
which is why as I said before, we have to understand how development ties
in to disease and how it impacts that.
Now, mathematically, we can talk about this in a much richer way
than this diagram would suggest.
The directionality of this is a high-dimensional vector.
That's the disease stratification.
How far along this vector a patient has progressed is really what staging should
be addressing.
Staging of a disease is typically based on specific clinical markers
that are easy to observe, but the reality is it's a continuum
in this disease process and understanding where that patient is,
is critical for understanding how to manage them.
But the other factor that's rarely discussed is also the velocity
of progression.
A patient who may be at an advanced stage of the disease
but progressing very slowly will typically have to be managed differently
than a patient who was early in the disease but progressing very rapidly.
And so, what we're really saying is that disease can have
a more mathematical representation, in fact,
looking more like a tensor than a vector,
to understand how to take apart the data; but, of course, as we all know,
we don't always have the data we need to analyze with this kind of rigor
in disease stratification and diagnosis.
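The distinction between stage (position along the disease trajectory) and velocity (rate of progression) can be made concrete with a small sketch. The visit measurements and times below are invented for illustration.

```python
# Two patients traverse the same path through a (here, 2-dimensional)
# space of clinical measurements: same "stage", very different velocity.
from math import dist

def position_and_velocity(visits, times):
    """Arc length travelled along the measured trajectory, and mean speed."""
    position = sum(dist(a, b) for a, b in zip(visits, visits[1:]))
    velocity = position / (times[-1] - times[0])
    return position, velocity

path = [[1.0, 0.2], [1.2, 0.3], [1.4, 0.4]]   # invented visit measurements

# Same measurements, recorded over 4 years versus over 6 months:
p_slow, v_slow = position_and_velocity(path, times=[0.0, 2.0, 4.0])
p_fast, v_fast = position_and_velocity(path, times=[0.0, 0.25, 0.5])
# Identical position (stage), but the fast patient's velocity is 8x higher,
# which is the management distinction the talk is pointing at.
```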
The problem gets even more complicated when we look at the real world.
We know, as an example, that in diabetes and hypertension,
roughly 70 to 75% of patients who have one have the other.
And so, we're pretty confident when we're looking at a patient
who has either diabetes or hypertension,
that it's highly likely they will have the other
and they have to be managed accordingly.
Well, again, with our colleagues in Italy,
we did an analysis of a nationalized health record
which is part of the service in Italy.
And what we found is, on average,
these patients also had five or more other diseases.
And that's not something that's uncommon.
The comorbidities exist.
And as we've addressed it, comorbidities exist either
because they were previously diagnosed and are being managed, or we might say,
cured although they're really never cured.
Or, they may exist and our patient is under treatment, or they may even be,
as yet, undiagnosed comorbidities.
And this is a problem that we have to deal with.
But the problem gets manifested in the following way.
Everything I showed you about the complication of diagnosing
and treating a patient based on that first process,
now is further amplified in this complexity
by having the second disease process running also in that patient.
Because it's going to change the response to diagnosis,
the response to treatment, and the overall outcome of the patient
if it's not being considered. And one of the key issues to keep in mind is
that as we'll touch on later, again, guidelines that are currently used
in clinical practice very rarely are developed with the context
of comorbid conditions being very well-addressed.
The other issue in diagnosis that we should confront is the fact
that most diseases are either complex disorders or syndromes.
And what is a syndrome and why is that a problem for possibly looking
at applying some of our genomic methodologies to patients
with a specific diagnosis?
A syndrome basically means that a series
of 10 different symptoms have been observed.
And in this example, it's only required to have five of those symptoms
for a patient to receive that diagnosis.
But the problem is another patient with that same diagnosis,
may have had five different symptoms,
and yet they're still diagnosed the same way
because that's how we are classifying disease.
We're not yet able to take advantage of some
of that stratification that should take place.
And I'll show you how we've applied it in a bit,
but these are not necessarily the same patients even though they have the
same diagnosis.
And, of course, that's going to cause a problem.
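The "any five of ten symptoms" rule can be made concrete: there are 252 distinct qualifying presentations, and two patients can meet the criterion with no symptoms in common at all. A small sketch with generic symptom labels:

```python
# "Any 5 of 10 symptoms" gives C(10, 5) = 252 distinct presentations
# that all receive the same diagnosis.
from itertools import combinations

symptoms = [f"S{i}" for i in range(1, 11)]   # 10 generic symptoms
required = 5                                  # any 5 qualify

presentations = list(combinations(symptoms, required))
print(len(presentations))   # 252 ways to meet the same diagnosis

patient_a = set(symptoms[:5])   # S1..S5
patient_b = set(symptoms[5:])   # S6..S10
# Both receive the diagnosis, yet they share no symptoms at all:
assert len(patient_a) >= required and len(patient_b) >= required
assert patient_a.isdisjoint(patient_b)
```

That disjoint pair is the stratification problem in miniature: a single diagnostic label covering presentations that may not be the same disease at all.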
The Institute of Medicine, last year,
published a report that said that there's probably a 10% error
in diagnosis that's common throughout all of diagnostic procedures.
But we would say that that is actually much higher--
or should be considered much higher--
if we consider the fact that failure
to have adequate disease stratification is added to the concept of error,
because what we know is that within that diagnostic category,
these subgroups or substrata may not respond
to the same treatment in the same way.
And so having that general diagnosis,
still is not yet accomplishing what we need to get to in refining how
to understand the complexity of the presentation of the patient.
So now let me give you an example from some
of the work we were doing in breast cancer.
This is a slide I use to show a typical H&E staining on a patient
with breast cancer who has invasive ductal carcinoma and several areas of DCIS.
In many, if not most of the reports from pathology,
the invasive ductal carcinoma, of course, is a primary diagnosis.
And depending on the pathologist,
some degree of reference to the DCIS may take place.
In our project working with the clinical breast care program from Walter Reed,
we were able to take advantage of a pathologist, Jeff Hook [assumed spelling],
who had [inaudible] not only all of the underlying abnormalities in the tissue
but also the structures that were present in a given specimen.
And one of the reasons for doing that was how and where does one
define a tumor area for surgical procedures, and how do we know?
Because what we found when we analyzed these areas using things
like gene expression analysis was that this DCIS did not necessarily look like DCIS
in the presence of a different primary tumor, say atypical ductal hyperplasia.
And so, with that kind of variability, even with something like DCIS,
we wanted to get an idea of what's actually present in the tissue
in the patients that were being seen in this clinical program.
What we did was we developed a co-occurrence analysis using 131 possible
pathology diagnoses and we found
that on average each report had six individual diagnoses present in it.
This shows you a contouring effect, basically a heat map,
that we generated looking at the regions in red
that shows statistically significant co-occurrence with other features.
And what you see is that one feature, in this case, invasive ductal carcinoma,
shows five different clusters of co-occurrence patterns that are present,
whereas other kinds of co-occurrence never occur or are never present.
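The co-occurrence computation itself is straightforward to sketch on toy reports; the actual analysis used 131 diagnosis codes and a per-cell statistical significance test, both of which this illustration omits.

```python
# Toy co-occurrence analysis over pathology reports. Each report is the
# set of diagnoses present in one specimen (abbreviations illustrative:
# IDC = invasive ductal carcinoma, DCIS = ductal carcinoma in situ, etc.).
from itertools import combinations
from collections import Counter

reports = [
    {"IDC", "DCIS", "ADH"},
    {"IDC", "DCIS"},
    {"IDC", "LCIS"},
    {"DCIS", "ADH"},
]

cooccur = Counter()
for report in reports:
    for a, b in combinations(sorted(report), 2):
        cooccur[(a, b)] += 1

# ("DCIS", "IDC") co-occur in 2 of 4 reports; plotting these counts as a
# matrix (with a significance test per cell) gives the heat-map clusters
# described in the talk.
```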
Now we've analyzed this further and we have been able
to show significant distinction between women,
and in this case we're just showing a simple incidence of pre
and post-menopausal breast cancer
because we know pre-menopausal breast cancer tends to be more aggressive.
And you can see the kinds of differences in patterns that are present
in these individuals that are suggestive of the ability
to distinguish variations in disease or additional opportunities
for disease stratification that aren't necessarily being utilized
but are present even in something like the H&E staining results.
So, what we have been able to do, is further extend that and show that some
of these groups enabled us to distinguish patients who might respond
to certain specific kinds of treatment but also using a Bayesian analysis
to start to track what we think are the patterns
of progression through these different types of structures
that were being observed in this 131 pathology classification scheme.
The other thing we started to look at was the use of Her2 testing, in part
because I had been with Vysis when the FISH test was being developed
and was involved in trying to do some analysis of how
to actually use the test before Herceptin existed.
Now, what we know is the FDA has two tests,
or two testing modalities, that are certified for use with a drug like Herceptin.
One is immuno-histochemistry and the other is FISH.
They have observed false positive and false negative rates
but they also show a significant lack of concurrence
in about 20% of the patients.
One would say we've got two tests that you can use but you have to understand
that they don't measure the same thing.
They measure two different ends, potentially.
One is gene copy number and the other is the response to an antibody that looks
at protein level, two ends of a set of biological transformations.
And the reason that's critical is we're not sure all the time
which is a critical factor that should be utilized as primary.
That variation, though, is even more significant when you look
at how it distributes across the different IHC levels.
Here we have more benign disease and more aggressive disease,
and what we're observing is that the 20% is average
across all the classifications of IHC.
But what we're seeing is that is primarily variable in the intermediate ranges,
which are those that are most difficult
to determine how to manage in the patient.
And so this variation is much more disparate and starts to approach 40% rather
than the 20% overall characteristic.
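The arithmetic behind this point is a weighted average: discordance concentrated in the intermediate IHC categories can sit near 40% there while the overall figure still averages out near 20%. All the fractions below are invented for illustration.

```python
# Invented illustration: per-IHC-category patient fractions and IHC-vs-FISH
# discordance rates. The intermediate (2+) band, which is the hardest to
# manage, carries the bulk of the disagreement.
categories = {   # IHC score: (fraction of patients, discordance with FISH)
    "0":  (0.35, 0.05),
    "1+": (0.25, 0.12),
    "2+": (0.20, 0.40),
    "3+": (0.20, 0.28),
}

overall = sum(frac * rate for frac, rate in categories.values())
# The overall discordance averages near 20%, even though the 2+ band
# alone approaches 40%.
```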
So, that tells us that we need
to better understand what a Her2 test is actually looking at.
And what Her2 positive is, that gets into work that we're doing now
in triple-negative breast cancer in another new study.
So let's go now back and look at disease stratification itself.
I showed you this slide earlier
and I talked about how this vector becomes important to start
to understand how a disease subtype is progressing.
A number of years ago, we developed an algorithm that actually allowed us
to take patients and create trajectories through these patients
under different conditions, and be able to use this in both a prognostic fashion,
so that we could identify that a patient here, left untreated, was likely
to progress to a certain degree or in a certain manner.
But also, even more importantly,
we could start to look at how early could we have detected a patient might be
on a specific path.
And so, the idea was-- and is--
to be able to use this kind of stratification not just to understand
where a patient is headed but how early could we have detected they were
on that particular path.
Even possibly, as we've seen in some work we're doing in frailty,
at an early enough stage that we can change the management of that patient
and have a longer term impact.
Now, that led us to look at issues in biomarkers in general.
And I'll show you some of the work we did;
it was right around the time I met Warren.
And this is work we were doing in Amoco,
so you can understand it wasn't really oil production or refining.
I was interested in looking at pathways.
And I'm not trained in biochemistry, so to me,
the Boehringer Mannheim wall chart is not something I ever had to memorize
so I don't necessarily hold this in reverence
or disdain for having had to memorize it.
But what occurred to me is the fact
that you had the same factors occurring many places on that chart
and that chart was a two-dimensional projection of all the information
that optimized the ability to not have overlap.
And if you actually put together these factors, or the substrates
and products that were the same, you would have to fold that chart up
and almost crumple it to get that overlap to appear.
So we became interested in modeling pathways.
And actually in 1993, we published some work
with Michael Broniodez [assumed spelling] at MIT on applying pathway nets.
But what we were interested in is looking at complex physiological pathways
like this, and coagulation, of course, appears in every biochemistry textbook.
As a chemist, as a theoretical chemist,
the very first thing I would normally try to do is apply differential equations
to look at the kinetics of the individual reactions
and understand the complexity of doing that.
But what you lose in worrying about the complexity
of all these equations is the experimental reality
that these rate constants are all being measured
in separate biochemical reactions frequently
under very variable biochemical conditions.
And so, we're not actually measuring the system
as a whole in terms of its behavior.
So, rather than use that deterministic approach,
we decided to start to apply a more stochastic approach
to modeling pathway behavior,
so we could reason about how the pathway was acting.
And in doing that, be able to bring in information about either changes
in expression level, genetic mutations,
and different functions of the individual enzymes as well as how to look
at partial inhibition or how to look at control features in those pathways.
So, without going into all of the details, because they've been published,
we segmented the pathway into components, sub-networks.
We then trained the sub-networks and what you see here is the training
that looks at actual thrombin production.
And what you see that's important here is the following.
This overshoot of thrombin production is critical for actual clot formation.
But in reality, the equations that I showed you,
when we subjected them to this network model,
we were never able to produce this overshoot.
This is real data; this is simulated data.
And what we found is, that the reason is, there's a missing feedback loop.
And that feedback loop came about by reverse engineering the data that we had
to show that without that feedback loop,
we're unable to actually achieve the behavior that's observed
in the real patient.
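A toy simulation can illustrate why such a feedback loop matters. This is emphatically not the published network model; the structure and rate constants below are invented. The idea is just that letting thrombin accelerate its own production, against an accumulating inhibitor, produces the sharp burst, while the same system without feedback does not.

```python
# Toy Euler-integrated model of thrombin generation: prothrombin converts
# to thrombin, an inhibitor accumulates in response to thrombin, and an
# optional positive-feedback term lets thrombin accelerate its own
# production. All rate constants are invented for illustration only.

def simulate_thrombin(feedback_gain, steps=2000, dt=0.01):
    prothrombin, thrombin, inhibitor = 1.0, 0.0, 0.0
    trace = []
    for _ in range(steps):
        # Conversion rate rises with thrombin when feedback is present.
        rate = (0.5 + feedback_gain * thrombin) * prothrombin
        d_thrombin = rate - 2.0 * inhibitor * thrombin
        d_inhibitor = 0.8 * thrombin          # inhibitor made in response
        prothrombin = max(prothrombin - rate * dt, 0.0)
        thrombin = max(thrombin + d_thrombin * dt, 0.0)
        inhibitor += d_inhibitor * dt
        trace.append(thrombin)
    return trace

no_feedback = simulate_thrombin(feedback_gain=0.0)
with_feedback = simulate_thrombin(feedback_gain=8.0)
# With feedback, thrombin bursts well above its final level (the
# "overshoot" needed for clot formation); without feedback the peak
# is much lower because the inhibitor keeps pace with slow production.
```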
Now, once we've added that, the system becomes
very much physiologically relevant.
We can look at things like hemophilia A and look at the different subtypes
to understand how the system can replicate that.
But more importantly, because we're interested in real conditions
that have a greater degree of unknown characteristics,
we start to look at disseminated intravascular coagulopathy or DIC.
DIC is a big problem because DIC is something
where you're throwing too much clot but you're also overstimulating the lysis.
And so it's a very hard condition to manage.
Now, it frequently occurs with trauma and the loss of a lot of fluids,
and that also can be with surgical intervention.
But when we went into literature, we found a number of other conditions
in which DIC presents that don't necessarily involve high levels of fluid loss.
And so one of the things we started to question and look to the literature
to see were, were there genetic reasons that could be associated
with a tendency to go into DIC.
What was published was that there was a potential for a factor VII sensitivity,
but there was no way we could manage factor VII sensitivity to be able
to replicate the behavior of that failure that occurs in DIC.
What we have done subsequently, though,
is we've been able to make this next step.
We look at coagulation and fibrinolysis as being in homeostatic relationship.
Basically to me, what that means is we have a buffering zone,
we have a constant ongoing process of coagulation
and fibrinolysis that's in balance.
But what happens in a patient who's going to present with DIC,
not necessarily with a fluid loss,
is that that buffering capacity is significantly reduced
because of mutations not just in coagulation but also in fibrinolysis.
And what happens is their coupling reduces the size of that buffering capacity,
and the patient is very easily and quickly able to be shifted out of
that homeostasis into a critical condition without the ability
for necessarily reversing it in a very easy or effective manner.
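The buffering argument can be pictured with a deliberately crude sketch. Assume, purely for illustration, that the coagulation and fibrinolysis reserves couple multiplicatively: then modest mutations on both sides shrink the buffer far more than either alone, and an ordinary perturbation can push the system out of homeostasis.

```python
# Crude, invented illustration of a homeostatic buffer between coagulation
# and fibrinolysis. The multiplicative coupling is an assumption made only
# to show how paired mutations can shrink the buffer disproportionately.
def buffer_capacity(coag_reserve, lysis_reserve):
    return coag_reserve * lysis_reserve   # coupled: both arms absorb shocks

healthy = buffer_capacity(1.0, 1.0)       # full reserve on both arms
carrier = buffer_capacity(0.5, 0.5)       # mutations in both arms

perturbation = 0.4                        # e.g. trauma without major fluid loss
assert perturbation < healthy             # absorbed within homeostasis
assert perturbation > carrier             # shifted out into a DIC-like failure
```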
And now, what that's led to is the development of some additional markers
that we're introducing to try to see who may be at risk for DIC prior
to other kinds of conditions and in terms of the military,
in terms of potential battlefield conditions.
But going back to our breast cancer problem,
we're very interested in understanding a simple process
that we don't understand very well-- or that we don't manage very well--
which is menopause.
And the reason for looking at menopause is that the post-menopausal stage
is the single highest risk factor for breast cancer, since 90%
of all women who present with breast cancer present post-menopausally,
independent of necessarily some of their other genetic markers.
What we've started to do is build a model of the HPG axis,
which led us to building really a model
that simulated what a mature ovarian follicle looks like in terms
of combining a thecal and a granulosa cell.
When we put the elements of the pathways together,
what we were looking at was really a steroid biosynthesis
and metabolism pathway.
So using the same method I showed you for coagulation,
we constructed this pathway and we used data that was available.
And one of the things that drove us to do this was the idea
that although we know women present with menopause on average at age 51,
we also know that it's a process that takes about 10 years
to transition through what we call perimenopause,
but we don't have effective measures of what's going on
in that period other than symptomatology.
And yet, that transition could easily affect many other diseases
or responses to treatment that these women may be also encountering during
that time period.
So, this shows you both estradiol and progesterone concentration over time.
And in the model we've built,
this shows you the simulations
that we can achieve using the approach that I just showed you.
What's interesting to note is this is normal menstrual cycle
and this is a post-menopausal woman.
And the system which was not trained
on post-menopausal data actually can replicate the same kind
of hormonal transition over a monthly cycle that occurs
in a woman who's post-menopausal.
Even though they are extremely low in their production of estradiol,
they're not at a zero estradiol production level.
And so the system actually was able to replicate these kinds of behaviors.
Now, that, of course, started us to look at aromatase inhibitors
and their application in breast cancer.
And in particular, of course, looking at aromatase, which is CYP19A1,
one of the things we found early on at that time--
and I'm sure it's much larger now-- was that there were about 518 SNPs present.
And in a typical pharmacogenomic evaluation, the way to look
at aromatase inhibitor response would be to look
at the pharmacogenomic markers for an individual.
The problem is, when we look at the rest of the pathway,
we found a large number of SNPs in almost every enzyme in that pathway.
And what happened was, when we did a simulation,
we found that while a number of these SNPs would impact the CYP19A1 aromatase
activity, some of these other SNPs in other elements would either up-
or down-regulate the overall pathway behavior.
And so the complexity of an individual, as we start to learn, is not based
in a single enzyme and its polymorphisms, but in understanding the variation
that occurs throughout the pathway in a given individual.
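The point about pathway-wide variation can be illustrated with a toy rate-limiting-step model. The enzyme names are real steroid-pathway enzymes, but the activities and SNP effect sizes below are invented for illustration:

```python
# Toy linear-pathway sketch (illustrative): enzymes act in series, and the
# steady-state throughput is approximated by the slowest step. SNP effects
# are modeled as multiplicative scalings of each enzyme's activity.

baseline = {"CYP17A1": 1.0, "HSD3B2": 1.0, "CYP19A1": 1.0, "HSD17B1": 1.0}

def pathway_output(activities):
    return min(activities.values())   # rate-limiting-step approximation

def apply_snps(activities, snp_effects):
    out = dict(activities)
    for enzyme, factor in snp_effects.items():
        out[enzyme] *= factor
    return out

# A SNP halving aromatase (CYP19A1) activity sets the pathway output...
aromatase_only = pathway_output(apply_snps(baseline, {"CYP19A1": 0.5}))
# ...but an upstream SNP that is even more limiting overrides it:
with_upstream = pathway_output(apply_snps(baseline,
                                          {"CYP19A1": 0.5, "CYP17A1": 0.3}))
```

So the same aromatase polymorphism can mean different things in different individuals, depending on the variation elsewhere in the pathway.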
We've actually used this to reverse engineer, again, a clinical state
called FSH deficiency, which produces amenorrhea
and is frequently associated with younger girls who are very athletic.
And what we were able to show is it's not an FSH deficiency,
it's an FSH receptor deficiency.
And as a result, the conventional treatment, which was using FSH,
was not able, in most patients, to overcome the deficiency
that was actually present in those individuals.
Let me turn quickly to physician compliance.
We know from things like the NCCN practice guidelines
that there are practice guidelines that define how diagnosis
and treatment should proceed, but we also know from reality, and here, that 66%
of the women who were eligible for HER2 testing had no documentation of the test.
And 20% of the women receiving Herceptin were never tested, based on data
that have been collected through IMS and a variety of other sources.
We go back to that model I showed you of hypertension and diabetes.
This also shows us just what is actually happening or what is the gap
between what we believe to be guideline-directed or evidence-based management
of patients and the real-world patient population.
We applied algorithms that are basically graph-theoretic algorithms,
sometimes termed social network analysis,
to look at the complexity of the patients in terms of presentation
and then separately in terms of treatment.
What we found is that there were five separate communities of patients
in this group that had significant overlap of hypertension and diabetes.
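A minimal sketch of that graph-theoretic step, using connected components as a simple stand-in for the community-detection algorithms mentioned; the patient records and diagnosis codes below are invented:

```python
from itertools import combinations

# Hypothetical patient records: sets of diagnosis codes (illustrative).
patients = {
    "p1": {"htn", "dm2"},
    "p2": {"htn", "dm2", "ckd"},
    "p3": {"copd", "asthma"},
    "p4": {"copd"},
    "p5": {"afib"},
}

# Build an undirected similarity graph: connect patients sharing any code.
edges = {p: set() for p in patients}
for a, b in combinations(patients, 2):
    if patients[a] & patients[b]:
        edges[a].add(b)
        edges[b].add(a)

def communities(edges):
    """Connected components; a stand-in for modularity-based detection."""
    seen, groups = set(), []
    for start in edges:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            group.add(node)
            stack.extend(edges[node] - seen)
        groups.append(group)
    return groups

groups = communities(edges)
```

In practice one would weight the edges and use a modularity-maximizing algorithm, but the shape of the analysis, patients as nodes, shared presentation or treatment as edges, communities as subgroups, is the same.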
And these are the codes of the drugs that were being administered
for the comorbid conditions.
So these are not the drugs that are being administered
for hypertension and diabetes.
And in over half the patients, what we were finding is that the drugs
that were being administered were being administered in spite of the fact
that they were contraindicated for diabetes and hypertension.
And so physicians were prescribing those drugs because their primary treatment
specialty was in areas outside of hypertension and diabetes.
Their practice was focused on managing those specialties.
And the guidelines themselves were not written
or developed to incorporate that type of information.
And so, this is with a nationalized health record that's currently
in use in that country, which we don't even have here.
And so what we have to recognize is that even when we achieve interoperability,
we're still going to be confronted with the fact
that physicians do not necessarily practice using practice guidelines
because of the way they're developed, regardless of whether they have
an evidence base or whether they're developed on a consensus basis,
which are the two primary manners in which they're developed.
I'll turn quickly to outcomes.
This is a study that we were brought into, looking at heart failure
with preserved ejection fraction.
One of the issues is if you look at ejection fraction
across the overall population,
the ability to discriminate where you have preserved
or reduced ejection fraction, as you can see,
is not a simple matter because it's a continuum.
The drug that was actually being used did not show a significant effect compared
to placebo.
But what we did was analyze the process by which a patient would be diagnosed
to understand how this could fit into clinical practice,
and how it could alter not the outcome of the drug
but our understanding of how the drug may work differentially
in the subpopulations being tested, as opposed to the overall response.
And so we separated out the panels that would be part
of the conventional patient workup and then analyzed them independently
and then combined them to look at this patient composite vector.
This gives you an idea of what we were doing in the liver and kidney panels:
we were separating out the individual measurements
and then looking at how these could be used in this graph-theory reanalysis.
This just shows you the actual graph.
But what you see is that the population actually was comprised
of five separable populations.
And these are Kaplan-Meier curves for these five populations extending out,
as you can see, a significant number of days or years.
And these populations, even though they were recruited
under inclusion and exclusion criteria to be essentially the same,
or indistinguishable, actually do not respond the same way to either the drug
or placebo and would not actually be progressing
through the disease in the same manner.
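Kaplan-Meier curves like the ones described can be computed with a short estimator; the follow-up data below are invented for the sketch, not the study's:

```python
# Minimal Kaplan-Meier estimator (stdlib only); data are illustrative.
def kaplan_meier(times, events):
    """times: follow-up in days; events: 1 = event observed, 0 = censored."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    surv, curve = 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = censored = 0
        while i < len(pairs) and pairs[i][0] == t:   # group tied times
            if pairs[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk           # product-limit step
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

curve = kaplan_meier([5, 10, 10, 20, 30], [1, 1, 0, 1, 0])
```

Running one such curve per detected subpopulation, rather than one for the whole trial arm, is what makes the five separable trajectories visible.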
This gets accentuated when we actually put this together with the other data.
And what we've been able to do is use this not only to look at the drug response
in subgroups, but to transfer it to clinicians
for analyzing the kinds of panels they currently look at.
The typical approach, as you may be familiar with if you've had a Chem-20 done,
is for a physician to look for outliers in individual tests.
But very few of them are very good at looking at patterns of outliers,
or at understanding that a transition, even within the normal range,
seen by looking at the progression over time, could be a key factor.
And the analysis that we've been able to show is
that these are actually key issues for understanding, again,
how early you could detect that a patient is on a certain track.
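That "transition within the normal range" idea can be sketched as a simple trend test over serial labs; the analyte, reference range, and values below are illustrative:

```python
# Illustrative: a creatinine series that never leaves a roughly normal
# range (~0.6-1.2 mg/dL) but trends steadily upward across visits.
def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

visits     = [0, 90, 180, 270, 360]       # days since baseline
creatinine = [0.7, 0.8, 0.9, 1.0, 1.1]    # mg/dL, every value "normal"

in_range = all(0.6 <= v <= 1.2 for v in creatinine)
trend    = slope(visits, creatinine)      # mg/dL per day
```

Outlier-flagging would pass every one of these values, while the fitted slope shows a sustained climb, exactly the pattern-over-time signal being described.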
The last part of this was using echocardiogram data.
A cardiologist tends to look at echocardiogram data in terms
of the heart's overall general function.
But a cardiac physiologist looks at it in terms of separable functions.
And what we've been able to do, similarly,
is break out the individual functions.
And by separating the functions,
as opposed to using the typical kind of composite outcome
in a conventional trial,
we've been able to show differential effects,
which not only shows that patients may be identified
for certain specific benefits, but also that the drug itself,
because of its ability to show certain limited effects in certain
separable conditions or physiological responses,
may be applicable for repositioning in other diseases
beyond what it was originally intended for.
And so this is how we've been able to start to convert things
like clinical trial data into observational studies to take advantage
of the kinds of data being collected.
So, what I hope I've been able to introduce in this time is the fact
that in spite of all of the technology that's being developed
and validated, there are still lots of gaps that have to be addressed.
And the problems are not going to be simple, because if we pretend
that real-world medicine as it's practiced, and real-world patients,
are simple, what we're going to find is that we're not going
to come up with the right solutions.
This is exactly what we start to find when we start to deal
with targeted therapies, as an example, which are now starting
to identify the issues of heterogeneity in the tumor, very much as I tried
to point out with some of the pathology slides we looked
at in our breast cancer patients.
So, in terms of real-world issues, the things we think are important are:
recognizing that disease is a process that evolves over time,
that we need to improve how we consider the concept of diagnosis
to be more quantitative, which means we have to also start to collect data
that will support that kind of quantitative analysis,
that genomics is part of this but we need to understand how to put it
into its appropriate role in solving the problem of health care.
We need to recognize that clinical trial data is not clinical data
and the reason it's not clinical data is because of the selection of patients
that are being included in the trial versus the complexity
of a real-world patient with comorbidities and poly-pharmacy.
We need to understand that a lot of biomarkers are not necessarily diagnostics
because correlation is not causality.
That's one of the reasons we're trying to look
at developmental processes in some of these diseases.
Real-world patients have, as I said, comorbidities and poly-pharmacy.
Claims data is a whole other issue that we didn't address today.
Hopefully, what I've been able to do is start to face reality, but make sure
that you recognize that all of this is another perspective.
And so, while we're used to thinking of things looking this way,
this is what they really look like in the real world.
And with that, I thank you for the opportunity to present.
I acknowledge-- always have to acknowledge, first and foremost,
patients and their families who contribute most to these kinds of studies
as well as our collaborators.
And then finally, I'm happy if anyone would--
is interested in having more details or follow-up discussions.
Thank you.
>> All right.
Thank you, Dr. Liebman.
We really appreciate your presentation.
You had a quote from Albert Szent-Györgyi, which I appreciated.
And actually my favorite quote from him is about discovery.
>> Oh, yeah.
>> And discovery happens when you're looking
at the same thing everyone else looks at, but you see something different.
>> Yes.
>> And I think you're pointing out that it's important for us to look, in fact,
at the same data that we've been looking
at for a very long time from a different lens.
And I appreciate your comments on thinking about homeostasis and the impact
of multiple variants and the fact that they're integrated over your whole life.
So I think those are some wonderful observations.
At this time, I'd like to open up the floor for questions for folks in the room.
Just let us know.
We'll unmute the microphone.
Folks on WebEx, please indicate with a raised hand and we'll unmute your line.
Any questions from the-- either from the room or from the folks on the audience?
Now, we're also running out of time here.
So, if there aren't any questions, I'll close the meeting.
We hope you can join us for our next presentation, Wednesday, February 1st,
with Tina Hernandez-Boussard from Stanford University.
She'll be here to present as part of the speaker series.
And I thank everyone who's joined us today and special thanks
to Dr. Liebman for sharing his expertise.
[ Applause ]
>> Thanks.
[ Applause ]
>> Thank you, Michael.
>> Thank you, Warren.