In the words of Edmund Pellegrino, the current chairman of the President’s Council on Bioethics, “To advance human good and avoid harm, biotechnology must be used within ethical constraints. It is the task of bioethics to help society develop those constraints, and bioethics, therefore, must be of concern to all of us.” Indeed, the most recent December 2008 publication from the Council continues to wrestle with a challenge shaped both by our remarkable advances in the life sciences and by the limits of even the most nuanced molecular details: newborn genetic screening.
In the United States today, it is estimated that over 4 million newborns undergo genetic screening, and many states have mandatory screening laws for a specific panel of diseases. In past years, these lists were limited to conditions for which specific treatment or management could be initiated once a child’s status was known. In the case of phenylketonuria (PKU), for example, affected children lack sufficient activity of the enzyme required to metabolize phenylalanine, an amino acid found in many foods, including artificial sweeteners. Left untreated, elevated levels of phenylalanine cause irreversible and often profound mental retardation in previously healthy infants. However, when affected children are identified at birth by newborn screening, adherence to careful dietary guidelines can prevent developmental delays and cognitive impairment.
From this perspective, then, newborn screening has been a success story in bridging from scientific development to carefully crafted public policy. However, as the genetic roots of more and more diseases are uncovered, the list of conditions for which newborns are screened has grown longer and longer, and the distinction between what we can do and what we should do has become murkier and murkier. In Illinois, within their first hours of life, newborns are screened for over 35 distinct genetic disorders. While this sounds like a productive way to ensure early intervention for susceptible children, the problem is that our ability to test for conditions has outstripped our ability to treat them. We may know with nucleotide-level precision the mutations responsible for genetic diseases such as cystic fibrosis or phenylketonuria, but such information is often unaccompanied by any medical options for treating the disease.
In the face of such a dilemma, many unanswered questions remain. Are parents being adequately informed and counseled about the results of their child’s screening assay? How does a physician walk the line between beneficence and maleficence when a screening test comes back positive for a condition that is poorly understood and for which no treatment is available? As scientific discovery continues to march onward, will screening results begin to identify disease susceptibility and, if so, what will be the future consequences for a child tagged with a ‘pre-existing condition’ in the very DNA of his or her genome? Yet what if screening the individual for rare disorders could also provide information that would eventually be used to develop a treatment or cure to benefit society?
The public-private, science-medicine, physician-patient conundrums interwoven into the issue of newborn screening are not new to either the profession of medicine or the field of bioethics. However, because newborn screening centers on arguably the most vulnerable members of society, it is heartening to see this subject, in all of its complexity, highlighted by the President’s Council on Bioethics. Their ultimate recommendations rest on rejecting the ‘technological imperative’—testing for things because we can—in favor of the classical Wilson-Jungner screening criteria—testing for those conditions in which intervention is likely to provide substantial benefit to the affected child. Although these recommendations are not binding, it will be interesting to monitor future changes in state policy. In the balance of risks and benefits, did the Council get it right?