I was troubled last year when my daughter came home from her new primary care doctor, and she was angry. Her new doctor asked her some questions but never looked at her, never touched her, never listened to her heart or lungs, never felt her pulse, yet scheduled her for a second visit for her travel vaccinations. And yes, we received a huge bill, since we have a very high-deductible health insurance plan. An even bigger insult was the notice my daughter received from her insurance company increasing her premium by 125%. It is hard to believe how young employees can pay for their health care: their insurance premiums as well as their office visits, medications, laboratory tests, etc.
I thought that this week, after such weird goings-on in the political arena (are all of our politicians crazy?), we all needed a little laugh. But an article written by Dr. Glaucomflecken (name withheld to protect the innocent from laughing out loud) brings up some serious thoughts about clinical medicine, especially considering my daughter’s experience and the increasingly similar experiences that my own patients relate to me almost daily.
The American Medical Association held a large funeral service today in honor of the Physical Exam, which passed away earlier this month after a decade-long battle with obscurity. The funeral was well attended by nurses, medical doctors, and trainees from all over the country who wished to pay their respects. The service began with an hour-long tribute to the highlights of a centuries-old career diagnosing illness, with special recognition given to the following (these are all parts of the physical exam that are, or previously were, taught to every future doctor in medical school, though now the technology seems more important):
S3, S4, and “murmurs,” whatever those are
Palpating the point of maximal impulse
Percussing the lungs
Percussing in general
Measuring liver span
Actually putting your own finger into the rectum to examine it
Doing that thing where you have the patient swallow some water then feel their thyroid
Reflexes not involving the knee
Cranial nerve I
Any part of the exam where human contact is involved
Following a display of the Physical Exam’s most prized possessions, including a stethoscope, a reflex hammer, and a little vial filled with coffee beans, several health care providers gave moving eulogies in remembrance of an old friend.
Tim (60-year-old internist): “I’ll miss you, dear friend. My intimate knowledge of you was my only defense against the onslaught of millennial EMRs and imaging studies. I could always count on you to show me findings that, although they might not have had any clinical significance whatsoever, could at least be used to humiliate a resident for missing them.”
Lucy (24-year-old resident): “Dear Physical Exam, we never really knew each other. You were already pretty old and inconsequential by the time I started medical school. However, I still sometimes put on my stethoscope and listen to the strange thuds, beeps, and boops coming from inside the patient that used to be important. Hell, I’ll even write about those crazy sounds in my progress notes like I’m a real 1950s primary care doctor. But then I look at the telemetry and think to myself, technology is pretty awesome. Lolz!”
So, in which direction are we going, and is it the correct one? Is technology, with its computers, drones, and robots and all their complex algorithms, controlling the future of our lives and our health care? Consider the following article.
Will Algorithms Erode Our Decision-Making Skills?
Algorithms are embedded into our technological lives, helping accomplish a variety of tasks like making sure that email makes it to your aunt or that you’re matched to someone on a dating website who likes the same bands as you.
Sure, such computer code aims to make our lives easier, but experts cited in a new report by Pew Research Center and Elon University’s Imagining the Internet Center are worried that algorithms may also make us lose our ability to make decisions. After all, if the software can do it for us, why should we bother?
“Algorithms are the new arbiters of human decision-making in almost any area we can imagine, from watching a movie (Affectiva emotion recognition) to buying a house (Zillow.com) to self-driving cars (Google),” Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., says in the report. But despite advances, algorithms may lead to a loss in human judgment as people become reliant on the software to think for them.
That’s one of the conclusions made in the report. It included responses from 1,300 technology experts, scholars, businesspeople and government leaders about what the next decades hold for the future of algorithms.
One of the themes that emerged was “humanity and human judgment are lost when data and predictive modeling become paramount.” Many respondents worried that humans were considered “inputs” in the process and not real beings.
Additionally, they say that as algorithms take on human responsibilities, and essentially begin to create themselves, “humans may get left out of the loop.” And although some experts expressed concern, others gave reasons algorithms were a positive solution and should expand their role in society.
Here’s a sampling of opinions about the benefits and drawbacks of algorithms from the report: Bart Knijnenburg, an assistant professor in human-centered computing at Clemson University: “My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies and users into zombies who exclusively consume easy-to-consume items.”
Rebecca MacKinnon, director of the Ranking Digital Rights project at New America: “Algorithms driven by machine learning quickly become opaque even to their creators who no longer understand the logic being followed to make certain decisions or produce certain results. The lack of accountability and complete opacity is frightening. On the other hand, algorithms have revolutionized humans’ relationship with information in ways that have been life-saving and empowering and will continue to do so.”
Jason Hong, an associate professor at Carnegie Mellon University: “The old adage of garbage in, garbage out still applies, but the sheer quantity of data and the speed of computers might give the false impression of correctness. As a trivial example, there are stories of people following GPS too closely and ending up driving into a river.”
Amali De Silva-Mitchell, a futurist and consultant: “Predictive modeling will limit individual self-expression hence innovation and development. It will cultivate a spoon-fed population with those in the elite being the innovators. There will be a loss in complex decision-making skills of the masses.”
Marina Gorbis, executive director at the Institute for the Future: “Imagine instead of typing search words and getting a list of articles, pushing a button and getting a narrative paper on a specific topic of interest. It’s the equivalent of each one of us having many research and other assistants. … Algorithms also have the potential to uncover current biases in hiring, job descriptions and other text information.”
Ryan Hayes, owner of Fit to Tweet: “Technology is going to start helping us not just maximize our productivity but shift toward doing those things in ways that make us happier, healthier, less distracted, safer, more peaceful, etc., and that will be a very positive trend. Technology, in other words, will start helping us enjoy being human again rather than burdening us with more abstraction.”
David Karger, a professor of computer science at MIT: “The question of algorithmic fairness and discrimination is an important one but it is already being considered. If we want algorithms that don’t discriminate, we will be able to design algorithms that do not discriminate.”
Daniel Berleant, author of The Human Race to the Future: “Algorithms are less subject to hidden agendas than human advisors and managers. Hence the output of these algorithms will be more socially and economically efficient, in the sense that they will be better aligned with their intended goals. Humans are a lot more suspect in their advice and decisions than computers are.”
Isaac Asimov inspired roboticists with his science fiction, and especially his robot laws. The first one says:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Artist and roboticist Alexander Reben has designed a robot that purposefully defies that law. “It hurts a person and it injures them,” Reben says. His robot pricks fingers, hurting “in the most minimal way possible,” he says.
And the robot’s actions are unpredictable — but not random. “It makes a decision in a way that [I] as the creator cannot predict,” Reben says. “When you put yourself near this robot, it will decide whether or not to hurt and injure you.”
Though it may seem like a slightly silly experiment, Reben is making a serious point: he is trying to provoke discussion about a future where robots have the power to make choices about human life. That is something we need to consider seriously as driverless cars arrive and the “robot car” may choose whether the passenger deserves to live!
Reben’s robot is not very elaborate. It’s just a robotic arm on a platform, smaller than a human limb and shaped a bit like the arm on one of those excavators they use in construction — but instead of a shovel, the end has a pin. (And in case you were wondering, each needle is sterilized.)
“You put your hand near the robot and it senses you,” Reben explains. “Then it goes through an algorithm to decide whether or not it’s going to put the needle through your finger.”
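Reben has never published his robot’s decision logic, so the following is purely a hypothetical sketch of how a decision can be deterministic in code yet “unpredictable — but not random,” as he describes: if the choice is seeded from the full precision of a noisy sensor reading, the outcome is fixed by the input, but because the low-order bits of a real sensor are effectively noise, even the creator cannot foresee it. The function name and threshold here are invented for illustration.

```python
import hashlib

def should_prick(sensor_reading: float) -> bool:
    """Hypothetical sketch of an 'unpredictable but not random' decision.

    The result is fully determined by the reading (call it twice with the
    same value and you get the same answer), but hashing the reading means
    tiny, uncontrollable sensor fluctuations can flip the outcome.
    """
    digest = hashlib.sha256(repr(sensor_reading).encode()).digest()
    return digest[0] < 128  # roughly a 50/50 decision

# Two nearly identical readings may still yield different decisions,
# yet each individual decision is reproducible, not random.
decision_a = should_prick(4.2000001)
decision_b = should_prick(4.2000001)
```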
I put my finger beneath the arm. The waiting is the hardest part, as it swings past me several times. Then I feel a tiny sting when it finally decides to prick me.
Reben created this robot because the world is getting closer to a time when robots will make choices about when to harm a human being. Take self-driving cars: Ford recently said it planned to mass-produce autonomous cars within five years. This could mean that a self-driving vehicle may soon need to decide whether to crash into a tree and risk hurting the driver, or hit a group of pedestrians.
“The answer might be that ‘Well, these machines are going to make decisions so much better than us and it’s not going to be a problem,’ ” Reben says. “They’re going to be so much more ethical than a human could ever be.”
But, he wonders, what about the people who get into those cars? “If you get into a car do you have the choice to not be ethical?”
And people want to have that choice. A recent poll by the MIT Media Lab found that half of the participants said they would be likely to buy a driverless car that put the highest protection on passenger safety. But only 19 percent said they’d buy a car programmed to save the most lives.
Asimov’s fiction itself ponders a lot of the gray areas of his laws. There are four in total; the fourth was added later as the zeroth law:
- A robot may not harm humanity or, by inaction, allow humanity to come to harm.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In Asimov’s stories, the laws are often challenged by the emotional complexities of human behavior. In a screenplay derived from his famous I, Robot, the protagonist is a detective who doesn’t like robots because one had saved him in a car crash, but let the girl beside him die based on a statistical determination that she was less likely to survive.
Still, scientists in the field often cite Asimov’s laws as a kind of inspiration and talking point as we move toward a world of increasingly sophisticated machines.
“The ability to even program these laws into a fictional robot is very difficult,” Reben says, “and what they actually mean when you really try to analyze them is quite gray. It’s a quite fuzzy area.” Reben says the point of making his robot was to create urgency — to put something in the world now, before machines have those powers in self-driving cars. Will they transform into machines that can truly think and make decisions to “take over the world?”
“If you see a video of a robot making someone bleed,” he says, “all of a sudden it taps into this viral nature of things and now you really have to confront it.”
Wow, I am almost scared to consider all this. We have telemedicine, all (or most) of the new regulations in medicine, and, more importantly, the new requirement that the patient’s health care, social, preventive care, and planning information be entered into electronic medical records. Protocols have already been put into place to determine the care of the patient, including how many days the patient is allowed in the hospital, as well as “suggestions” for appropriate care. Don’t get me wrong: there are great advances in technology, with activity trackers, patches that measure blood glucose, and more, which can help plan the patient’s health care. But I think we have to be careful that we do not forget the diagnostic basics of interacting with our patients: the importance of listening, seeing, touching, yes, even smelling, and using the basic instruments, the otoscopes and stethoscopes, and then utilizing the technologies at our convenience to reaffirm our diagnoses.