Monday, May 15, 2017

Healthcare Spending in the US - What is the Problem?


(image thanks to fodey.com)

It's not news that healthcare is expensive. Here is a quick look into what is going on. Why it is happening is a different question that I'll come back to.

I looked through World Bank WDI data (link) to visualize three spending areas that get significant attention: 
- Education expenditures
- Healthcare expenditures 
- Military spending

This data is limited to the United States from 1995 - 2014. Figures 1 and 2 below show the results.

Education and military spending, as percentages of gross national income (GNI), have remained flat for the last 20 years. Education has remained around 5% and military spending around 3.5% (with a temporary jump in 2009 to almost 5%).

However, healthcare spending has risen by about 4 percentage points of GNI over the same period (from around 13% to over 17%).
Fig 1 - Expenditures as % of GNI (note: for the US, GNI and GDP are very close so I have used the two interchangeably)

When converted to dollars, the increase can be seen more clearly.
  • Education increases from ~$364 billion to ~$904 billion
  • Military increases from ~$271 billion to ~$627 billion
  • Healthcare increases from ~$995 billion to over $3 trillion
Fig 2 - Expenditures in US$
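As a sanity check, the percentage and dollar figures line up: multiplying each sector's share of GNI by US GNI for the endpoint years roughly reproduces the dollar values above. A minimal sketch (the GNI values are rounded approximations assumed here for illustration, not taken from the WDI tables directly):

```python
# Convert spending shares of GNI into dollar amounts.
# GNI values are rounded approximations (current US$), assumed for illustration.
GNI = {1995: 7.6e12, 2014: 17.6e12}

# Approximate shares of GNI discussed in the post
shares = {
    "education":  {1995: 0.05,  2014: 0.05},
    "military":   {1995: 0.035, 2014: 0.035},
    "healthcare": {1995: 0.13,  2014: 0.17},
}

def dollars(sector: str, year: int) -> float:
    """Spending in dollars = (share of GNI) x GNI."""
    return shares[sector][year] * GNI[year]

for sector in shares:
    lo, hi = dollars(sector, 1995), dollars(sector, 2014)
    print(f"{sector}: ${lo/1e9:.0f}B (1995) -> ${hi/1e9:.0f}B (2014)")
```

Running this gives healthcare at roughly $988 billion in 1995 and just under $3 trillion in 2014, matching the figures above; healthcare's growth comes from both a growing share and a growing GNI, while education and military grow with GNI alone.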

The Centers for Medicare and Medicaid Services maintains a breakdown of healthcare spending (link). The summary of spending (PDF) shows that over 50% goes towards hospital and physician services.
Fig 3 - Breakdown of healthcare spending in the US in 2015
The CMS summary report states that:
  • Hospital care spending increased by 5.6% in 2015 while prices only increased 0.9%. This means hospital spending was driven by increased usage and intensity of services.
  • Physician services increased by 6.3% in 2015 while prices declined by 1.1%. This means that physician spending was driven by increased demand.
Taken together, this suggests that most expenditures are occurring in areas that are being driven up by people needing more care, or more intensive (and expensive) care.
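The split between price and volume can be made explicit: if spending grows at rate s while prices grow at rate p, the implied growth in utilization/intensity is roughly (1 + s)/(1 + p) - 1. A quick check against the CMS numbers above:

```python
def volume_growth(spending_growth: float, price_growth: float) -> float:
    """Growth in utilization/intensity implied by spending and price growth."""
    return (1 + spending_growth) / (1 + price_growth) - 1

# Hospital care: spending up 5.6%, prices up 0.9%
hospital = volume_growth(0.056, 0.009)    # ~4.7% more (or more intensive) services
# Physician services: spending up 6.3%, prices down 1.1%
physician = volume_growth(0.063, -0.011)  # ~7.5% more demand

print(f"hospital volume/intensity growth: {hospital:.1%}")
print(f"physician volume growth: {physician:.1%}")
```

Note that falling physician prices actually amplify the implied demand growth: a 6.3% spending increase at lower prices means even more visits than the headline number suggests.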
The interesting next question is: what is driving that increased need or intensity of care and how can those root causes be addressed?


Bonus stuff: A Tableau Public visualization of this data (here).


Tuesday, May 9, 2017

The Essence of Product Life Cycle (PLC)

Fig 1 - PLC Cheat Sheet

The summary
Here is a cheat-sheet for the major steps in a product life cycle (Fig 1). It covers four ideas:

  1. What are the phases in a life cycle?
  2. What is the top level goal of each phase?
  3. Who are the key actors?
  4. What are the actors trying to do in each phase?



But what about agile?
Agile is a methodology for answering some of the questions in the life cycle. Agile is not a substitute for a proper life cycle process (more on this in a minute in "cycles repeat"). Choosing whether to follow an agile method or a more traditional waterfall method depends, I have come to believe, on the cost of developing requirements vs the cost of validating those requirements (Fig 2).

Fig 2 - Agile vs Waterfall Development depends on cost.
  • If the cost of developing your product to a point where the requirements can be tested is LOW, then it pays to adopt an agile approach. Optimize for speed to market because each iteration is cheap.
  • If the cost of development or testing is HIGH, then it pays to invest more time getting the requirements right before paying to develop and test them. Optimize for learning per unit cost because each iteration is expensive.
The specifics of how much is required to get a testable concept vary from case to case. 
For example: The cost of developing and testing a deep UV optical system to determine if it can collect enough data for the detection algorithms to flag a sub-wavelength sized pattern difference is quite high. The cost of testing different parameter entry field orders to determine which one causes more users to follow the correct setup procedure is relatively low. Hence semiconductor capital equipment hardware is not developed according to agile methods while the software that runs the hardware can be developed in an agile way.
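The decision rule can be sketched as a toy heuristic (the threshold, costs, and function name are all illustrative, not from any real methodology):

```python
def pick_method(iteration_cost: float, budget: float, min_iterations: int = 3) -> str:
    """Toy heuristic: if the budget affords several cheap build-and-test
    iterations, optimize for speed to market (agile); otherwise spend up
    front on getting the requirements right (waterfall)."""
    affordable_iterations = budget / iteration_cost
    return "agile" if affordable_iterations >= min_iterations else "waterfall"

# UI field-order experiment: each build-and-test cycle is cheap
print(pick_method(iteration_cost=5_000, budget=100_000))        # agile
# Deep UV optical system: each build-and-test cycle is very expensive
print(pick_method(iteration_cost=2_000_000, budget=3_000_000))  # waterfall
```

The point of the sketch is only that the choice turns on iteration cost relative to resources, not on the intrinsic superiority of either method.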
[More on this in an earlier post.]


Cycles repeat

Fig 3 - Iterations in a Product Life Cycle
Learning at each phase will determine whether to proceed to the next phase or to return to a previous phase (Fig 3).

I think the most interesting part is that there are basically only two questions here:

  1. Am I solving the right problem?
  2. Is there a better approach to solving the problem at hand?
Under the right conditions, agile is good for moving quickly through the iterations required to answer these questions. However, agile methods, in themselves, won't guide you to ask the right questions at the right time - that is what a product life cycle is for.

Extra stuff:
The slides that these images come from are embedded below.

Thursday, May 4, 2017

Rogue AI and Human Ego

In a bout of good conversation with a friend, we ended up asking the question:
How do you hold an artificial intelligence (AI) accountable for its actions?
 "Punishment!" We said; but...
How does one punish an AI?
The same way one would punish a person: Take away something that it cares about.

What does an AI care about such that taking it away will cause a change in behavior?
Why would taking something away cause a change?
What would even motivate an AI in the first place?

"hmmm...." We said...

What if an AI's motivation worked in a completely different way from a human's motivation?
What if the AI's value system were built like an insect hive's, where no member could even conceive of the idea of performing a "bad" (i.e. independent, self-serving, coming at the cost of another) action?
Does an ant colony ever have a rogue ant problem?
(I think it safe to say that humans have rogue human problems, even without AI.)

Perhaps the rogue AI problem comes from the hubristic assumption that a "good" (i.e. functional, effective, general) AI needs to be modeled on human intelligence?

Perhaps, just as a fish doesn't know water, we are blind to our primate sense of fairness and justice, evolved to manage exactly the kind of intelligence we happen to have. Because of this, we can't see an alternative to the idea that a human-based intelligence must come with a human-based motivational system, including individuality and rule-questioning behaviors.

Are we, in fact, creating the control problem by assuming that the intelligence we create should function like our own?

(Kevin Kelly has something to say about this from a slightly different angle:  AI or Alien Intelligence)