
An AI Walk-Through of Healthcare over ~30 Years

Introduction

Recently we have seen a lot of interest in how Artificial Intelligence can be applied to healthcare.  

It seems like this is the new gold rush – everyone views healthcare as the great untapped opportunity.

What I’m seeing a lot of now are recycled ideas being pitched as if no one has ever thought of them before.  If you work in healthcare, how many times have you heard people talk about predicting Length of Stay (LOS), Readmission, or Mortality?  I wish people/researchers would start to think beyond these basic metrics.

There are over 10,000 references in PubMed where Length of Stay (LOS) and Predict* appear in the Title/Abstract.  The issue isn’t the creation of the models – it’s their deployment into the health services delivery system.
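
To make the point concrete, model creation really is the easy part.  Here is a minimal sketch in Python (scikit-learn on made-up synthetic data; the features, coefficients, and dataset are entirely hypothetical, not from any real system):

```python
# A minimal sketch of LOS model *creation* on synthetic stand-in data.
# Building this takes an afternoon; wiring its predictions into bed
# management, discharge planning, and clinician workflow takes years.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Hypothetical patient features: age, emergency admission flag, comorbidities
X = np.column_stack([
    rng.integers(18, 95, n),   # age in years
    rng.integers(0, 2, n),     # emergency admission (0/1)
    rng.poisson(2.0, n),       # number of comorbidities
])
# Synthetic LOS in days, loosely driven by the features plus noise
y = np.clip(2 + 0.05 * X[:, 0] + 2 * X[:, 1] + 1.5 * X[:, 2]
            + rng.normal(0, 2, n), 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE (days):", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
```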

Reflecting back on the last ~30 years that I’ve worked in healthcare (1993 to present), let’s explore the AI experiences that I’ve had.



~1995 (Neural Network, Classification, Severity Index)

Dr. John Jarrell, the Chief Medical Officer (CMO) at the Foothills Medical Center (Calgary, Alberta), was looking into Neural Networks.  He wanted to automatically classify patients into a Green/Yellow/Red kind of Severity Index.  This was the first time I heard of AI being applied or worked on in the health delivery system.

25 years later, I reached out to him to see if he still had the software for my computer collection.  He did.  This is his original software – Neural Shell 2 and Neural Shell Prediction.



~1996 (Natural Language Interface, Prolog)

While implementing the ICU Clinical Information System (QS) in the ICU and CVICU at the Foothills Medical Center, we also had an ICU data warehouse called TRACER that contained Clinical, Administrative, and Costing data.  This information was used for Quality Improvement and Clinical Research activities (we also used it, together with Discrete Event Simulation, for regional planning when the Calgary Regional Health Authority was created).

Turbo Prolog had a demonstration called GeoBase – a Natural Language Interface (NLI) that allowed you to query US geographical information with questions like “which states border California?” and “what rivers run through Maine?”.

Prolog was the computer language taught in the University of Calgary AI class.  I remember my friend’s final project was an AI checkers program – a fellow student trash-talked it… but ended up losing to the “this crummy software won’t beat me” software.

This led me to create my first Natural Language Interface, this time to TRACER.  It allowed questions to be posed to the dataset like “How many patients were admitted in June?”.  This theme of Natural Language Interfaces comes up a few more times.
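
To give a flavour of the idea (the original interface was written in Turbo Prolog against TRACER; the following is a toy re-imagining in Python against a made-up admissions table, not the original code):

```python
# Toy NLI sketch in the spirit of the TRACER interface.
# The data and the single question pattern are hypothetical.
import re
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE admissions (patient_id INTEGER, month TEXT)")
con.executemany(
    "INSERT INTO admissions VALUES (?, ?)",
    [(1, "June"), (2, "June"), (3, "July"), (4, "June")],
)

def answer(question: str) -> str:
    # Recognize one question shape: "How many patients were admitted in <month>?"
    m = re.match(r"how many patients were admitted in (\w+)\??",
                 question, re.IGNORECASE)
    if not m:
        return "Sorry, I don't understand that question."
    month = m.group(1).title()
    (count,) = con.execute(
        "SELECT COUNT(*) FROM admissions WHERE month = ?", (month,)
    ).fetchone()
    return f"{count} patients were admitted in {month}."

print(answer("How many patients were admitted in June?"))  # -> 3 patients ...
```

A pattern-matcher like this is obviously brittle – the interesting work is in mapping a wider range of question shapes onto the underlying schema.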



There were a lot of books on this topic published in the 1980s.

I still think that domain-specific NLI databases could be of great value to knowledge workers and data scientists.



Although I never used these products, I thought they were interesting as they were Natural Language Interfaces to relational databases.  Q&A was marketed by Symantec (better known for anti-virus software), and English Wizard interfaced to Microsoft Access databases.

Then, around ~2002, I acquired a copy of Visual Prolog and some other Prolog compilers that enabled native database access.

Even around ~2010 we interfaced Prolog to our home-built WebReports analytical software. 



~1995 Everyone Moved to New Zealand

The University of Calgary in the 1980s and early 1990s had a strong AI group: people like Brian Gaines (knowledge acquisition), Ian Witten (creator of WEKA), Bruce MacDonald (robotics), Jacky Baltes (humanoid robotics), and Tony Smith.

Upon returning to Calgary around ~2000 from my time in Malawi, Africa, I started to learn that most of the AI professors we had at the University of Calgary seemed to have moved to New Zealand.

On the other hand, I became a long-time friend and collaborator of Jorg Denzinger, who migrated from Germany to become one of Calgary’s few AI professors (now, in ~2020, it seems like every professor claims to be an AI expert).
  



~2001 (IBM Intelligent Miner)

IBM had a suite of products associated with their DB2 relational database.  They included WebSphere (a web server) and several Machine Learning tools – Intelligent Miner and Intelligent Miner for Text.  These ran on AIX, HP-UX, Solaris, OS/2 Warp, and Windows NT.

IBM also produced a very good Redbook on analyzing healthcare data.  It provides a very good walk-through of some practical examples.

This image also shows an IBM RS/6000 workstation.



~2004 PRECARN Intelligent Systems in Health Care Informatics

Around 2004, Jorg Denzinger (AI professor), Tom Noseworthy (health policy) and I were commissioned to write a report and conduct a workshop on the application of Intelligent Systems (machine learning) to healthcare.

PRECARN (www.precarn.ca) was a Canadian government-funded program that helped companies, government researchers, and universities work together to produce commercial intelligent systems.

Reflecting on the report now, it reads like a lot of the reports still being produced, where basically a literature search is conducted to show past “academic” applications of various machine learning algorithms to healthcare.

This also allowed me to read more about the historical foundations of AI in healthcare. 

One of the books was about MYCIN – one of the first AI applications to be developed (an expert system).  This book is extremely interesting, not only because MYCIN was one of the first AI systems to be used in healthcare, but also for how it opens.  From E. H. Shortliffe, Computer-Based Medical Consultation: MYCIN, New York: Elsevier, 1976, p. 1:

In the late 1960’s, David Rutstein wrote a monograph entitled The Coming Revolution in Medicine [Rutstein, 1967 – The Coming Revolution in Medicine.  M.I.T. Press, Cambridge, Mass. (1967)].  His discussion was based on an analysis of several serious problems for the health professions:

1) Modern medicine’s skyrocketing costs;
2) the chaos of an information explosion involving both paperwork proliferation and large amounts of new knowledge that no single physician could hope to digest;
3) a geographical maldistribution of MD’s;
4) increasing demands on the physician’s time as increasing numbers of individuals began to demand quality medical care.



~2004 Multi-Agent Simulations



~2006 Expert-Based Systems



~2010 Nvidia GPUs (Graphics Processing Units)

I started my MSc/PhD in Health Services Research in 2009 with the goal of pursuing Machine Learning applied to Patient Flow in Acute Care Hospitals.  However, my PhD supervisor became extremely hostile to this research, even stating that Machine Learning/AI has no place in Medicine and that Logistic Regression outperforms machine learning all the time.  So, I needed to switch my PhD to his own interests (severely affecting my career goals). 

But at the same time (~2009) I met a computer scientist/medical professor doing research utilizing Nvidia GPUs.  GPUs appealed to me because, since high school, I had an interest in a technology called Transputers – cards that contained several CPUs and allowed you to do parallel processing.

Secretly, in collaboration with another professor, I continued my AI interests, and we were able to successfully apply for several Nvidia Academic GPU Hardware grants.

I was also able to invest in Nvidia when the stock was selling for $20 (good thing I didn’t listen to my PhD supervisor).

I later followed this up by attending several Nvidia GPU Technology Conferences in San Jose, California.  I must say that these were some of the better conferences that I have attended.  They also provided the opportunity to meet some unique people – Ian Goodfellow (creator of GANs and author of Deep Learning) and Eric Topol (author of Deep Medicine).


Still to come…

  • Simulations
  • WebReports
  • Synthesis
  • Jacky – Robots
  • Machine Learning and Deep Learning

Conclusion

What am I discouraged about?

I would have to say that the biggest thing I am discouraged about is the reaction to new ideas.  If you look at the track record, a lot of these projects/initiatives were discouraged by others.  Even those who had the social responsibility to be receptive to innovation and new ideas were the most negative and destructive.

I would also say that a lot of the ideas being pitched now are old recycled ideas that have less to do with the potential of AI and more to do with traditional analysis and reporting – basically just applying a new algorithm to the same set of data with the same outcome-reporting goal.

What we need are people who can think of unique use cases that take advantage of the new technology.     

What am I excited about?

First, the success of AI in all these other industries.  It is truly amazing how far self-driving cars have come in the ~15 years since the DARPA Grand Challenge.  When you think about it, these systems perform in real time, in a variety of environments and situations, and respond in amazing ways.

Second, the availability of software and computing resources to try new things.  No longer do you need high-end workstations or expensive software – instead, one can use any of the many open-source software libraries and even make use of free cloud computing resources.

Third, the availability of high-quality free online education to learn the latest techniques and methodologies.  No longer is one constrained by the availability and quality of local educational resources.

What technologies interest me?

Surprisingly, I’m not going to say AI – first because simply saying AI is too broad, and second because I think AI should just be part of your standard toolkit, much like databases are part of one’s knowledge.  There are many types of databases (relational, temporal, GPU, text, etc.), and one should be familiar with which one to apply to the job.

The following are things that I want to learn more about and apply in the next 1-2 years:

  • Anomaly detection (see the sketch after this list)
  • Domain Specific Languages (DSL)
  • Temporal Databases
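
As a starting point on the first item, here is a minimal anomaly detection sketch (scikit-learn’s IsolationForest on made-up encounter data; the features are hypothetical stand-ins for real clinical/costing variables):

```python
# Minimal anomaly detection sketch using IsolationForest.
# The "encounters" features are made-up stand-ins for real hospital data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Typical encounters: LOS in days, daily cost in dollars
normal = np.column_stack([rng.normal(5, 1.5, 500), rng.normal(1200, 200, 500)])
# A few unusual encounters: very long stays at very high cost
odd = np.column_stack([rng.normal(40, 5, 5), rng.normal(6000, 500, 5)])
encounters = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(encounters)
flags = model.predict(encounters)  # -1 = anomaly, 1 = normal

print("flagged encounters:\n", encounters[flags == -1])
```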