Healthcare

“Ask Me Anything” with Krzysztof Geras, PhD

April 14, 2020

Krzysztof Geras
Assistant Professor
NYU Department of Radiology

Ai4 recently hosted an "Ask Me Anything" session with one of our speakers, Krzysztof Geras, on our Ai4 Slack channel. Read the full transcription below...

MODERATOR: Hello everyone! It’s a pleasure to welcome our next AMA guest Krzysztof Geras PhD, Assistant Professor, Department of Radiology, at NYU. Full bio here. You now have one hour to ask him anything. Ready, set… GO!

KRZYSZTOF: Hi everyone! I'm very curious about your questions!

PARTICIPANT 1: What are your thoughts on working on imaging in hospitals with low resources (such as in Botswana)? These hospitals would benefit from using AI, but don’t always have the resources for the best imaging technology.

KRZYSZTOF: I think places like that could benefit most from AI in healthcare. In developed countries there is already a sufficient number of highly qualified health professionals; they could also benefit from AI's assistance, but it will not be a game changer for them unless AI is clearly superhuman. The deficiencies in imaging technologies can, to a large extent, be compensated for by AI. A great example of this is the effort to accelerate MRI acquisition with AI (check this joint NYU-Facebook AI Research project, for example: https://fastmri.org/).

PARTICIPANT 2: A less technical question, since I am relatively new to this field. I know there is pushback from hospitals on implementing deep learning solutions in practice due to interpretability concerns, but how do you suppose this barrier is going to come down? What are your thoughts on how to advocate for this type of technology and get it to the hospitals that can benefit the most?

KRZYSZTOF: I think there are multiple good reasons, including explainability (interpretability), why hospitals are hesitant to apply deep learning in clinical practice. Primarily, we have to understand that even though it is working very well for many applications, it's not yet a fully mature technology. It could make errors in ways that we don't expect. We need a lot of validation to understand its behavior in non-standard circumstances. Explainability is also a big concern connected to the above. In recent years there have been a lot of efforts in deep learning to design methods that explain a given neural network or embed explainability into the prediction. These methods are getting better and better. I think it is only a matter of time before the explanations they offer are of sufficient quality. Besides careful clinical validation, I think the quality of predictions and explanations is the best way to advocate for these methods.
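
To give a feel for the explanation methods mentioned here, a simple gradient-based saliency map highlights which input pixels most influence a prediction. The sketch below assumes a PyTorch image classifier; the untrained ResNet and the random image are placeholders, not anything from Krzysztof's group.

```python
import torch
import torchvision.models as models

# Placeholder classifier; in practice this would be a network trained on medical images.
model = models.resnet18(weights=None)
model.eval()

# Dummy input image that we allow gradients to flow back to.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass; take the score of the top predicted class.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate that score down to the input pixels.
logits[0, top_class].backward()

# The saliency map is the per-pixel gradient magnitude: large values mark pixels
# whose change would most affect the prediction.
saliency = image.grad.abs().max(dim=1)[0]  # shape: (1, 224, 224)
```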

PARTICIPANT 3: What do you think are the main obstacles that need to be overcome nowadays in order to popularize the use of AI techniques for imaging in hospitals?

KRZYSZTOF: I think there are a few main ones.

  1. AI needs to be more robust (e.g. not be very sensitive to distribution change, not fail quietly on incorrectly acquired data).
  2. AI needs to be more accurate. Once models are clearly superhuman, it will be difficult to argue against them.
  3. The models need to be more interpretable to inspire trust in the users.
  4. The medical community needs to better understand AI. You don't trust what you don't understand.
  5. There needs to be a stronger financial incentive on the side of hospitals to introduce AI.
  6. There are many more that I probably forgot to mention right now.

Although I am very enthusiastic about the future of AI in healthcare, there is a long road ahead of us before it is universally accepted and widely deployed.

PARTICIPANT 4: Which forms of differential privacy / privacy-preserving machine learning techniques would you recommend be applied to uses of ML in the context of healthcare imaging?

KRZYSZTOF: That very much depends on the type of application you have in mind. It will be completely different for EHR-like data and imaging data.

PARTICIPANT 5: There are well-known studies on adversarial AI for computer vision (e.g., stop signs, gibbons). For medical imaging, this would seem to be an even bigger/more impactful issue with potential misdiagnoses, etc. Is this an issue you or your colleagues are tackling? Do you consider this a real problem? Are there any actions that hospitals would need to be aware of when implementing medical imaging AI solutions to ensure patient safety isn't compromised?

KRZYSZTOF: You touched on a very broad topic. In some ways it is a little easier in healthcare. It is easier in the sense that the images are kept within the hospital system, so it is unlikely that someone will maliciously manipulate them to change the diagnosis. On the other hand, the cost of an error is very high in healthcare applications. Therefore the methods need to be more robust. This is a very popular topic of research. My group has done a little bit of such work in the context of breast cancer screening (https://arxiv.org/abs/2003.10041). This is definitely a real problem, but I believe it will be largely solved in a few years as deep learning methods improve. To ensure patient safety, hospitals should validate AI on their data before deploying it at full scale. It is also important to mention that humans also make mistakes. They just make them in ways that are more predictable to us.
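
For readers unfamiliar with adversarial examples, the classic fast gradient sign method shows how small, targeted pixel changes can flip a classifier's output. This is a generic illustration with a placeholder model and dummy data, not the method studied in the paper linked above.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Placeholder classifier and dummy image; stand-ins for a real medical imaging model.
model = models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([0])  # hypothetical ground-truth class

# Fast gradient sign method: compute the loss gradient with respect to the pixels
# and nudge every pixel slightly in the direction that increases the loss.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.01  # perturbation budget; imperceptible, yet it can change the prediction
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```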

PARTICIPANT 6: As I’m sure many people in this group have found, deep learning capabilities are heavily restricted by the amount of medical data we can acquire and the diversity within these datasets. What are some of the ways you, and others in the field, have been able to obtain success with such a small amount of data (maybe 1,000 MR images, for example)? I’ve seen both Transfer Learning and “Transfusion” Learning, but I’d love to hear your thoughts!

KRZYSZTOF: What should be considered a small amount of data is very problem-dependent. For some very difficult problems even one million images might not be enough, while for some relatively easy problems one thousand images is a lot of data. We are used to judging the difficulty of a problem through our own perception, which is sometimes misleading. Transfer learning (understood as pre-training the network on some related task) is definitely one of the elements of the mix for small data that is going to stay. We have used it many times in various contexts. Sometimes it is also possible to acquire data without labels. In such cases, the techniques you should also consider are unsupervised/self-supervised pre-training.
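
To make the transfer-learning recipe concrete, here is a minimal fine-tuning sketch in PyTorch. The ImageNet-pretrained backbone, the hypothetical binary task, and the layer-freezing choice are illustrative assumptions, not a prescription.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Start from a network pre-trained on a related task (here, generic ImageNet weights)
# and fine-tune it on a small labelled medical data set.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final classification layer for a hypothetical binary task
# (e.g. finding vs. no finding).
model.fc = nn.Linear(model.fc.in_features, 2)

# With very little data it is common to freeze the early layers and
# train only the last block and the new head.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# ...then train as usual on the small labelled data set.
```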

PARTICIPANT 7: Hello Krzysztof...Hope you are safe!
I have a few questions:

  1. What frontiers and challenges do you think are the most exciting for researchers in the field for the next 10 years? Which among them do you think would be solvable in the next 5 years (and would be a great direction for PhD work)?
  2. Do you have any comments on the recent surge of ML work on diagnosing COVID-19 using CT scans (considering that there is little scientific evidence in the medical community suggesting the use of CTs for this purpose)?

KRZYSZTOF:

  1. There are very many. I'm going to mention a few that in my opinion are interesting, but this list is not by any means exhaustive: unsupervised learning, training much smaller models (in terms of the number of parameters and the number of operations), massive multi-task learning, federated learning. Each of them is a good direction for a PhD. There will be a lot of progress on each of them in the next five years, but I don't expect that they will be completely solved.
  2. I think AI definitely has a role to play in COVID-19. There are so many people ill that there aren't enough doctors in the world to take care of them all. However, I find it unlikely that a CT scan is a good tool for diagnosing it. CT scans are expensive and there are few CT machines. I am inclined to believe that it makes a lot more sense to use X-ray or CT to manage patients who have already been diagnosed.

PARTICIPANT 8: Besides explainability/interpretability, what other research problems in ML/AI do you think are important to be addressed in order to make AI systems more suitable for use in healthcare?

KRZYSZTOF: That very much depends on the type of data. The problems for EHR data are very different than for imaging data. I can answer that question better for the latter. For most medical imaging applications, besides interpretability and robustness, most of the machine learning technology that would work sufficiently well already exists. I think probably the single most serious practical question is how to train models efficiently on large data sets without explicitly sharing them between institutions. If that were enabled, the quality of the models would dramatically improve.
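
One family of techniques aimed at exactly this problem is federated learning, which Krzysztof also lists above as a research direction. Below is a minimal federated-averaging sketch, assuming a PyTorch classifier and one data loader per institution; all names and hyperparameters are illustrative.

```python
import copy
import torch
import torch.nn as nn

def federated_averaging_round(global_model, client_loaders, epochs=1, lr=1e-3):
    """One round of federated averaging: each institution trains a copy of the
    global model on its own data, and only the weights (never the images or
    labels) are sent back and averaged."""
    client_states = []
    loss_fn = nn.CrossEntropyLoss()
    for loader in client_loaders:
        local_model = copy.deepcopy(global_model)
        local_model.train()
        optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
        for _ in range(epochs):
            for inputs, targets in loader:
                optimizer.zero_grad()
                loss_fn(local_model(inputs), targets).backward()
                optimizer.step()
        client_states.append(local_model.state_dict())

    # Average every parameter (and buffer) across clients, then update the global model.
    averaged = {
        key: torch.stack([state[key].float() for state in client_states]).mean(dim=0)
        for key in client_states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```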

PARTICIPANT 9: Due to the need to use large datasets containing images from multiple sites for deep learning, do you think there is a growing need for further outlier detection as part of preprocessing in order to ensure that the data we are training our models on is correct? Are there any methods you currently use in order to check your data for outliers?

KRZYSZTOF: I would probably not call this problem "outlier detection", as you are actually talking about detecting that an entire data set is an outlier. You could, however, borrow methods from the outlier detection literature (for example, you could look into methods using density estimates for this purpose). I am lucky not to encounter this problem in my own work that often, as almost all of the data that I'm using comes from NYU Langone, where the acquisition of the data is standardized between different sites to a large degree.
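
As one possible reading of the density-estimate suggestion, the sketch below fits a kernel density model to trusted feature vectors and flags an incoming batch whose likelihoods look unusually low. The features, bandwidth, threshold, and decision rule are all assumptions made for illustration.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Fit a density model to feature vectors from data you trust (e.g. embeddings of
# scans acquired under your standard protocol), then score data from a new site.
# The random arrays below are placeholders for real feature vectors.
reference_features = np.random.randn(1000, 64)  # features from the standardized sites
incoming_features = np.random.randn(200, 64)    # features from a new, unseen site

kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(reference_features)
reference_scores = kde.score_samples(reference_features)  # per-sample log-likelihoods
incoming_scores = kde.score_samples(incoming_features)

# A crude rule: flag the new batch if its typical log-likelihood falls below the
# 5th percentile of what the reference data itself scores.
threshold = np.percentile(reference_scores, 5)
print("Possible distribution shift:", np.median(incoming_scores) < threshold)
```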

PARTICIPANT 10: Hello Krzysztof. In your opinion, when do you envision (e.g., 3 years from now) that AI will become an actual market segment in the healthcare business? (Meaning: when will it grow from the niche/quaint thing that it is today to an established, fully fledged, meaningful/sizable market?)

KRZYSZTOF: I don't expect this is going to be a discrete transition. I am more inclined to think that this is going to happen gradually, growing each year by approximately 30%. There are still some outstanding problems in machine learning itself, there is little trust in AI among medical professionals, and there are serious legal obstacles. As these are solved, the popularity of AI will grow. I'm very confident that the prominence of AI in healthcare will grow. Having said that, I think it is difficult to predict whether it will necessarily become a multi-billion dollar business. For example, think about calculators. They are actually incredibly sophisticated machines. Still, you can buy one for a few dollars. It is not impossible to imagine that AI will become a common commodity and everyone will accept it as the new norm.

PARTICIPANT 11: The healthcare labour deficit is particularly acute in developing countries, but it is a problem across the world. Like the fastMRI project and your breast cancer detection research, AI technologies would augment human radiologists, not replace them. However, by increasing labour efficiency in healthcare, do you think AI can solve the shortage of the healthcare workforce, especially in developing countries?

KRZYSZTOF: Absolutely, I think this is one of the most fascinating future applications of AI in healthcare that I am personally passionate about. I think AI is a chance to democratize the access to healthcare on a scale that wasn't possible to achieve in the past.

PARTICIPANT 12: Krzysztof - what are your thoughts on data transformation techniques such as FFT, CWT, vector-looping, etc. to transform time-series data into other forms before feeding into the Deep Learning Training process.  Do you think there is value in attempting to expose features that are more human-readable in this manner, or are the networks advanced enough to see these types of patterns in the time-series data itself? Are there techniques which can combine multiple views of the same data into a single model training run?

KRZYSZTOF: That's a very specific question! I would have to take a lot longer to think about it to answer it fully. In general, I think what we have learnt from the success stories of deep learning is that neural networks are very good at discovering patterns associated with the target label. I think this is usually a good practical assumption.
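
One simple way to test the question empirically is to feed the network both views at once, the raw series and a transformed version such as its FFT magnitude spectrum, and let training decide which is useful. The toy two-branch model below is only a sketch; the shapes, layer sizes, and fusion strategy are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Two "views" of the same data: the raw time series and its FFT magnitude
# spectrum, encoded separately and fused before the prediction.
signal = np.random.randn(32, 256).astype(np.float32)               # 32 series of length 256
spectrum = np.abs(np.fft.rfft(signal, axis=1)).astype(np.float32)  # spectra of length 129

raw = torch.from_numpy(signal).unsqueeze(1)         # (32, 1, 256)
fft_view = torch.from_numpy(spectrum).unsqueeze(1)  # (32, 1, 129)

raw_encoder = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
fft_encoder = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
head = nn.Linear(16, 2)  # hypothetical binary label

# Concatenate the two learned representations and predict from the combined view.
features = torch.cat([raw_encoder(raw).flatten(1), fft_encoder(fft_view).flatten(1)], dim=1)
logits = head(features)  # (32, 2); train end-to-end with a standard loss
```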

PARTICIPANT 13: How will COVID-19 affect AI in healthcare?

KRZYSZTOF: I expect it will accelerate it. When resources are more scarce, we have to be more creative about how to best use them. Everyone is also willing to take more risks in such times.

PARTICIPANT 14: Breathe for Science (https://www.breatheforscience.com/) is studying the link between respiratory diseases and coughing patterns in the US.
We will find a way. We always have.

PARTICIPANT 15: I have a challenge: can you envision a situation/scenario where the current laws that we have to deal with medical malpractice (misdiagnosis, mistreatment, etc.) are unfit to deal with situations where the medical error was caused by AI?
In all the talks that I have had with lawyers and C-level staff regarding malpractice and who is responsible for a mistake, I always sense a massive bias against AI, where the goalpost/metric for how an AI machine must perform is always set much higher than for a specialist doctor. And I also see a tendency to want to remove responsibility from the persons/professionals if there is an AI machine that outputs results.

KRZYSZTOF: This is a very good practical question. I often have this kind of conversation. I think the misalignment between the current law and how we could practically deploy AI is a source of major concern to decision makers. My prediction is that for the foreseeable future the responsibility will stay with the doctors, and AI will be used as a decision support tool in the overwhelming majority of cases.

MODERATOR: Annnnnnd that’s a wrap! Thank you, Krzysztof, for participating, and thank you everyone for your fantastic questions! Krzysztof, do you have any final plugs or messages you’d like to put in this channel before signing off?

KRZYSZTOF: Thanks for your questions everyone! Good luck with your AI endeavors!
