Deep Learning#

Deep Learning Outlook#

The depth of “deep learning” comes primarily from network architectures that stack many layers. In another sense, deep learning is quite shallow: it often performs well with little or no knowledge specific to the problem it is solving, relying instead on generic building blocks.

The field of modern deep learning started around 2012, when the architectures described above were first used successfully and the necessary large-scale computing and datasets became available. Massive neural networks are now the state of the art for many benchmark problems, including image classification, speech recognition, and language translation.

However, only about a decade into the field, there are signs that deep learning is reaching its limits. Some of the field's pioneers, among others, are taking a critical look at its current state:

  • Deep learning does not use data efficiently.

  • Deep learning does not integrate prior knowledge.

  • Deep learning often gives correct answers but without associated uncertainties.

  • Deep learning applications are hard to interpret and transfer to related problems.

  • Deep learning is excellent at learning stable input-output mappings but does not cope well with varying conditions.

  • Deep learning cannot distinguish between correlation and causation.

These are mostly concerns about the future of neural networks as a general model for artificial intelligence, but they also limit the potential of scientific applications.

However, there are many challenges in scientific data analysis and interpretation that could benefit from deep learning approaches, so I encourage you to follow the field and experiment. Through this course, you have built a solid foundation in data science and machine learning from which to pursue more advanced and current topics!

Acknowledgments#

  • Initial version: Mark Neubauer

© Copyright 2024