One of the challenging elements of any deep learning solution is understanding the knowledge and decisions produced by deep neural networks. While interpreting the decisions made by a neural network has always been difficult, the issue has become a nightmare with the rise of deep learning and the proliferation of large-scale neural networks that operate on multi-dimensional datasets.
Knowing that neuron-12345 fired five times is relevant but not incredibly useful at the scale of the entire network.
Research on understanding decisions in neural networks has focused on three main areas: feature visualization, attribution, and dimensionality reduction.
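Of these three approaches, dimensionality reduction is perhaps the simplest to sketch. The idea is to project high-dimensional neuron activations down to two or three dimensions so they can be plotted and visually clustered. A minimal illustration, using random stand-in data rather than activations from a real network:

```python
# Illustrative sketch: reducing high-dimensional neuron activations to 2-D
# with PCA so they can be visualized. The activation matrix is random
# placeholder data, not output from an actual trained network.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 512))  # 200 inputs x 512 neurons

# Center the data, then project onto the top-2 principal components via SVD.
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T  # shape (200, 2)

print(projected.shape)
```

In practice, techniques such as t-SNE or UMAP are often used instead of plain PCA, but the goal is the same: compress thousands of neuron activations into a picture a human can inspect.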
Not surprisingly, the interpretation of neural networks has become one of the most active areas of research in the deep learning ecosystem.
Try to imagine a large neural network with hundreds of millions of neurons that is performing a deep learning task such as image recognition.
Typically, you would like to understand how the network arrives at specific decisions.
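The raw material for answering that question is usually the network's intermediate activations. A minimal sketch of recording them, assuming a toy two-layer network with random (untrained) weights:

```python
# Illustrative sketch: run a tiny two-layer network forward and record every
# intermediate activation, the raw material most interpretability methods
# start from. Weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(42)
w1 = rng.normal(size=(784, 128))   # input -> hidden
w2 = rng.normal(size=(128, 10))    # hidden -> output

def forward_with_trace(x):
    """Return the network output plus a dict of per-layer activations."""
    trace = {}
    h = np.maximum(x @ w1, 0.0)    # ReLU hidden layer
    trace["hidden"] = h
    out = h @ w2
    trace["output"] = out
    return out, trace

x = rng.normal(size=(1, 784))
_, trace = forward_with_trace(x)

# "Neuron X fired" is easy to compute but hard to interpret on its own:
fired = int((trace["hidden"] > 0).sum())
print(f"{fired} of 128 hidden neurons fired")
```

Even in this toy setting, the trace tells you *which* neurons fired but nothing about *why*, which is exactly the gap interpretability research tries to close.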
Most of the current research has focused on detecting which neurons in the network have been activated.

How does the new Google model for interpretability work, specifically? Well, the main innovation, in my opinion, is that it analyzes the decisions made by different components of a neural network at different levels: individual neurons, connected groups of neurons, and complete layers.

In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how neurons might work. To describe how neurons in the brain might operate, they modeled a simple neural network using electrical circuits. As computers became more advanced in the 1950s, it finally became possible to simulate a hypothetical neural network; the first step toward this was made by Nathaniel Rochester at the IBM research laboratories. Later, ADALINE was developed to recognize binary patterns, so that if it was reading streaming bits from a phone line, it could predict the next bit. MADALINE was the first neural network applied to a real-world problem, using an adaptive filter that eliminates echoes on phone lines.
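The McCulloch-Pitts model can be sketched in a few lines: a neuron outputs 1 when the weighted sum of its binary inputs reaches a fixed threshold, and 0 otherwise. With unit weights and a threshold of 2, a two-input neuron computes logical AND:

```python
# A McCulloch-Pitts style threshold neuron: binary inputs, fixed weights,
# output 1 iff the weighted sum of inputs reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: fires only when both inputs are 1.
print(mp_neuron([1, 1], [1, 1], 2))  # 1
print(mp_neuron([1, 0], [1, 1], 2))  # 0
```

Modern artificial neurons replace the hard threshold with differentiable activation functions and learned weights, but the basic structure — weighted sum followed by a nonlinearity — is unchanged.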