I have been working with SHAP values for some time and have had the opportunity to test many applications that I had never seen on the web, but which worked very nicely, so I wanted to share them with more people. This post is something of a continuation of this other one about feature groups and correlation.
I had chosen to make a simpler post first to see whether people found it useful, and since the response there was good, I wanted to share a more interesting analysis that I thought would take a little bit more…
Since this post is meant to cover more advanced material, I won't give any deeper introduction to what SHAP values are, beyond reiterating that they give local explanations, tied to specific data points. That is where their power comes from. They are also arguably the most formally rigorous approach to feature importance.
Recently, with the surge of increasingly complex models built to make efficient decisions from big data, there is a lagging but growing demand that we understand our models, that they not be complete black boxes. Explainability techniques try to fill this need, letting us peek inside complicated models such as the widely used boosting algorithms.
Nevertheless, they do this in very specific ways, so I will give a brief overview of how they are used in practice, and then gauge their possibilities and limitations regarding responsible and fair modeling.
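Since SHAP values rest on the Shapley value from cooperative game theory, a minimal sketch of the exact computation may help ground the discussion. This is pure Python, not the optimized SHAP library; the toy linear model and the zero baseline are my own illustrative choices:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a single data point.

    predict  -- model function taking a feature vector
    x        -- the data point to explain
    baseline -- reference values standing in for 'missing' features
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [f for f in features if f != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[f] if f in S or f == i else baseline[f] for f in features]
                without_i = [x[f] if f in S else baseline[f] for f in features]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear model: prediction = 2*x0 + 3*x1
model = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # → [2.0, 3.0]
```

For a linear model the attributions recover the coefficients times the feature values, and they always sum to the difference between the prediction and the baseline prediction; the SHAP library approximates this same quantity efficiently for complex models.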
Although computational thinking is growing, it is still tricky to propose or discuss theories in algorithmic terms, at least in the cognitive sciences. One reason for this difficulty may be that those advocating for computational thinking do not invest enough in laying the groundwork. One such investment is specifying, in each discussion, the primitive operations, to set the level of abstraction precisely. In this post, I give an illustrative example of the problem and a practical example from the neuroscience of time perception.
Take the example of the multiplication algorithm we learned at school. We can still sketch and solve…
Reinforcement learning has entered the spotlight recently with accomplishments such as AlphaGo, and is supposedly one of our best shots at Artificial General Intelligence — or at least more general intelligence. In this post, I will trace some of its history back to the study of operant conditioning by Skinner.
The real question is not whether machines think but whether men do — B. F. Skinner
Imagine yourself going to Europe or the United States to study themes at the frontier of your field. Moreover, you are applying your knowledge in a well-developed industry and advancing a well-paid career that is intellectually rewarding and demanding. After finishing your Ph.D., you can go back to your home country, where industry has little demand for technical expertise at your level, or you can keep learning with top-notch researchers and advancing your career.
The dilemma should be clear: the decision to turn your back on your home country weighs heavily. The country that nurtured you in this specific…
In many companies, data analysts spend a lot of time moving models around. For this, they have to make sure dependencies are all compatible and that their environments do not mix library versions. Sometimes people even copy model weights into Excel spreadsheets to use inside other software, opening a huge gap between the training process and the final deployed model.
One of the reasons these practices persist is that there is no widespread protocol for serving models practically and without too much overhead. We wanted to use our models directly from Sklearn (or similar), in some…
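As a minimal sketch of the alternative to copying weights into spreadsheets, the whole fitted object can be serialized and loaded on the serving side. The hand-rolled `LinearModel` class here is a stand-in I made up so the example stays self-contained; a fitted Sklearn estimator pickles the same way:

```python
import pickle

class LinearModel:
    """Stand-in for a trained estimator (an Sklearn model pickles the same way)."""
    def __init__(self, coef, intercept):
        self.coef = coef
        self.intercept = intercept

    def predict(self, x):
        return sum(c * v for c, v in zip(self.coef, x)) + self.intercept

# Training side: persist the whole fitted object, not just its weights.
model = LinearModel(coef=[0.5, -1.0], intercept=2.0)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Serving side: load and predict, with no spreadsheet in between.
with open("model.pkl", "rb") as f:
    served = pickle.load(f)
print(served.predict([2.0, 1.0]))  # → 2.0
```

Note that pickles only load cleanly when the serving environment has compatible library versions, which is precisely the dependency problem described above; that is what motivates a shared serving protocol.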
Recently the CISS 2 summer school on deep learning and dialog systems took place in Lowell. It was organized by the Text Machine Lab of UMass Lowell and the Neural Networks and Deep Learning Lab at MIPT (iPavlov project), Moscow. Participants traveled from around the world to attend, meet top-notch researchers, and learn the state of the art.
Besides the lectures and tutorials, the school held a competition among the participants, based on team projects carried out during the school. There was limited time to work on the projects since the lectures and tutorials took up…
Master in Neuroscience and Cognition, Data Scientist @ Nubank. AI and Complexity enthusiast.